Dataset fields: id (string, length 10) · title (string, 7–231 chars) · abstract (string, 3–2.43k chars) · authors (string, 5–21.5k chars) · published_date (string, length 20) · link (string, 33–34 chars) · markdown (string, 133–1.92M chars)
2306.08757
InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models
While diffusion models excel at generating high-quality samples, their latent variables typically lack semantic meaning and are not suitable for representation learning. Here, we propose InfoDiffusion, an algorithm that augments diffusion models with low-dimensional latent variables that capture high-level factors of variation in the data. InfoDiffusion relies on a learning objective regularized with the mutual information between observed and hidden variables, which improves latent space quality and prevents the latents from being ignored by expressive diffusion-based decoders. Empirically, we find that InfoDiffusion learns disentangled and human-interpretable latent representations that are competitive with state-of-the-art generative and contrastive methods, while retaining the high sample quality of diffusion models. Our method enables manipulating the attributes of generated images and has the potential to assist tasks that require exploring a learned latent space to generate quality samples, e.g., generative design.
Yingheng Wang, Yair Schiff, Aaron Gokaslan, Weishen Pan, Fei Wang, Christopher De Sa, Volodymyr Kuleshov
2023-06-14T21:48:38Z
http://arxiv.org/abs/2306.08757v1
# InfoDiffusion: Representation Learning Using Information Maximizing Diffusion Models ###### Abstract While diffusion models excel at generating high-quality samples, their latent variables typically lack semantic meaning and are not suitable for representation learning. Here, we propose InfoDiffusion, an algorithm that augments diffusion models with low-dimensional latent variables that capture high-level factors of variation in the data. InfoDiffusion relies on a learning objective regularized with the mutual information between observed and hidden variables, which improves latent space quality and prevents the latents from being ignored by expressive diffusion-based decoders. Empirically, we find that InfoDiffusion learns disentangled and human-interpretable latent representations that are competitive with state-of-the-art generative and contrastive methods, while retaining the high sample quality of diffusion models. Our method enables manipulating the attributes of generated images and has the potential to assist tasks that require exploring a learned latent space to generate quality samples, e.g., generative design. ## 1 Introduction Diffusion models are a family of generative models characterized by high sample quality (Ho et al., 2020; Dhariwal and Nichol, 2021; Rombach et al., 2021). These models achieve state-of-the-art performance across a range of generative tasks, including image generation (Dhariwal and Nichol, 2021; Ramesh et al., 2022), audio synthesis (Kong et al., 2020), and molecule design (Jing et al., 2022; Xu et al., 2022). However, diffusion models rely on latent variables that typically lack semantic meaning and are not well-suited for the task of representation learning (Yang et al., 2022)--the unsupervised discovery of high-level concepts in data (e.g., topics across news articles, facial features in human photos, clusters of related molecules). This paper seeks to endow diffusion models with a semantically meaningful latent space while retaining their high sample quality. Specifically, we propose InfoDiffusion, an algorithm that augments diffusion models with low-dimensional latent variables that capture high-level factors of variation in the data. InfoDiffusion relies on variational inference to optimize the mutual information between the low-dimensional latents and the generated samples (Zhao et al., 2017); this prevents expressive diffusion-based generators from ignoring auxiliary latents and promotes their use for storing semantically meaningful and disentangled information (Chen et al., 2016). The InfoDiffusion algorithm generalizes several existing methods for representation learning (Kingma and Welling, 2013; Makhzani et al., 2015; Higgins et al., 2017). Our method is a principled probabilistic extension of DiffAE (Preechakul et al., 2022) that supports custom priors and discrete latents and improves latents via mutual information regularization. It also extends InfoVAEs (Zhao et al., 2017) to leverage more flexible diffusion-based decoders. See Figure 2 for an overview of our method. Figure 1: **InfoDiffusion produces a semantically meaningful latent space for a diffusion model. (_Top_) Smooth latent space.
(_Bottom_) Disentangled, human-interpretable factors of variation.** We evaluate InfoDiffusion on a suite of benchmark datasets and find that it learns latent representations that are competitive with state-of-the-art generative and contrastive methods (Chen et al., 2020; Chen et al., 2020; Caron et al., 2021), while retaining the high sample quality of diffusion models. Unlike many existing methods, InfoDiffusion finds disentangled representations that accurately capture distinct human-interpretable factors of variation; see Figure 1 for examples. **Contributions** In summary, we make the following contributions: (1) we propose a principled probabilistic extension of diffusion models that supports low-dimensional latents; (2) we introduce associated variational learning objectives that are regularized with a mutual information term; (3) we show that these algorithms simultaneously yield high-quality samples and latent representations, achieving competitive performance with state-of-the-art methods on both fronts. ## 2 Background A diffusion model defines a latent variable distribution \(p(\mathbf{x}_{0:T})\) over data \(\mathbf{x}_{0}\) sampled from the data distribution, as well as latents \(\mathbf{x}_{1:T}:=\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{T}\) that represent a gradual transformation of \(\mathbf{x}_{0}\) into random Gaussian noise \(\mathbf{x}_{T}\). The distribution \(p\) factorizes as a Markov chain \[p(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=0}^{T-1}p_{\theta}(\mathbf{x}_{t}\,|\,\mathbf{x}_{t+1}) \tag{1}\] that maps noise \(\mathbf{x}_{T}\) into data \(\mathbf{x}_{0}\) by "undoing" a noising (or diffusion) process denoted by \(q\). Here we use a learned denoising distribution \(p_{\theta}\), which we parameterize by a neural network with parameters \(\theta\). The noising process \(q\) starts from a clean \(\mathbf{x}_{0}\), drawn from the data distribution (denoted by \(q(\mathbf{x}_{0})\)), and defines a sequence of \(T\) variables \(\mathbf{x}_{1},...,\mathbf{x}_{T}\) via a Markov chain that factorizes as \[q(\mathbf{x}_{1:T}\,|\,\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}\,|\,\mathbf{x}_{t-1}). \tag{2}\] In this factorization, we define \(q(\mathbf{x}_{t}\,|\,\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\alpha_{t}}\mathbf{x}_{t-1},\sqrt{1-\alpha_{t}}\mathbf{I})\) as a Gaussian distribution centered around a progressively corrupted version of \(\mathbf{x}_{t-1}\) with a schedule \(\alpha_{1},\alpha_{2},...,\alpha_{T}\). As shown in Ho et al. (2020), the marginal distribution of \(q\) can be expressed as \[q(\mathbf{x}_{t}\,|\,\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0},\sqrt{1-\bar{\alpha}_{t}}\mathbf{I}),\] where \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\) is the cumulative product of the schedule parameters \(\alpha_{t}\). Normally, \(p\) is trained via maximization of an evidence lower bound (ELBO) objective derived using variational inference: \[\log p(\mathbf{x}_{0})\geq\mathbb{E}_{q}\big[\log p_{\theta}(\mathbf{x}_{0}\,|\,\mathbf{x}_{1})\big]-\text{KL}(q(\mathbf{x}_{T}\,|\,\mathbf{x}_{0})||p(\mathbf{x}_{T}))-\sum_{t=2}^{T}\mathbb{E}_{q}\big[\text{KL}(q(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t},\mathbf{x}_{0})||p_{\theta}(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t}))\big]. \tag{3}\] ## 3 Diffusion Models With Auxiliary Latent Variables Our approach augments a diffusion model with a low-dimensional auxiliary latent variable \(\mathbf{z}\) distributed according to a prior \(p(\mathbf{z})\); the denoising distribution is conditioned on \(\mathbf{z}\), so that the joint model factorizes as \[p(\mathbf{x}_{0:T},\mathbf{z})=p(\mathbf{z})\,p(\mathbf{x}_{T})\prod_{t=0}^{T-1}p_{\theta}(\mathbf{x}_{t}\,|\,\mathbf{x}_{t+1},\mathbf{z}).\] The \(\mathbf{z}\) is independent of the forward process because \(\mathbf{z}\) is meant to be a latent representation of the input, not a control variable of diffusion. ### Auxiliary Latent Variables and Semantic Prior The goal of the auxiliary latents \(\mathbf{z}\) is to encode a high-level representation of \(\mathbf{x}_{0}\). Unlike \(\mathbf{x}_{1:T}\), the \(\mathbf{z}\) are not constrained to have a particular dimension and can represent a low-dimensional vector of latent factors of variation. They can be continuous, as well as discrete. The prior \(p(\mathbf{z})\) ensures that we have a principled probabilistic model and enables the unconditional sampling of \(\mathbf{x}_{0}\). The prior can also be used to encode domain knowledge about \(\mathbf{z}\)--e.g., if we know that the dataset contains \(K\) distinct classes, we may set \(p(\mathbf{z})\) to be a mixture of \(K\) components. Alternatively, we may set \(p(\mathbf{z})\) to be a simple distribution from which we can easily sample (e.g., a Gaussian). ### Auxiliary-Variable Diffusion Decoder The decoder \(p_{\theta}(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t},\mathbf{z})\) is conditioned on the auxiliary latents \(\mathbf{z}\). In a trained model, the \(\mathbf{z}\) are responsible for high-level concepts (e.g., the age or skin color of a person), while the sequence of \(\mathbf{x}_{t}\) progressively adds lower-level details (e.g., hair texture). Following previous work (Ho et al., 2020), we define the decoder as a Gaussian whose mean is \[\mu_{\theta}(\mathbf{x}_{t},t,\mathbf{z})=\frac{1}{\sqrt{\alpha_{t}}}\bigg(\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{z})\bigg),\] with a noise prediction network \(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{z})\) parameterized by a U-Net (Ronneberger et al., 2015). We condition this network on \(\mathbf{z}\) using adaptive group normalization layers (AGN), inspired by Dhariwal and Nichol (2021), \[\text{AGN}(\mathbf{h},\mathbf{z})=(1+\mathbf{s}(\mathbf{z}))\cdot\text{GroupNorm}(\mathbf{h})+\mathbf{b}(\mathbf{z}).\] Specifically, we implement two successive AGN layers for the auxiliary variable and time embeddings, respectively, to fuse them into each residual block. ## 4 Learning and Inference Algorithms For Auxiliary-Variable Diffusion Models Next, we introduce learning algorithms for auxiliary-variable models based on variational inference. We refer to the resulting method as variational auxiliary-variable diffusion.
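To make the conditioning mechanism of Section 3.2 concrete, below is a minimal PyTorch-style sketch of an adaptive group normalization block that fuses the auxiliary latent \(\mathbf{z}\) and the time embedding into a feature map. The module names, dimensions, and the two-stage ordering are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AdaGroupNorm(nn.Module):
    """AGN(h, z) = (1 + s(z)) * GroupNorm(h) + b(z), with s and b produced by a linear map of z."""
    def __init__(self, num_channels: int, cond_dim: int, num_groups: int = 32):
        super().__init__()
        self.norm = nn.GroupNorm(num_groups, num_channels, affine=False)
        # A single linear layer outputs both the scale offset s(z) and the shift b(z).
        self.to_scale_shift = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, h: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        s, b = self.to_scale_shift(cond).chunk(2, dim=-1)  # (B, C) each
        s, b = s[:, :, None, None], b[:, :, None, None]    # broadcast over spatial dims
        return (1 + s) * self.norm(h) + b

class ConditionedResBlockNorm(nn.Module):
    """Two successive AGN layers: one driven by the latent z, one by the time embedding."""
    def __init__(self, num_channels: int, z_dim: int, t_dim: int):
        super().__init__()
        self.agn_z = AdaGroupNorm(num_channels, z_dim)
        self.agn_t = AdaGroupNorm(num_channels, t_dim)

    def forward(self, h, z, t_emb):
        return self.agn_t(self.agn_z(h, z), t_emb)

# Example: condition a 64-channel feature map on a 32-dim latent and a 128-dim time embedding.
block = ConditionedResBlockNorm(num_channels=64, z_dim=32, t_dim=128)
out = block(torch.randn(4, 64, 16, 16), torch.randn(4, 32), torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 64, 16, 16])
```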
### Variational Inference for Auxiliary-Variable Models We apply variational inference twice to form a variational lower bound on the marginal log-likelihood of the data (see the full derivation in Appendix A): \[\begin{split}\log p(\mathbf{x}_{0})&=\log\mathbb{E}_{q_{\mathbf{z}}}\Bigg[\frac{p(\mathbf{x}_{0},\mathbf{z})}{q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})}\Bigg]\\ &\geq\mathbb{E}_{q_{\mathbf{z}}}\Bigg[\log\mathbb{E}_{q_{\mathbf{x}}}\Bigg[\frac{p(\mathbf{x}_{0:T},\mathbf{z})}{q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})q(\mathbf{x}_{1:T}\,|\,\mathbf{x}_{0})}\Bigg]\Bigg]\\ &\geq\mathbb{E}_{q_{\mathbf{z}}}\Bigg[\mathbb{E}_{q_{\mathbf{x}}}\Bigg[\log\frac{p(\mathbf{x}_{0:T},\mathbf{z})}{q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})q(\mathbf{x}_{1:T}\,|\,\mathbf{x}_{0})}\Bigg]\Bigg]\\ &=\mathbb{E}_{q_{\mathbf{x}_{1}}}[\mathbb{E}_{q_{\mathbf{z}}}[\log p_{\theta}(\mathbf{x}_{0}\,|\,\mathbf{x}_{1},\mathbf{z})]]-\text{KL}(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})||p(\mathbf{z}))\\ &\quad-\text{KL}(q(\mathbf{x}_{T}\,|\,\mathbf{x}_{0})||p(\mathbf{x}_{T}))-\sum_{t=2}^{T}\mathbb{E}_{q_{\mathbf{x}_{t}}}[\mathbb{E}_{q_{\mathbf{z}}}[\text{KL}(q_{t}||p_{t})]]\\ &:=\mathcal{L}_{D}(\mathbf{x}_{0})\end{split} \tag{4}\] where \(\mathcal{L}_{D}(\mathbf{x}_{0})\) denotes the ELBO for a variational auxiliary-variable diffusion model, \(q_{t},p_{t}\) denote the distributions \(q(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t},\mathbf{x}_{0})\) and \(p_{\theta}(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t},\mathbf{z})\), respectively, \(q_{\mathbf{z}}:=q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})\) is an approximate variational posterior, \(q_{\mathbf{x}}:=q(\mathbf{x}_{1:T}\,|\,\mathbf{x}_{0})\), and \(q_{\mathbf{x}_{t}}:=q(\mathbf{x}_{t}\,|\,\mathbf{x}_{0})\). We optimize the above objective end-to-end using gradient descent by using the reparameterization trick to backpropagate through samples from \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})\) (Kingma and Welling, 2013). We use a neural network with parameters \(\phi\) to encode the parameters of the approximate posterior distribution of \(\mathbf{z}\). ### Inferring Latent Representations Once the model is trained, we rely on the approximate posterior \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})\) to infer \(\mathbf{z}\). In our experiments, we parameterize \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})\) as a U-Net encoder (see Appendix E for more details). Additionally, we may encode \(\mathbf{x}_{0}\) into a latent variable \(\mathbf{x}_{T}\), which contains information not captured by the auxiliary variable \(\mathbf{z}\)--usually details such as texture and high-frequency content. Our method iteratively runs the diffusion process using the learned noise model \(\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{z})\): \[\mathbf{x}_{t+1}=\sqrt{\bar{\alpha}_{t+1}}\hat{\mathbf{x}}_{0}(\mathbf{x}_{t},t,\mathbf{z})+\sqrt{1-\bar{\alpha}_{t+1}}\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{z}),\] where \(\mathbf{z}\) is a latent code and \(\hat{\mathbf{x}}_{0}(\mathbf{x}_{t},t,\mathbf{z})=\frac{1}{\sqrt{\bar{\alpha}_{t}}}\big(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\theta}(\mathbf{x}_{t},t,\mathbf{z})\big)\) is an estimate of \(\mathbf{x}_{0}\) from \(\mathbf{x}_{t}\). ### Discrete Auxiliary-Variable Diffusion In many settings, latent representations are inherently discrete--e.g., the presence of certain objects in a scene, the choice of topic in a text, etc.
Variational auxiliary-variable diffusion supports such discrete variables via relaxation methods for deep latent variable models (Jang et al., 2016). Specifically, at training time, we replace \(\mathbf{z}\) with a continuous relaxation \(\mathbf{z}_{\tau}\) sampled from \(q\) using the Gumbel-Softmax technique with a temperature \(\tau\). Higher temperatures \(\tau\) yield continuous approximations \(\mathbf{z}_{\tau}\) of \(\mathbf{z}\); as \(\tau\!\to\!0\), \(\mathbf{z}_{\tau}\) approaches a discrete \(\mathbf{z}\). We train using a categorical distribution for the prior \(p(\mathbf{z})\), and we estimate gradients using the reparameterization trick. We anneal \(\tau\) over the course of training to keep gradient variance in check. At inference time, we set \(\tau\!=\!0\) to obtain fully discrete latents. See Appendix G for more details. ### Sampling Methods At inference time, our model supports multiple sampling procedures. First, to generate \(\mathbf{x}_{0}\) unconditionally, we can sample from the original prior \(p(\mathbf{z})\), as in a VAE (see Appendix D.1 for details on generating high-quality samples with \(\mathbf{z}\!\sim\!p(\mathbf{z})\)). Alternatively, we can utilize a learned prior to potentially improve sample quality (see Appendix D.2 for details on implementing the learned prior used in Section 6). This learned prior is similar to the approach described in DiffAE (Preechakul et al., 2022), where a latent diffusion model is required to enable sampling. ## 5 InfoDiffusion: Regularizing Semantic Latents By Maximizing Mutual Information Diffusion models with auxiliary latents face two risks. First, an expressive decoder \(p_{\theta}(\mathbf{x}_{t-1}\,|\,\mathbf{x}_{t},\mathbf{z})\) may choose to ignore low-dimensional latents \(\mathbf{z}\) and generate \(\mathbf{x}_{t-1}\) unconditionally (Chen et al., 2016). Second, the approximate posterior \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})\) may fail to match the prior \(p(\mathbf{z})\) because the prior regularization term is too weak relative to the reconstruction term (Zhao et al., 2017). This degrades the quality of ancestral sampling as well as that of latent representations. ### Regularizing Auxiliary-Variable Diffusion We propose dealing with the issues of ignored latents and degenerate posteriors by using two regularization terms--a mutual information term and a prior regularizer. We refer to the resulting algorithm as InfoDiffusion. Mutual Information RegularizationTo prevent the diffusion model from ignoring the latents \(\mathbf{z}\), we augment the learning objective from Equation (4) with a mutual information term (Chen et al., 2016; Zhao et al., 2017) between \(\mathbf{x}_{0}\) and \(\mathbf{z}\) under \(q_{\phi}(\mathbf{x}_{0},\mathbf{z})\), the joint distribution over observed data \(\mathbf{x}_{0}\) and latent variables \(\mathbf{z}\). Formally, we define the mutual information regularizer as \[\text{MI}_{\mathbf{x}_{0},\mathbf{z}}\!=\!\mathbb{E}_{q_{\phi}(\mathbf{x}_{0}, \mathbf{z})}\!\left[\log\!\frac{q_{\phi}(\mathbf{x}_{0},\mathbf{z})}{q(\mathbf{ x}_{0})q_{\phi}(\mathbf{z})}\right]\] where \(q_{\phi}(\mathbf{z})\) is the marginal approximate posterior distribution--defined as the marginal of the product \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})q(\mathbf{x}_{0})\). Intuitively, maximizing mutual information encourages the model to generate \(\mathbf{x}_{0}\) from which we can predict \(\mathbf{z}\). 
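For the discrete-latent variant described in Section 4.3 above, the continuous relaxation can be implemented with the Gumbel-Softmax estimator available in PyTorch. The sketch below is illustrative only; the latent layout, temperature schedule, and dimensions are assumptions rather than the paper's settings.

```python
import math
import torch
import torch.nn.functional as F

def sample_relaxed_latent(logits: torch.Tensor, tau: float, hard: bool = False) -> torch.Tensor:
    """Sample a relaxed categorical latent z_tau via the Gumbel-Softmax trick.

    logits: (batch, num_vars, num_categories) unnormalized posterior parameters.
    tau:    relaxation temperature; smaller values approach discrete one-hot samples.
    hard:   if True, return straight-through one-hot samples (used near tau -> 0 / at inference).
    """
    return F.gumbel_softmax(logits, tau=tau, hard=hard, dim=-1)

def anneal_tau(step: int, tau_start: float = 1.0, tau_end: float = 0.1, rate: float = 1e-4) -> float:
    # Exponential annealing keeps gradient variance in check early in training.
    return max(tau_end, tau_start * math.exp(-rate * step))

# Training-time usage: relaxed samples feed the decoder and the regularizers.
logits = torch.randn(8, 4, 10, requires_grad=True)  # 4 categorical latents, 10 classes each
z_tau = sample_relaxed_latent(logits, tau=anneal_tau(step=1000))
# Inference-time usage: (near-)discrete one-hot latents.
z_hard = sample_relaxed_latent(logits, tau=0.1, hard=True)
```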
**Prior Regularization** To prevent the model from learning a degenerate approximate posterior, we regularize the encoded samples \(\mathbf{z}\) to look like the prior \(p\). Formally, we define the prior regularizer as \[\mathcal{R}=\text{D}(q_{\phi}(\mathbf{z})||p(\mathbf{z})),\] where D is any strict divergence. ### A Tractable Objective for InfoDiffusion We train InfoDiffusion by maximizing a regularized ELBO objective of the form \[\mathbb{E}_{q(\mathbf{x}_{0})}[\mathcal{L}_{D}(\mathbf{x}_{0})]+\zeta\cdot\text{MI}_{\mathbf{x}_{0},\mathbf{z}}-\beta\cdot\mathcal{R}, \tag{5}\] where \(\mathcal{L}_{D}(\mathbf{x}_{0})\) is from Equation (4), and \(\zeta,\beta>0\) are scalars controlling the strength of the regularizers. However, both the mutual information and the prior regularizer are intractable. Following Zhao et al. (2017), we rewrite the above learning objective into an equivalent tractable form, as described in Proposition 5.1 (see Appendix A for the full derivation). Defining \(\lambda:=\beta-1\), we have **Proposition 5.1**.: _The regularized InfoDiffusion objective, Equation (5), can be rewritten as_ \[\begin{split}\mathbb{E}_{q(\mathbf{x}_{0})}\Big[&\mathbb{E}_{q_{\mathbf{x}}}[\mathbb{E}_{q_{\mathbf{z}}}[\log p_{\theta}(\mathbf{x}_{0}\,|\,\mathbf{x}_{1},\mathbf{z})]]-\text{KL}(q(\mathbf{x}_{T}\,|\,\mathbf{x}_{0})||p(\mathbf{x}_{T}))-\sum_{t=2}^{T}\mathbb{E}_{q_{\mathbf{x}_{t}}}[\mathbb{E}_{q_{\mathbf{z}}}[\text{KL}(q_{t}||p_{t})]]\\ &-(1-\zeta)\cdot\text{KL}(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0})||p(\mathbf{z}))\Big]-(\zeta+\lambda)\cdot\text{D}(q_{\phi}(\mathbf{z})||p(\mathbf{z})).\end{split} \tag{6}\] All of the resulting terms are tractable except the divergence \(\text{D}(q_{\phi}(\mathbf{z})||p(\mathbf{z}))\) between the aggregate posterior and the prior. In practice, we choose D to be the maximum mean discrepancy (MMD), \[\text{MMD}(q_{\phi}(\mathbf{z})||p(\mathbf{z}))=\mathbb{E}_{p(\mathbf{z}),p(\mathbf{z}^{\prime})}[k(\mathbf{z},\mathbf{z}^{\prime})]-2\,\mathbb{E}_{q_{\phi}(\mathbf{z}),p(\mathbf{z}^{\prime})}[k(\mathbf{z},\mathbf{z}^{\prime})]+\mathbb{E}_{q_{\phi}(\mathbf{z}),q_{\phi}(\mathbf{z}^{\prime})}[k(\mathbf{z},\mathbf{z}^{\prime})],\] where \(k\) is a positive definite kernel. In order to optimize \(\text{MMD}(q_{\phi}(\mathbf{z})||p(\mathbf{z}))\), we use sample-based optimization methods for implicit models. Specifically, we estimate expectations over \(q_{\phi}(\mathbf{z})\) by taking empirical averages over samples \(\{\mathbf{x}_{0}^{(i)}\}_{i=1}^{N}\sim q(\mathbf{x}_{0})\), each encoded via \(q_{\phi}(\mathbf{z}\,|\,\mathbf{x}_{0}^{(i)})\). ### Comparing InfoDiffusion to Existing Models The InfoDiffusion algorithm generalizes several existing methods in the literature. When the decoder performs one step of diffusion (\(T=1\)), we recover a model that is equivalent to the InfoVAE model (Zhao et al., 2017), up to choices of the decoder architecture. When we additionally choose \(\lambda=0\), we recover the \(\beta\)-VAE model (Higgins et al., 2017). When \(T=1\) and D is the Jensen-Shannon divergence, we recover adversarial auto-encoders (AAEs) (Makhzani et al., 2015). Our InfoDiffusion method can be seen as an extension of \(\beta\)-VAE, InfoVAE, and AAE to diffusion decoders, similar to how denoising diffusion probabilistic models (DDPM; Ho et al. (2020)) extend VAEs. Finally, when \(\zeta=\lambda=0\), we recover the DiffAE model (Preechakul et al., 2022). We further discuss how our method relates to these prior works in Section 7.
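Returning briefly to the prior regularizer above, here is a minimal sketch of the sample-based kernel MMD estimator: a batch of encoded latents is compared against a batch of prior samples. The RBF kernel and its bandwidth are illustrative assumptions.

```python
import torch

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    # k(z, z') = exp(-||z - z'||^2 / (2 * bandwidth^2)), computed for all pairs.
    sq_dists = torch.cdist(a, b, p=2.0) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd(z_q: torch.Tensor, z_p: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased empirical estimate of MMD(q_phi(z) || p(z)) from two sample batches."""
    k_pp = rbf_kernel(z_p, z_p, bandwidth).mean()
    k_qp = rbf_kernel(z_q, z_p, bandwidth).mean()
    k_qq = rbf_kernel(z_q, z_q, bandwidth).mean()
    return k_pp - 2.0 * k_qp + k_qq

# Usage: z_q comes from encoding a minibatch x_0 ~ q(x_0); z_p is drawn from the prior p(z).
z_q = torch.randn(128, 32) * 1.5 + 0.3   # stand-in for encoded q_phi(z) samples
z_p = torch.randn(128, 32)               # standard Gaussian prior samples
print(mmd(z_q, z_p).item())
```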
In Table 1, we detail this comparison to special cases. ## 6 Experiments In this section, we evaluate our proposed method by comparing it to several baselines, using metrics that span generation quality, utility of latent space representations, and disentanglement. The baselines we compare against are: a vanilla auto-encoder (AE) (LeCun, 1987), a VAE (Kingma and Welling, 2013; Higgins et al., 2017), an InfoVAE (Zhao et al., 2017), and a DiffAE (Preechakul et al., 2022). We measure performance on the following datasets: FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009), FFHQ (Karras et al., 2019), CelebA (Liu et al., 2015), and 3DShapes (Burgess and Kim, 2018). See Appendix C for complete hyperparameter and computational resource details, by dataset. As discussed in Section 4.4, for InfoDiffusion, we experiment with generating images using either \(\mathbf{z}\) drawn from the prior or drawn from a learned latent distribution (denoted as "w/Learned Latent" in Table 2 and Table 3, see Appendix D.2 for details). ### Exploring Latent Representations We start by exploring three qualitative desirable features of learned representations: (1) their ability to capture high level semantic content, (2) smooth interpolation in latent space translating to smooth changes in generated output, and (3) their utility in downstream tasks. Auxiliary Variables Capture Semantic InformationIn Figure 3, we demonstrate that our model is able to encode high-level semantic information in the auxiliary variable. For a fixed \(\mathbf{z}\) and varying \(\mathbf{x}_{T}\), we find that decoded images change in their low-level features, e.g., background, hair style. Latent Space InterpolationWe begin with two images \(\mathbf{x}_{0}^{(i)}\),\(\mathbf{x}_{0}^{(j)}\) and retrieve their corresponding noise and auxiliary latent encodings \((\mathbf{z}^{(i)},\mathbf{x}_{T}^{(i)}),(\mathbf{z}^{(j)},\mathbf{x}_{T}^{(j)})\). Then, for 10 fixed steps \(l\!\in\![0,\!1]\), we generate images from the latent representations \((\mathbf{z}^{l},\mathbf{x}_{T}^{l})\) where \(\mathbf{z}^{l}=\cos(l\pi/2)\mathbf{z}^{(i)}+\sin(l\pi/2)\mathbf{z}^{(j)}\) and \(\mathbf{x}_{T}^{l}=\sin((1-l)\psi)\mathbf{x}_{T}^{(i)}+\sin(l\psi)\mathbf{x}_{T }^{(j)}\) are spherical interpolations between the auxiliary latent representation and noise tensors of the two images, with \(\pi\) denoting the angle between \(\mathbf{z}^{(i)}\) and \(\mathbf{z}^{(j)}\) and \(\psi\) the angle between \(\mathbf{x}_{T}^{(i)}\) and \(\mathbf{x}_{T}^{(j)}\). In Figure 5, we see that our model is able to combine the smooth interpolation of variational methods with the high sample quality of diffusion models. Latent Variables Discover and Predict Class LabelsIn addition to the qualitative inspection of our latent space, \begin{table} \begin{tabular}{l c c c c c} \hline \hline & Semantic & Discrete & Custom & Max & HQ \\ & latents & latents & prior & MI & samples \\ \hline AE & ✗ & ✗ & ✗ & ✗ & ✗ \\ VAE & ✔ & ✔ & ✔ & ✔ & ✗ \\ \(\beta\)-VAE & ✔ & ✔ & ✔ & ✔ & ✗ \\ AAE & ✔ & ✗ & ✗ & ✔ & ✔ \\ InfoVAE & ✔ & ✔ & ✔ & ✔ & ✔ \\ \hline DDPM & ✗ & ✗ & ✗ & ✗ & ✗ \\ DiffAE & ✔ & ✗ & ✗ & ✗ & ✔ \\ InfoDiff & ✔ & ✔ & ✔ & ✔ & ✔ \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of InfoDiffusion model to other auto-encoder (_top_) and diffusion (_bottom_) frameworks in terms of enabling semantic latents, discrete latents, custom priors, mutual information maximization (Max MI), and high-quality sample generation (HQ samples). 
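As a companion to the latent space interpolation experiment described above, the sketch below implements a standard spherical interpolation between two noise tensors and, for simplicity, applies the same scheme to the auxiliary latents; treat it as an illustrative recipe rather than the paper's exact interpolation formulas.

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, l: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between tensors a and b at fraction l in [0, 1]."""
    a_flat, b_flat = a.ravel(), b.ravel()
    cos_theta = np.dot(a_flat, b_flat) / (np.linalg.norm(a_flat) * np.linalg.norm(b_flat) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # angle between the two tensors
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - l) * a + l * b
    return (np.sin((1 - l) * theta) * a + np.sin(l * theta) * b) / np.sin(theta)

# Interpolate both the noise map x_T and the auxiliary latent z over 10 fixed steps.
rng = np.random.default_rng(0)
x_T_i, x_T_j = rng.standard_normal((3, 64, 64)), rng.standard_normal((3, 64, 64))
z_i, z_j = rng.standard_normal(32), rng.standard_normal(32)
for l in np.linspace(0.0, 1.0, 10):
    x_T_l, z_l = slerp(x_T_i, x_T_j, l), slerp(z_i, z_j, l)
    # x_T_l and z_l would then be decoded into an image by the trained model.
```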
Figure 3: \(\mathbf{z}\) captures high-level semantic detail. Varying \(\mathbf{x}_{T}\sim\mathcal{N}(0,\!1)\) (across the columns in each row) changes lower level detail in the image. Red box indicates original image. we run downstream classification tasks on \(\mathbf{z}\) to measure its utility, which we report in Table 2 and Table 3 as "Latent Qual." Specifically, we train a logistic regression classifier on the auxiliary latent encodings of images to predict labels and report the accuracy/AUROC (or average accuracy/AUROC if multiple annotations are predicted) on a test set. We split the data into 80% training and 20% test, fit the classifier on the training data, and evaluate on the test set. We repeat this 5-fold and report mean metrics \(\pm\) one standard deviation. We also compute FID based on five random sample sets of 10,000 images to obtain mean and standard deviation. Across datasets, we consistently see that the compact latent representations from our models are most informative of labels. In addition to the utility of the latent space, we generate high-quality images. ### Disentanglement #### 6.2.1 Finding disentangled dimensions We find that maximizing mutual information in the InfoDiffusion objective yields disentangled components of our latent representations. For example, in Figure 1, we see several examples of disentangled factors. In Figure 4, we demonstrate this in more detail, traversing a specific dimension of \(\mathbf{z}\) that controls smiling from values of -1.5 to 1.5. #### 6.2.2 Disentanglement Metrics DCI ScoreFor the 3DShapes dataset, we use the Disentanglement term of the DCI scores proposed in Eastwood and Williams (2018). This disentanglement metric is calculated as follows: for each attribute, a model is trained to predict it using the auxiliary latent vector \(\mathbf{z}\). The model must also provide the importance of each dimension of \(\mathbf{z}\) in predicting each attribute. Relative importance weights are converted to probabilities that dimension \(i\) of \(\mathbf{z}\) is important for predicting a given label. The disentanglement score for each dimension of \(\mathbf{z}\) is calculated as 1 minus the entropy of the relative importance probabilities. If a dimension is important for predicting only a single attribute, the score will be 1. If a dimension is equally important for predicting all attributes, the disentanglement score will be 0. The disentanglement scores are then averaged, with weights determined by the relative importance Figure 4: Finding disentangled dimensions in InfoDiffusion’s auxiliary latent variable \(\mathbf{z}\). Images are produced through a linear traversal along a particular dimension, spanning values from -1.5 to 1.5. Figure 5: Latent space interpolation for relevant baselines (a-c) and InfoDiffusion (d). InfoDiffusion has a smooth latent space and maintains high image generation quality. Reconstructions of the original images two different images are on the left and right ends of each row and are marked by red boxes. of each dimension across \(\mathbf{z}\), to get the DCI disentanglement score. In Table 3, we see that for the 3DShapes dataset, InfoDiffusion attains the highest DCI disentanglement scores. TadFor the CelebA dataset, we quantify disentanglement using TAD (Yeats et al., 2022), which is a disentanglement metric specifically proposed for this dataset that accounts for the presence of correlated and imbalanced attributes. 
First, we quantify attribute correlation by calculating the proportion of entropy reduction of each attribute given any other single attribute. Any attribute with an entropy reduction greater than 0.2 is removed. For each remaining attribute, we calculate AUROC score of each dimension of the auxiliary latent vector \(\mathbf{z}\) in detecting that attribute. If an attribute can be detected by at least one dimension of \(\mathbf{z}\), i.e., AUROC \(\geq 0.75\), it is considered to be "captured." The TAD score is the summation of the differences of the AUROC between the two most predictive latent dimensions for all captured attributes. In Table 3, we again see that InfoDiffusion has the best disentanglement performance with more captured attributes and higher TAD scores. We additionally note that the InfoDiffusion model balances disentanglement with high-quality generation and good latent space quality. For calculating DCI on 3DShapes, we follow previous work (Locatello et al., 2019) and treat the attributes as discrete variables, using a gradient boosting classifier implemented by scikit-learn (Pedregosa et al., 2011) as our predictor. For disentanglement metric calculation, we split the data into 80% training and 20% test, fit the classifier on the training data, and calculate AUROC on the test data. We repeat this for 5-folds and report mean metrics \(\pm\) one standard deviation. ### Discrete Latent Priors We demonstrate the flexibility of our model by training with a relaxed discrete prior. We train InfoDiffusion with a Relaxed Bernoulli prior (Jang et al., 2016) on the CelebA dataset and find that latent space quality is comparable to other models, with average AUROC of 0.73 (details in Appendix G). ### Comparison to Contrastive Methods We compare the quality of our learned representations to those from established contrastive learning methods, including SimCLR (Chen et al., 2020), MOCO-v2 (Chen et al., 2020), and DINO (Caron et al., 2021). In Table 4, we report average AUROC for classifiers trained on \(\mathbf{z}\) to predict CelebA annotations and the TAD scores for disentanglement1. Our findings indicate that our latent representations are comparable, and in some instances superior, to these robust baselines. Our approach also has the added benefit of being a generative model. We also note that our model uses a much smaller capacity latent variable compared to these contrastive method baselines. Footnote 1: We excluded the “Number of attributes captured” metric for this comparison, as the pre-trained contrastive method baselines use larger latent dimension, which artificially inflates the value for this metric. When comparing to methods with similar latent dimension, InfoDiffusion is able to significantly outperform baseline models. In Table 5, we compare to a fine-tuned, pre-trained encoder of SIMCLR with an additional dense layer that projects to 32 dimensions. We also introduce an another baseline, PDAE (Zhang et al., 2022), which builds an auto-encoder based on pre-trained diffusion models. Our method outperforms these alternatives on both the disentanglement and latent quality metrics. ### Exploring InfoDiffusion Modeling Choices Regularization CoefficientsAn evaluation of various \(\zeta\) and \(\lambda\) parameters for InfoDiffusion is presented in Appendix H. We find that prioritizing information maximization improves both generation quality and latent space coherence, with better performance achieved by maintaining a constant \(\lambda\) and increasing \(\zeta\). 
than \(1\) results in instability in the KL divergence term; thus, we cap \(\zeta=1\) for optimal performance. For \(\zeta=1\), we find that our model is robust to the choice of \(\lambda\); however, for the natural image datasets, the optimal setting is \(\lambda=0.1\). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{FashionMNIST} & \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{FFHQ} \\ \hline & Latent Qual. \(\uparrow\) & FID \(\downarrow\) & Latent Qual. \(\uparrow\) & FID \(\downarrow\) & Latent Qual. \(\uparrow\) & FID \(\downarrow\) \\ \hline AE & \(0.819\pm 0.003\) & \(62.9\pm 2.1\) & \(0.336\pm 0.005\) & \(169.4\pm 2.4\) & \(0.615\pm 0.002\) & \(92.3\pm 2.7\) \\ VAE & \(0.796\pm 0.002\) & \(63.4\pm 1.6\) & \(0.342\pm 0.004\) & \(177.2\pm 3.2\) & \(\mathbf{0.622\pm 0.002}\) & \(95.4\pm 2.4\) \\ \(\beta\)-VAE & \(0.779\pm 0.004\) & \(66.9\pm 1.8\) & \(0.253\pm 0.003\) & \(183.3\pm 3.1\) & \(0.588\pm 0.002\) & \(99.7\pm 3.4\) \\ InfoVAE & \(0.807\pm 0.003\) & \(55.0\pm 1.7\) & \(0.357\pm 0.005\) & \(160.7\pm 2.5\) & \(0.613\pm 0.002\) & \(86.9\pm 2.2\) \\ DiffAE & \(0.835\pm 0.002\) & \(8.2\pm 0.3\) & \(0.395\pm 0.006\) & \(32.1\pm 1.1\) & \(0.608\pm 0.001\) & \(31.6\pm 1.2\) \\ \hline InfoDiffusion (\(\lambda=0.1,\zeta=1\)) & \(\mathbf{0.839\pm 0.003}\) & \(8.5\pm 0.3\) & \(\mathbf{0.412\pm 0.003}\) & \(31.7\pm 1.2\) & \(0.609\pm 0.002\) & \(31.2\pm 1.6\) \\ w/ Learned Latent & & & & \(\mathbf{31.5\pm 1.8}\) & & \(\mathbf{30.9\pm 2.5}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Latent quality, as measured by classification accuracies for logistic regression classifiers trained on the auxiliary latent vector \(\mathbf{z}\), and FID. We report mean \(\pm\) one standard deviation. Darkly shaded cells indicate the best while lightly shaded cells indicate the second best. See Table 10 for the performance of varying hyperparameters. ## 7 Related Work Our work is most closely related to DiffAE (Preechakul et al., 2022), a recently proposed diffusion model that produces semantically meaningful latents. The relationship between our method and DiffAE is analogous to the relationship between InfoVAE (Zhao et al., 2017) and a regular non-probabilistic auto-encoder. Our method augments DiffAE with: (1) a principled probabilistic auxiliary-variable model family and (2) new learning objectives based on variational mutual information maximization. This yields a number of advantages. First, our method allows users to specify domain knowledge through a prior and supports the use of discrete variables. Additionally, our improved objective maximizes mutual information, which empirically yields more useful and disentangled latents. Table 6 illustrates how our approach relates to previous work on both diffusion models and mutual information regularization by showing an analogy between progress in the space of auto-encoders and similar progress for diffusion models. ## 8 Conclusion In this work, we proposed InfoDiffusion, a new learning algorithm based on a diffusion model that uses an auxiliary variable to encode semantically meaningful information. We derive InfoDiffusion from a principled probabilistic extension of diffusion models that relies on variational inference to discover low-dimensional latents.
Augmenting this variational auxiliary-variable diffusion framework with mutual information regularization enables InfoDiffusion to simultaneously achieve high-quality sample generation _and_ informative latent representations, which we use to control generation and improve downstream prediction. We evaluate InfoDiffusion on several image datasets and against state-of-the-art generative and representation learning baselines and show that it consistently produces semantically rich and more disentangled latent representations and high-quality images. We expect InfoDiffusion will be useful in generative design and other applications that require both exploring a latent space and quality generation. ## Acknowledgements This work was supported by Tata Consulting Services, the Presidential Life Science Fellowship, the Hal & Inge Marcus PhD Fellowship, and NSF CAREER grants (#1750326, #2046760, and #2145577).
2305.07134
Euclidean minimum spanning trees with location dependent and power weighted edges
Consider~\(n\) nodes~\(\{X_i\}_{1 \leq i \leq n}\) independently distributed in the unit square~\(S,\) each according to a distribution~\(f\) and let~\(K_n\) be the complete graph formed by joining each pair of nodes by a straight line segment. For every edge~\(e\) in~\(K_n\) we associate a weight~\(w(e)\) that may depend on the \emph{individual locations} of the endvertices of~\(e.\) Denoting~\(MST_n\) to be the minimum weight of a spanning tree of~\(K_n\) and assuming an equivalence condition on the weight function~\(w(.),\) we prove that~\(MST_n\) appropriately scaled and centred converges to zero a.s.\ and in mean as~\(n \rightarrow \infty.\) We also obtain upper and lower bound deviation estimates for~\(MST_n.\)
Ghurumuruhan Ganesan
2023-05-11T20:56:18Z
http://arxiv.org/abs/2305.07134v1
# Euclidean minimum spanning trees with location dependent and power weighted edges ###### Abstract Consider \(n\) nodes \(\{X_{i}\}_{1\leq i\leq n}\) independently distributed in the unit square \(S,\) each according to a distribution \(f\) and let \(K_{n}\) be the complete graph formed by joining each pair of nodes by a straight line segment. For every edge \(e\) in \(K_{n}\) we associate a weight \(w(e)\) that may depend on the _individual locations_ of the endvertices of \(e.\) Denoting \(MST_{n}\) to be the minimum weight of a spanning tree of \(K_{n}\) and assuming an equivalence condition on the weight function \(w(.),\) we prove that \(MST_{n}\) appropriately scaled and centred converges to zero a.s. and in mean as \(n\rightarrow\infty.\) We also obtain upper and lower bound deviation estimates for \(MST_{n}.\) **Key words:** Minimum spanning tree, location dependent edge weights, edge weight exponent. **AMS 2000 Subject Classification:** Primary: 60J10, 60K35; Secondary: 60C05, 62E10, 90B15, 91D30. ## 1 Introduction The study of the minimum weight spanning trees of a graph is of great practical importance and many algorithms have been proposed over the years for various kinds of graphs. For example, the well-known Kruskal's algorithm (Cormen et al (2009)) iteratively adds edges to a sequence of increasing subtree of the original graph until a spanning tree is obtained with the constraint that no cycle is created in any of the iterations. The spanning tree with minimum weight so obtained is usually called the _Minimum Spanning Tree_ (MST). We are interested in MSTs of Euclidean random graphs whose nodes are randomly distributed in the unit square and whose edges are assigned weights related to the Euclidean length. When the weight of an edge equals its Euclidean length raised to a positive power, we refer to the resulting MSTs as power weighted Euclidean MSTs. One of the main objects of interest in the study of power weighted Euclidean MST is its total weight: How does it scale with the number of nodes and the power weight exponent and what are its convergence properties? Analytical results for such MSTs have been studied extensively before (see Steele (1988, 1993), Kesten and Lee (1996), Penrose and Yukich (2003) and references therein). For example, Steele (1988) uses edge counting techniques to obtain variance estimates for the MST weight and Kesten and Lee (1996) use martingale methods to obtain central limit theorems (CLTs) for the MST weight, appropriately scaled and centred. Penrose and Yukich (2003) use coupling arguments to obtain weak laws for functionals of point processes thereby including the MST as a special case. Recently Chatterjee and Sen (2017) used percolation theoretic arguments to study convergence rate of the CLTs for Euclidean MSTs. In this paper, we study MSTs of Euclidean random graphs whose edge weights depend on the _individual locations_ of the endvertices. Such MSTs frequently arise in practice and as an example, suppose it is required to establish a fully connected wireless communication network among the set of nodes randomly located in a geographical area. An edge between two nodes signifies a potential communication link with the corresponding weight being proportional to the cost of physically installing the link. If some areas within the area are highly populated, it might actually be cost effective to set up links in such "hotspots". 
Thus the edge weights are location dependent and it is of interest to estimate the minimum cost of setting up such a network. Throughout we use the unit square for illustrating our results and the analysis holds analogous for other regular shapes like rectangle or circle. In the rest of this section, we describe the model under consideration and describe an important property regarding the maximum vertex degree, that distinguishes location dependent MSTs from their homogenous (i.e. location independent) counterparts. We then describe the outline and the main objectives of the paper towards understanding the properties of location dependent MSTs. ### Model Description Let \(f\) be any distribution on the unit square \(S\) satisfying such that \[\epsilon_{1}\leq\inf_{x\in S}f(x)\leq\sup_{x\in S}f(x)\leq\epsilon_{2} \tag{1.1}\] for some constants \(0<\epsilon_{1}\leq\epsilon_{2}<\infty.\) Throughout all constants are independent of \(n.\) Let \(\{X_{i}\}_{i\geq 1}\) be independently and identically distributed (i.i.d.) with the distribution \(f(.)\) defined on the probability space \((\Omega,\mathcal{F},\mathbb{P}).\) For \(n\geq 1,\) let \(K_{n}=K(X_{1},\ldots,X_{n})\) be the complete graph whose edges are obtained by connecting each pair of nodes \(X_{i}\) and \(X_{j}\) by the straight line segment \((X_{i},X_{j})\) with \(X_{i}\) and \(X_{j}\) as endvertices. A path \(\mathcal{P}=(Y_{1},\ldots,Y_{t})\) containing \(t\) distinct vertices is a subgraph of \(K_{n}\) with vertex set \(\{Y_{j}\}_{1\leq j\leq t}\) and edge set \(\{(Y_{j},Y_{j+1})\}_{1\leq j\leq t-1}.\) The nodes \(Y_{1}\) and \(Y_{t}\) are said to be _connected_ by edges of the path \(\mathcal{P}.\) A subgraph \(\mathcal{T}\) of \(K_{n}\) with vertex set \(\{Y_{i}\}_{1\leq i\leq t}\) and edge set \(E_{\mathcal{T}}\) is said to be a _tree_ (Steele (1988)) if the following two conditions hold: (1) The graph \(\mathcal{T}\) is connected; i.e., any two nodes in \(\mathcal{T}\) are connected by a path containing only edges in \(E_{\mathcal{T}}.\) (2) The graph \(\mathcal{T}\) is acyclic; i.e., no subgraph of \(\mathcal{T}\) is a cycle. The tree \(\mathcal{T}\) is said to be a _spanning tree_ of \(K_{n}\) if \(\mathcal{T}\) contains all the \(n\) nodes \(\{X_{k}\}_{1\leq k\leq n}.\) In what follows we assign weights to edges of the graph \(K_{n}\) and study minimum weight spanning trees. ### Minimum spanning trees For points \(x,y\in S,\) we let \(d(x,y)\) denote the Euclidean distance between \(x\) and \(y\) and let \(h:S\times S\rightarrow(0,\infty)\) be a deterministic measurable function satisfying \[c_{1}d(x,y)\leq h(x,y)=h(y,x)\leq c_{2}d(x,y) \tag{1.2}\] for some positive constants \(c_{1},c_{2}.\) For a constant \(\alpha>0\) and for \(1\leq i<j\leq n\) we let \(h^{\alpha}(X_{i},X_{j})=h^{\alpha}(e)\) denote the _weight_ of the edge \(e=(X_{i},X_{j})\) with corresponding edge weight exponent \(\alpha.\) We simply refer to \(h^{\alpha}(e)\) as the weight of the edge \(e\) and define \(d(e)=d(X_{i},X_{j})\) to be the Euclidean length of \(e.\) Unless mentioned otherwise, all weights are with respect to the edge weight exponent \(\alpha\) and all lengths are Euclidean. 
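To make the setup concrete, here is a minimal simulation sketch (not from the paper): \(n\) uniform nodes in the unit square, a location dependent weight function \(h\) satisfying the equivalence condition (1.2), and the resulting minimum spanning tree weight computed with SciPy. The particular choice of \(h\), the exponent \(\alpha\), and \(n\) are arbitrary illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def location_dependent_mst_weight(nodes: np.ndarray, alpha: float = 1.0) -> float:
    """Weight of the MST of the complete graph on `nodes` with edge weights h(x, y)**alpha."""
    d = squareform(pdist(nodes))  # Euclidean distances d(x, y)
    # Example h: edges with an endpoint in the left half of the square are cheaper,
    # so that c1 * d(x, y) <= h(x, y) <= c2 * d(x, y) with c1 = 0.5 and c2 = 1.0.
    in_left_half = nodes[:, 0] < 0.5
    scale = np.where(in_left_half[:, None] | in_left_half[None, :], 0.5, 1.0)
    weights = (scale * d) ** alpha
    mst = minimum_spanning_tree(weights)  # sparse matrix holding the n - 1 tree edges
    return float(mst.sum())

rng = np.random.default_rng(1)
n = 500
nodes = rng.uniform(size=(n, 2))          # f uniform on the unit square
w = location_dependent_mst_weight(nodes, alpha=1.0)
print(w, w / n ** 0.5)                    # total weight and the n^{1 - alpha/2} normalization
```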
For a tree \(\mathcal{T}\) with vertex set \(\{Y_{1},\ldots,Y_{t}\},\) the weight of \(\mathcal{T}\) is the sum of the weights of the edges in \(\mathcal{T};\) i.e., \(W(\mathcal{T}):=\sum_{e\in\mathcal{T}}h^{\alpha}(e).\) Let \(\mathcal{T}_{n}\) be a spanning tree of \(K_{n}\) containing all the nodes \(\{X_{i}\}_{1\leq i\leq n}\) and satisfying \[MST_{n}=W(\mathcal{T}_{n}):=\min_{\mathcal{T}}W(\mathcal{T}), \tag{1.3}\] where the minimum is taken over all spanning trees \(\mathcal{T}.\) We denote \(\mathcal{T}_{n}\) to be the _minimum spanning tree_ (MST) with corresponding weight \(MST_{n}.\) If there is more than one choice for \(\mathcal{T}_{n},\) we choose one according to a deterministic rule. Let \(d(X_{i})=d\left(X_{i},\mathcal{T}_{n},h\right)\) denote the degree of the node \(X_{i},1\leq i\leq n\) in the MST \(\mathcal{T}_{n}\) with edge weight function \(h\) so that \(\sum_{i=1}^{n}d(X_{i})=2(n\ -\ 1),\) since \(\mathcal{T}_{n}\) contains \(n-1\) edges. Taking expectations, we therefore get that \(n\mathbb{E}d(X_{1})=2(n-1)\) and so \(\mathbb{E}d(X_{1})\leq 2.\) In case the edge weight function \(h\ =\ d,\) the Euclidean length, then the weight of any edge does not depend on the location of its endvertices and the _maximum_ degree of any node in \(\mathcal{T}_{n}\) is at most \(6,\) almost surely. This is because the angle between any two edges in the MST sharing an endvertex, cannot be more than \(60\) degrees (see Aldous and Steele (1992)). This geometric property frequently occurs in the study of the properties of the random MST (Kesten and Lee (1996), Yukich (2000)). For edge weight functions \(h\) that are location dependent, it is possible for the maximum degree of a vertex in the MST to take arbitrary large values with positive probability as stated in the following result. **Proposition 1**.: _For every integer \(K\geq 2\) there exists an edge weight function \(h=h_{K}\) satisfying (1.2), a sequence \(n_{l}\rightarrow\infty\) as \(l\rightarrow\infty\) and a constant \(\epsilon_{0}=\epsilon_{0}(K)>0\) such that_ \[\mathbb{P}\left(\max_{1\leq i\leq n_{l}}d\left(X_{i},\mathcal{T}_{n_{l}},h \right)\geq K\right)\geq\epsilon_{0} \tag{1.4}\] _for all \(l\geq 1.\)_ In light of Proposition 1, we are therefore interested to study the effect of large vertex degrees on the total weight of location dependent MSTs. We demonstrate via common properties like variance, deviation estimates and almost sure convergence, that the total weight behaves roughly the same way as in the location independent case. ### Paper outline The paper is organized as follows. In Section 2, we prove Proposition 1 and in Section 3, we state and prove deviation estimates for the weight of location dependent MSTs and also obtain upper and lower bounds for their expected value. We use stochastic domination and coupling techniques to obtain the deviation estimates and provide numerical simulations to illustrate the bounds obtained. We show that the MST weight \(MST_{n}\) defined in (1.3) still remains of the order of \(n^{1-\frac{\alpha}{2}}\) as in the location independent case. Next in Section 4, we use the deviation estimates obtained in Section 3 to find upper bounds on the variance of location dependent MSTs. The main tools we use in this section are one node difference estimates and the martingale difference method and we also briefly explain using Propostion 1, how the analysis varies between the location dependent and independent cases. 
As before the variance upper bound is of the order of \(n^{1-\alpha}\) and the same order as in the location independent case. In Section 5, we obtain lower bounds on the variance of MST weight by studying occurrence of predetermined nice configurations that differ in the location of at most one node. In particular combining with the variance upper bound obtained in Section 5, we obtain that the variance _is_ of the order of \(n^{1-\alpha}\). Finally in Sections 6 and 7, for completeness, we study a.s. convergence for location dependent MSTs and uniform MSTs using subsequence arguments and the deviation and variance estimates obtained in the previous sections. ## 2 Proof of Proposition 1 We start with some preliminary computations. For \(i\geq 1\) let \(n_{i}=D\cdot i^{3}\) for some constant \(D>0\) to be determined later and let \(q_{i}=\frac{2K-1}{\sqrt{n_{i}}}.\) Place disjoint \(10q_{i}\times 10q_{i}\) squares \(S_{i}^{big}\) along the diagonal of the unit square \(S\) as shown in Figure 1 (\(a\)) and choose \(D>0\) large enough so that the total sum length of the diagonals of the squares in \(\{S_{i}^{big}\}\) is at most the length of the diagonal of \(S\); i.e., we choose \(D\) large enough so that \[\sum_{i\geq 1}10q_{i}\sqrt{2}\leq\sqrt{2}.\] Let \(S_{i}\) be the \(q_{i}\times q_{i}\) subsquare with the same centre as \(S_{i}^{big}\) as shown in Figure 1\((b)\) and consider \(K\) small subsquares each of size \(\frac{1}{\sqrt{n_{i}}}\times\frac{1}{\sqrt{n_{i}}}\) placed \(x_{i}=\frac{1}{\sqrt{n_{i}}}\) apart on each of the four sides of \(S_{i}\). The total number of the small subsquares is \(4(K-1)\), which we label as \(S_{i}(1),\ldots,S_{i}(4K-4)\). The central small square (labeled \(S_{i}(0)\)) has the same centre as the square \(S_{i}\) and is also of size \(\frac{1}{\sqrt{n_{i}}}\times\frac{1}{\sqrt{n_{i}}}\). Let \(c_{1}\) and \(c_{2}\) be positive constants such that \(c_{1}<\frac{c_{2}}{8K}\) and define the edge weight function \(h\) as: \[h(u,v)=\left\{\begin{array}{ll}c_{1}d(u,v)&\mbox{ if }u\in\cup_{j}\{S_{j}(0) \}\mbox{ or }v\in\cup_{j}\{S_{j}(0)\}\\ \\ c_{2}d(u,v)&\mbox{ otherwise},\end{array}\right. \tag{2.1}\] where \(d(u,v)\) represents the Euclidean distance between \(u\) and \(v\) as before. By choice of \(h\) in (2.1), the edge weight containing a node is less if the node is present in one of the central squares and larger otherwise. For \(i\geq 1\), let \(\mathcal{T}_{n_{i}}\) be the MST of the nodes \(\{X_{k}\}_{1\leq k\leq n_{i}}\) as defined in (1.3) with edge weight function \(h\) as in (2.1). To identify nodes in \(\mathcal{T}_{n_{i}}\) with large degree we define \(E_{K}(i)\) to be the event that there is exactly one node \(v_{i}\in\{X_{k}\}_{1\leq k\leq n_{i}}\) present in \(S_{i}(l)\) for each \(0\leq l\leq 4K-4\) and the rest of \(S_{i}^{big}\) is empty. We recall from the first paragraph of this section that \(S_{i}^{big}\) is the \(10q_{i}\times 10q_{i}\) square with the same centre as \(S_{i}(0)\). The following Lemma implies that the event \(E_{K}(i)\) occurs with positive probability and that if \(E_{K}(i)\) occurs, then the node \(v_{0}\) present in the central square \(S_{i}(0)\) has large degree. **Lemma 1**.: _There exists a constant \(\epsilon_{0}>0\) depending only on \(K,\epsilon_{1}\) and \(\epsilon_{2}\) such that_ \[\mathbb{P}(E_{K}(i))\geq\epsilon_{0}. 
\tag{2.2}\] _If \(E_{K}(i)\) occurs and if \(\mathcal{T}_{loc}\) denotes the induced subgraph of \(\mathcal{T}_{n_{i}}\) formed by the nodes in \(\{v_{j}\}_{0\leq j\leq 4K-4},\) then \(\mathcal{T}_{loc}\) is a tree (more specifically, a star graph) with edge set \(\{(v_{0},v_{j})\}_{1\leq j\leq 4K-4}.\)_ From Lemma 1, we therefore get that the degree of \(v_{0}\) in the MST \(\mathcal{T}_{n_{i}}\) is \(4K-4\) if the event \(E_{K}(i)\) occurs. From the probability estimate (2.2), we therefore get Proposition 1. We now prove Lemma 1 beginning with (2.2). _Proof of (2.2) in Lemma 1_: Any node of \(\{X_{k}\}_{1\leq k\leq n_{i}}\) is present within the square \(S_{i}(l)\) with probability \(p_{i}(l):=\int_{S_{i}(l)}f(x)dx\) and so \[\mathbb{P}(E_{K}(i))=\binom{n_{i}}{4K-3}(4K-3)!\prod_{l=0}^{4K-4}p_{i}(l)\left( \int_{S\setminus S_{i}^{big}}f(x)dx\right)^{n_{i}-4K+3}. \tag{2.3}\] The area of the square \(S_{i}(l)\) equals \(\frac{1}{n_{i}}\) and so \(p_{i}(l)\geq\frac{\epsilon_{1}}{n_{i}}\) using the bounds in (1.1). Similarly, \(\int_{S_{i}^{big}}f(x)dx\leq\epsilon_{2}100q_{i}^{2}=\frac{C}{n_{i}}\) where \(C=100\epsilon_{2}(2K-1)^{2}\) and so \[\mathbb{P}(E_{K}(i))\geq\binom{n_{i}}{4K-3}(4K-3)!\left(\frac{\epsilon_{1}}{n _{i}}\right)^{4K-3}\left(1-\frac{C}{n_{i}}\right)^{n_{i}}. \tag{2.4}\] For \(a>2b\) we have \(\binom{a}{b}b!\geq(a-b)^{b}\geq\frac{a^{b}}{2^{b}}.\) Therefore choosing \(i\) larger if necessary so that \(n_{i}\geq 8K-6\) and setting \(a=n_{i}\) and \(b=4K-3\), we get Figure 1: \((a)\) The \(10q_{i}\times 10q_{i}\) squares placed along the diagonal of the unit square. \((b)\) The \(q_{i}\times q_{i}\) square \(S_{i}\) along with the smaller subsquares. from (2.4) that \[\mathbb{P}(E_{K}(i))\geq\left(\frac{\epsilon_{1}}{2}\right)^{4K-3}\cdot\left(1- \frac{C}{n_{i}}\right)^{n_{i}}. \tag{2.5}\] To evaluate the final term in (2.5), we use \(1-y\geq e^{-2y}\) for all \(0<y<\frac{1}{2}.\) To see this estimate is true we write \(\log(1-y)=-y-\sum_{k\geq 2}\frac{y^{k}}{k}\) and use \(y<\frac{1}{2}\) to get that \[\sum_{k\geq 2}\frac{y^{k}}{k}\leq\frac{1}{2}\sum_{k\geq 2}y^{k}=\frac{y^{2}}{2(1- y)}\leq y^{2}<y.\] Choosing \(i\) large enough so that \(y=\frac{(4K-3)\epsilon_{2}}{n_{i}}<\frac{1}{2},\) we then get from (2.5) that \[\mathbb{P}(E_{K}(i))\geq\left(\frac{\epsilon_{1}}{2}\right)^{4K-3}\cdot e^{-2C},\] proving (2.2). _Proof of rest of Lemma 1_: Let \(S_{i}^{big}\) be the \(10q_{i}\times 10q_{i}\) square in Figure 1\((b).\) First we show that \(\mathcal{T}_{loc}\) is a tree by a contradiction argument. Suppose for example that two nodes in opposite \(\frac{1}{\sqrt{n_{i}}}\times\frac{1}{\sqrt{n_{i}}}\) squares are joined by a path \(P\) in \(\mathcal{T}_{n_{i}}\) containing at least one vertex not in \(\{v_{i}\}\) as in Figure 1\((b).\) Since \(E_{K}(i)\) occurs, the rest of \(S_{i}^{big}\) not containing \(\cup_{l}S_{i}(l)\) is empty and so the path \(P\) contains at least two "long" edges \(f_{1}=(u_{1},x_{1}),f_{2}=(u_{2},x_{2})\) with endvertices outside \(S_{i}^{big}\) as shown in Figure 1\((b).\) The nodes \(u_{1}\) and \(u_{2}\) both belong to \(\{v_{i}\}_{1\leq i\leq 4K-4}\) and the dotted edge \((u_{1},u_{2})\) is not present in the MST \(\mathcal{T}_{n_{i}}\) because otherwise, we would get a cycle. The weights of the edges \(f_{1}\) and \((u_{1},u_{2})\) equal \((c_{2}d(u_{1},x_{1}))^{\alpha}\) and \((c_{2}d(u_{1},u_{2}))^{\alpha},\) respectively, by definition of the edge weight function in (2.1). 
The length \(d(u_{1},x_{1})\) of the edge \(f_{1}=(u_{1},x_{1})\) is at least \(\frac{9q_{i}}{2},\) the width of the annulus \(S_{i}^{big}\setminus S_{i}\) and the length of \((u_{1},u_{2})\) at most \(q_{i}\sqrt{2}.\) This implies that removing the edge \(f_{1}\) and adding \((u_{1},u_{2}),\) we would get an MST with weight strictly larger than \(\mathcal{T}_{n_{i}},\) a contradiction. This proves that \(\mathcal{T}_{loc}\) is a tree. Next, to see that \(\mathcal{T}_{loc}\) is a star graph, we use the definition of the edge weight function (2.1) and obtain that the weight of any edge of the form \((v_{0},v_{j})\) is at most \[(c_{1}\cdot d(v_{0},v_{j}))^{\alpha}\leq\left(c_{1}\cdot\left(\frac{q_{i}}{ \sqrt{2}}+\frac{\sqrt{2}}{\sqrt{n_{i}}}\right)\right)^{\alpha}\leq\left(\frac{ c_{1}\cdot 4K}{\sqrt{n_{i}}}\right)^{\alpha} \tag{2.6}\] since \(q_{i}=\frac{2K-1}{\sqrt{n_{i}}}\) (see first paragraph of this Section). Similarly, the weight of any edge of the form \((v_{s},v_{t}),s,t\neq 0\) is at least \(\left(\frac{c_{2}}{\sqrt{n_{i}}}\right)^{\alpha}.\) Since \(c_{1}<\frac{c_{2}}{8K}\) (see (2.1)), we get that \((v_{s},v_{t})\) has larger weight that \((v_{0},v_{j}).\) Thus the minimum weight tree containing all the nodes \(\{v_{j}\}_{0\leq j\leq 4K-4}\) is the star graph with vertex set \(\{(v_{0},v_{j})\}_{0\leq j\leq 4K-4}.\) ## 3 Deviation estimates for the MST As a first step in the study of properties of location dependent MSTs, we obtain deviation estimates in this Section. The bounds obtained are of same order and we also use these estimates in later Sections for the study of variance and almost sure convergence. We begin with a couple of preliminary definitions. Let \(\epsilon_{1},\epsilon_{2}\) be as in (1.1) and set \(\delta=\delta(\alpha)=\epsilon_{1}\) if the edge weight exponent \(\alpha\leq 1\) and \(\delta=\epsilon_{2}\) if \(\alpha>1.\) Recalling that \(c_{1}\) and \(c_{2}\) are the bounds for the edge weights as in (1.2), we define for \(A>0\) the terms \(C_{1}(A)=C_{1}(A,c_{1},c_{2},\epsilon_{1},\epsilon_{2},\alpha)\) and \(C_{2}(A)=C_{2}(A,c_{1},c_{2},\epsilon_{1},\epsilon_{2},\alpha)\) as \[C_{1}(A) := \frac{(c_{1}A)^{\alpha}}{2A^{2}}(1-e^{-\epsilon_{1}A^{2}})e^{-8 \epsilon_{2}A^{2}}\text{ and}\] \[C_{2}(A) := (2c_{2}A)^{\alpha}\left(1+\frac{\mathbb{E}T^{\alpha}}{A^{2}} \right), \tag{3.1}\] where \(T\) is a geometric random variable with success parameter \(p=1-e^{-\delta A^{2}},\) independent of the node locations \(\{X_{i}\};\) i.e., \(\mathbb{P}(T=k)=(1-p)^{k-1}p\) for all integers \(k\geq 1.\) Letting \(MST_{n}\) be the MST weight as defined in (1.3), we have the following main result. **Theorem 2**.: _Let \(\alpha>0\) be the edge weight exponent. For every \(A>0\) and integer \(k\geq 1\) and all \(n\geq n_{0}(A,k)\) large,_ \[\mathbb{P}\left(MST_{n}\geq C_{1}(A)n^{1-\frac{\alpha}{2}}\left(1- \frac{4\sqrt{A}}{n^{1/4}}\right)\right)\geq 1-e^{-n^{1/3}}, \tag{3.2}\] \[\mathbb{P}\left(MST_{n}\leq C_{2}(A)n^{1-\frac{\alpha}{2}}\left(1 +\frac{2}{n^{1/16}}\right)\right)\geq 1-\frac{1}{n^{2k}} \tag{3.3}\] \[C_{1}^{k}(A)\left(1-\frac{36k\sqrt{A}}{n^{1/4}}\right)\leq\mathbb{E}\left(\frac{ MST_{n}^{k}}{n^{k\left(1-\frac{\alpha}{2}\right)}}\right)\leq C_{2}^{k}(A) \left(1+\frac{2k}{n^{1/16}}\right). \tag{3.4}\] ### Remarks on Theorem 2 From Theorem 2, we see that the weights of the MST in the location dependent case, is of the same order \(n^{1-\frac{\alpha}{2}}\) as in the location independent case (see Steele (1988)). 
Using (3.4), we get that the normalized MST weight \(\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\) satisfies \[c_{1}^{\alpha}\cdot\beta_{low}(\alpha)\leq\liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq\limsup_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq c_{2}^{\alpha}\cdot\beta_{up}(\alpha), \tag{3.5}\] where \[\beta_{low}(\alpha)=\beta_{low}(\alpha,\epsilon_{1},\epsilon_{2}):=\sup_{A>0}\frac{A^{\alpha}}{2A^{2}}(1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}}, \tag{3.6}\] \[\beta_{up}(\alpha)=\beta_{up}(\alpha,\epsilon_{1},\epsilon_{2}):=\inf_{A>0}(2A)^{\alpha}\left(1+\frac{\mathbb{E}T^{\alpha}}{A^{2}}\right), \tag{3.7}\] and \(T\) is a geometric random variable with success parameter \(p=1-e^{-\delta A^{2}}\) (see (3.1)). For the case of the homogeneous distribution \(\epsilon_{1}=\epsilon_{2}=1\) we get \[\beta_{low}(\alpha)=\frac{1}{2}\sup_{A>0}A^{\alpha-2}(1-e^{-A^{2}})e^{-8A^{2}}>0 \tag{3.8}\] and \[\beta_{up}(\alpha)=\inf_{A>0}(2A)^{\alpha}\left(1+\frac{\mathbb{E}T^{\alpha}}{A^{2}}\right)<\infty, \tag{3.9}\] where \(T\) is a geometric random variable with success parameter \(p=1-e^{-A^{2}}.\) For illustration, we plot \(\beta_{low}(\alpha)\) and \(\beta_{up}(\alpha)\) as a function of \(\alpha\) in Figures 3 and 2, respectively. As we see from the figures, \(\beta_{up}(\alpha)\) increases with \(\alpha\) and \(\beta_{low}(\alpha)\) decreases with \(\alpha.\) Expressions (3.8) and (3.9) also allow us to numerically evaluate the bounds in (3.5) for various values of \(\alpha.\) For example, for \(\alpha=1,\) we get that \[\beta_{low}(1)=\frac{1}{2}\sup_{A>0}A^{-1}(1-e^{-A^{2}})e^{-8A^{2}}\approx 0.0735633>\frac{1}{20}\] and since \(\mathbb{E}T=\frac{1}{p}=\frac{1}{1-e^{-A^{2}}},\) we get that \[\beta_{up}(1)=\inf_{A>0}2A\left(1+\frac{1}{A^{2}(1-e^{-A^{2}})}\right)\approx 4.46256<5.\] Substituting these bounds back in (3.5) we find that \[\frac{c_{1}^{\alpha}}{20}\leq\liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq\limsup_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq 5\cdot c_{2}^{\alpha} \tag{3.10}\] and this gives bounds for the scaled MST weights in terms of the edge weight function parameters \(c_{1}\) and \(c_{2}.\) Similarly for \(\alpha=2,\) we get that \[\beta_{low}(2)=\frac{1}{2}\sup_{A>0}(1-e^{-A^{2}})e^{-8A^{2}}\approx 0.0216525\] and since \(\mathbb{E}T^{2}=\frac{2-p}{p^{2}}=\frac{1+e^{-A^{2}}}{(1-e^{-A^{2}})^{2}},\) we get that \[\beta_{up}(2)=\inf_{A>0}(2A)^{2}\left(1+\frac{1+e^{-A^{2}}}{A^{2}(1-e^{-A^{2}})^{2}}\right)\approx 13.8772\] and we have analogous bounds as in (3.10) in this case as well. In general, \(\beta_{low}(\alpha)\) and \(\beta_{up}(\alpha)\) in (3.5) can be evaluated to get some knowledge of the dependence of the normalized MST weight on the node distribution parameters \(\epsilon_{1},\epsilon_{2}\) (see (1.1)) and the edge weight function parameters \(c_{1},c_{2}\) (see (1.2)). For example, suppose the distribution \(f=\frac{1}{2}\) on \([0,0.5]^{2}\) and \(f=\frac{7}{6}\) on the remaining area in \(S\) so that \(\int f=1.\) In this case \(\epsilon_{1}=\frac{1}{2}\) and \(\epsilon_{2}=\frac{7}{6}\) and we directly evaluate (3.6) and (3.7) to get that \(\beta_{low}(1)\approx 0.0346363\) and \(\beta_{up}(1)\approx 4.92912.\)
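The constants \(\beta_{low}(\alpha)\) and \(\beta_{up}(\alpha)\) in (3.6)–(3.7) are straightforward to evaluate numerically. The sketch below is an illustration only; it assumes a simple grid search over \(A,\) a truncated series for \(\mathbb{E}T^{\alpha}\) and \(\delta\) chosen as in (3.1), and it reproduces the values quoted above up to the grid resolution.

```python
# Numerical evaluation of beta_low(alpha) and beta_up(alpha) in (3.6)-(3.7)
# via a grid search over A; E[T**alpha] is computed by truncating the series.
import numpy as np

def ET_alpha(alpha, p, kmax=20000):
    """E[T**alpha] for T geometric (support 1, 2, ...) with success parameter p."""
    k = np.arange(1, kmax + 1)
    return float(np.sum(k ** alpha * (1 - p) ** (k - 1) * p))

def beta_low(alpha, eps1, eps2, A):
    vals = A ** alpha / (2 * A ** 2) * (1 - np.exp(-eps1 * A ** 2)) * np.exp(-8 * eps2 * A ** 2)
    return vals.max()

def beta_up(alpha, eps1, eps2, A):
    delta = eps1 if alpha <= 1 else eps2        # delta as defined in (3.1)
    return min((2 * a) ** alpha * (1 + ET_alpha(alpha, 1 - np.exp(-delta * a ** 2)) / a ** 2)
               for a in A)

A = np.linspace(0.05, 5, 1000)
print(beta_low(1, 1, 1, A), beta_up(1, 1, 1, A))              # ~0.0736 and ~4.463
print(beta_low(1, 0.5, 7 / 6, A), beta_up(1, 0.5, 7 / 6, A))  # ~0.0346 and ~4.929
```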
As a final remark, we provide a simple upper bound for \(\mathbb{E}T^{\alpha}\) in order to obtain quick evaluations of \(\beta_{up}(\alpha)\) in (3.7). Using \(\mathbb{P}(T\geq k)=(1-p)^{k-1}\leq e^{-p(k-1)}\) where \(p=1-e^{-\delta A^{2}},\) we see that relation (7.5) in Appendix is satisfied and so letting \(r\) be the smallest integer greater than or equal to \(\alpha\) we have from (7.6) that \[\mathbb{E}T^{\alpha}\leq\mathbb{E}T^{r}\leq\frac{r!}{(1-e^{-p})^{r}}.\] Plugging this estimate into (3.7) provides an upper bound for \(\beta_{up}(\alpha).\)

Figure 3: Plot of \(\beta_{low}(\alpha)\) as a function of \(\alpha\) for the homogeneous case \(\epsilon_{1}=\epsilon_{2}=1\).

### Proof outline and Poissonization

The rough idea for the proof of Theorem 2 is as follows. We first tile the unit square into small subsquares and for obtaining the lower bound, we determine configurations within these subsquares that result in long edges. For the upper bound, we explicitly construct a spanning tree whose weight is at most a constant multiple of \(n^{1-\frac{\alpha}{2}}\) with high probability, i.e., with probability tending to one as \(n\rightarrow\infty.\) To prove the deviation estimates in Theorem 2, we use Poissonization and let \(\mathcal{P}\) be a Poisson process in the unit square \(S\) with intensity \(nf(.),\) with corresponding probability measure and expectation denoted by \(\mathbb{P}_{0}\) and \(\mathbb{E}_{0},\) respectively. We join each pair of nodes by a straight line segment and denote the resulting complete graph as \(K_{n}^{(P)}.\) As in (1.3), we let \(MST_{n}^{(P)}\) be the weight of the MST of \(K_{n}^{(P)}\) and first find deviation estimates for \(MST_{n}^{(P)}.\) We then use dePoissonization to obtain corresponding deviation estimates for \(MST_{n},\) the weight of the MST of the complete graph \(K_{n}\) as defined in (1.3).

For a real number \(A>0,\) we tile the unit square \(S\) into small \(\frac{A(n)}{\sqrt{n}}\times\frac{A(n)}{\sqrt{n}}\) squares \(\{R_{i}\}_{1\leq i\leq\frac{n}{A^{2}(n)}}\) where \(A(n)\in\left[A,A+\frac{1}{\log n}\right]\) is chosen such that \(\frac{\sqrt{n}}{A(n)}\) is an integer. This is possible since \[\frac{\sqrt{n}}{A}-\frac{\sqrt{n}}{A+(\log n)^{-1}}=\frac{\sqrt{n}}{\log n}\cdot\frac{1}{A(A+(\log n)^{-1})}\geq\frac{\sqrt{n}}{2A^{2}\log n} \tag{3.11}\] for all \(n\) large. For notational simplicity, we denote \(A(n)\) as \(A\) henceforth and label the squares as in Figure 4 so that \(R_{i}\) and \(R_{i+1}\) share an edge for \(1\leq i\leq\frac{n}{A^{2}}-1.\)

### Lower deviation bounds (3.2)

For \(1\leq i\leq\frac{n}{A^{2}}\) let \(E(R_{i})\) denote the event that the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) square \(R_{i}\) is occupied, i.e., contains at least one node of \(\mathcal{P},\) and all squares sharing a corner with \(R_{i}\) are empty. If \(E(R_{i})\) occurs, then there is at least one edge in the MST of \(K_{n}^{(P)}\) with one endvertex in \(R_{i}\) and the other endvertex in a square not sharing a corner with \(R_{i}.\) Such an edge has a Euclidean length of at least \(\frac{A}{\sqrt{n}}\) and so a weight of at least \(\left(\frac{c_{1}A}{\sqrt{n}}\right)^{\alpha}\) (see (1.2)). Consequently \[MST_{n}^{(P)}\geq\frac{1}{2}\cdot\sum_{i=1}^{\frac{n}{A^{2}}}\left(\frac{c_{1}A}{\sqrt{n}}\right)^{\alpha}\mathbf{1}(E(R_{i}))=\frac{1}{2}\cdot\left(\frac{c_{1}A}{\sqrt{n}}\right)^{\alpha}\cdot G_{\alpha}, \tag{3.12}\] where \(G_{\alpha}:=\sum_{i=1}^{\frac{n}{A^{2}}}\mathbf{1}(E(R_{i}))\) and the factor \(\frac{1}{2}\) occurs since each edge is counted at most twice in the summation. To estimate \(G_{\alpha},\) we would like to split it into sums of independent r.v.s using the following construction.
For a square \(R_{i},\) let \(\mathcal{N}(R_{i})\) be the set of all squares sharing a corner with \(R_{i},\) including \(R_{i}.\) If \(R_{i}\) does not intersect the sides of the unit square \(S,\) then there are \(9\) squares in \(\mathcal{N}(R_{i})\) and if \(R_{j}\) is another square such that \(\mathcal{N}(R_{i})\cap\mathcal{N}(R_{j})=\emptyset,\) then the corresponding events \(E(R_{i})\) and \(E(R_{j})\) are independent, since the Poisson process is independent on disjoint sets. We therefore extract nine disjoint subsets \(\{\mathcal{U}_{l}\}_{1\leq l\leq 9}\) of \(\{R_{i}\}\) with the following properties: \((A)\) If \(R_{i},R_{j}\in\mathcal{U}_{l},\) then \(\#\mathcal{N}(R_{i})=\#\mathcal{N}(R_{j})=9\) and \(\mathcal{N}(R_{i})\cap\mathcal{N}(R_{j})=\emptyset.\) \((B)\) The number of squares \(\#\mathcal{U}_{l}\geq\frac{n}{9A^{2}}-\frac{4\sqrt{n}}{A}\) for each \(1\leq l\leq 9.\) This is possible since there are at most \(\frac{4\sqrt{n}}{A}-4<\frac{4\sqrt{n}}{A}\) squares in \(\{R_{k}\}\) intersecting the sides of the unit square \(S\) and the total number of squares in \(\{R_{k}\}\) is \(\frac{n}{A^{2}}.\) We now write \(G_{\alpha}=\sum_{i=1}^{\frac{n}{A^{2}}}\mathbf{1}(E(R_{i}))\geq\sum_{l=1}^{9}\sum_{R_{i}\in\mathcal{U}_{l}}\mathbf{1}(E(R_{i})),\) where each inner summation on the right side is a sum of independent Bernoulli random variables, which we bound via standard deviation estimates. Indeed for \(1\leq l\leq 9\) and \(R_{i}\in\mathcal{U}_{l},\) the number of nodes \(N(R_{i})\) is Poisson distributed with mean \(n\int_{R_{i}}f(x)dx\in[\epsilon_{1}A^{2},\epsilon_{2}A^{2}]\) (see (1.1)) and so \(R_{i}\) is occupied with probability at least \(1-e^{-\epsilon_{1}A^{2}}.\) Also each of the eight squares sharing a corner with \(R_{i}\) is empty with probability at least \(e^{-\epsilon_{2}A^{2}},\) implying that \(\mathbb{P}_{0}(E(R_{i}))\geq(1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}}.\) Using the standard deviation estimate (7.2) of Lemma 13 in Appendix with \(\mu_{1}=(1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}},m=\frac{n}{9A^{2}}-\frac{4\sqrt{n}}{A}\) and \(\epsilon=\frac{1}{m^{1/4}}\) we then get that \[\mathbb{P}_{0}\left(\sum_{R_{i}\in\mathcal{U}_{l}}\mathbf{1}(E(R_{i}))\geq(1-\epsilon)\left(\frac{n}{9A^{2}}-\frac{4\sqrt{n}}{A}\right)(1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}}\right)\geq 1-e^{-D_{1}\epsilon^{2}n} \tag{3.13}\] for some constant \(D_{1}>0\) not depending on \(l.\) Since \(m^{1/4}<\left(\frac{n}{9A^{2}}\right)^{1/4},\) we get that \(D_{1}\epsilon^{2}n\geq 2D_{2}\sqrt{n}\) for some constant \(D_{2}>0\) and since \(m^{1/4}>\left(\frac{n}{10A^{2}}\right)^{1/4}\) for all \(n\) large, we have \[(1-\epsilon)\left(\frac{n}{9A^{2}}-\frac{4\sqrt{n}}{A}\right)\geq\frac{n}{9A^{2}}-\frac{4\sqrt{n}}{A}-\frac{n}{A^{2}m^{1/4}}\geq\frac{n}{9A^{2}}\left(1-\frac{36\sqrt{A}}{n^{1/4}}\right)\] for all \(n\) large. Letting \[E_{low}:=\left\{G_{\alpha}\geq(1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}}\frac{n}{A^{2}}\left(1-\frac{36\sqrt{A}}{n^{1/4}}\right)\right\},\] we get from (3.13) that \(\mathbb{P}_{0}(E_{low})\geq 1-9e^{-2D_{2}\sqrt{n}}\) and moreover, from (3.12) we also get that \[MST_{n}^{(P)}\mathbf{1}(E_{low})\geq\Delta_{n}\mathbf{1}(E_{low})\quad\text{where}\quad\Delta_{n}:=C_{1}(A)n^{1-\frac{\alpha}{2}}\left(1-\frac{36\sqrt{A}}{n^{1/4}}\right)\] and \(C_{1}(A)\) is as defined in (3.1). From the estimate for the probability of the event \(E_{low}\) above we therefore get \(\mathbb{P}_{0}\left(MST_{n}^{(P)}\geq\Delta_{n}\right)\geq 1-9e^{-2D_{2}\sqrt{n}}\) for all \(n\) large.
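In the homogeneous case \(f\equiv 1\) the occupancy probability \((1-e^{-\epsilon_{1}A^{2}})e^{-8\epsilon_{2}A^{2}}\) appearing in \(E_{low}\) is easy to check by simulation. The following sketch is purely illustrative (it assumes \(\epsilon_{1}=\epsilon_{2}=1,\) \(A=1\) and that \(\sqrt{n}/A\) is an integer) and estimates the fraction of interior squares \(R_{i}\) for which the event \(E(R_{i})\) occurs.

```python
# Sketch: estimate P(E(R_i)) for an interior square in the homogeneous case and
# compare with (1 - exp(-A^2)) * exp(-8 A^2).
import numpy as np

rng = np.random.default_rng(1)
n, A, trials = 10000, 1.0, 200
side = int(np.sqrt(n) / A)                  # number of squares per row
hits, total = 0, 0
for _ in range(trials):
    pts = rng.random((rng.poisson(n), 2))   # Poisson process of intensity n on S
    occ = (np.histogram2d(pts[:, 0], pts[:, 1], bins=side, range=[[0, 1], [0, 1]])[0] > 0).astype(int)
    nbr = sum(np.roll(np.roll(occ, di, 0), dj, 1)
              for di in (-1, 0, 1) for dj in (-1, 0, 1)) - occ   # occupied neighbours
    good = (occ == 1) & (nbr == 0)          # R_i occupied, all 8 neighbours empty
    hits += int(good[1:-1, 1:-1].sum())     # interior squares only
    total += (side - 2) ** 2
print(hits / total, (1 - np.exp(-A ** 2)) * np.exp(-8 * A ** 2))   # both close to 2.1e-4
```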
To convert the estimate from Poisson to the Binomial process, we let \(E_{P}:=\left\{MST_{n}^{(P)}\geq\Delta_{n}\right\},E:=\left\{MST_{n}\geq\Delta_{n}\right\}\) and use the dePoissonization formula (Bradonjic et al. (2010)) \[\mathbb{P}(E)\geq 1-D\sqrt{n}\mathbb{P}(E_{P}^{c}) \tag{3.14}\] for some constant \(D>0\) to get that \(\mathbb{P}(E)\geq 1-D\sqrt{n}e^{-2D_{2}\sqrt{n}}\geq 1-e^{-D_{2}\sqrt{n}}\) for all \(n\) large. This proves (3.2) and so using \[\mathbb{E}MST_{n}^{k}\geq\mathbb{E}MST_{n}^{k}\mathbf{1}\left(MST_{n}\geq\Delta_{n}\right)\geq\Delta_{n}^{k}\left(1-e^{-D_{2}\sqrt{n}}\right)\] and \[\left(1-\frac{36\sqrt{A}}{n^{1/4}}\right)^{k}(1-e^{-D_{2}\sqrt{n}})\geq\left(1-\frac{36k\sqrt{A}}{n^{1/4}}\right)(1-e^{-D_{2}\sqrt{n}})\geq 1-\frac{37k\sqrt{A}}{n^{1/4}}\] for all \(n\) large, we also obtain the lower bound on the expectation in (3.4). To see that (3.14) is true, we let \(N_{P}\) denote the random number of nodes of \(\mathcal{P}\) in the unit square \(S\) so that \(\mathbb{E}_{0}N_{P}=n\) and \(\mathbb{P}_{0}(N_{P}=n)=e^{-n}\frac{n^{n}}{n!}\geq\frac{D_{1}}{\sqrt{n}}\) for some constant \(D_{1}>0\), using the Stirling formula. Given \(N_{P}=n\), the nodes of \(\mathcal{P}\) are i.i.d. with distribution \(f(.)\) as defined in (1.1); i.e., \(\mathbb{P}_{0}(E_{P}^{c}|N_{P}=n)=\mathbb{P}(E^{c})\) and so \[\mathbb{P}_{0}(E_{P}^{c})\geq\mathbb{P}_{0}(E_{P}^{c}|N_{P}=n)\mathbb{P}_{0}(N_{P}=n)=\mathbb{P}(E^{c})\mathbb{P}_{0}(N_{P}=n)\geq\mathbb{P}(E^{c})\frac{D_{1}}{\sqrt{n}},\] proving (3.14).

### Upper deviation bounds (3.3)

As before, we first obtain upper bounds for the Poissonized MST \(MST_{n}^{(P)}\). The idea is to first connect all the nodes within each square \(R_{j}\) to get a collection of subtrees and then join all these subtrees together to get an overall spanning tree. Suppose there is at least one node of the Poisson process \(\mathcal{P}\) in the unit square \(S\) and let \(R_{i_{1}},R_{i_{2}},\ldots,R_{i_{Q}},1\leq i_{1}<i_{2}<\ldots<i_{Q}\leq\frac{n}{A^{2}},Q\leq\frac{n}{A^{2}}\) be the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares containing all the nodes of \(\mathcal{P}\). For \(1\leq j\leq Q\), let \(\mathcal{T}_{i_{j}}\) be any spanning tree containing all the nodes of \(R_{i_{j}}\) and for \(1\leq j\leq Q-1\) let \(e_{j+1}\) be any edge joining some node in \(R_{i_{j}}\) and some node in \(R_{i_{j+1}}\) so that the union \(\mathcal{T}_{uni}:=\bigcup_{1\leq j\leq Q}\mathcal{T}_{i_{j}}\cup\bigcup_{2\leq l\leq Q}\{e_{l}\}\) is a spanning tree of the complete graph \(K_{n}^{(P)}.\) The weight \(W(\mathcal{T}_{uni})\geq MST_{n}^{(P)}\) and so it suffices to upper bound \(W(\mathcal{T}_{uni}).\) For \(1\leq j\leq Q\), there are \(N(R_{i_{j}})\) nodes of the Poisson process in the square \(R_{i_{j}}\) and any two such nodes are connected by an edge of length at most \(\frac{A\sqrt{2}}{\sqrt{n}}.\) Therefore the spanning tree \({\cal T}_{i_{j}}\) has a total weight of at most \(N(R_{i_{j}})\cdot\left(\frac{c_{2}A\sqrt{2}}{\sqrt{n}}\right)^{\alpha},\) using the bounds for the function \(h\) in (1.2). The edge \(e_{j+1}\) that connects some node in \(R_{i_{j}}\) with some node of \(R_{i_{j+1}}\) has a Euclidean length of at most \(\frac{2T_{j+1}A}{\sqrt{n}}\) where \(T_{j+1}:=i_{j+1}-i_{j}\) and therefore has a weight of at most \(\left(\frac{2c_{2}T_{j+1}A}{\sqrt{n}}\right)^{\alpha},\) by (1.2).
In effect, \[W\left({\cal T}_{uni}\right) \leq \sum_{j=1}^{Q}N(R_{i_{j}})\cdot\left(\frac{c_{2}A\sqrt{2}}{\sqrt{n }}\right)^{\alpha}+\sum_{j=1}^{Q-1}\left(\frac{2c_{2}T_{j+1}A}{\sqrt{n}} \right)^{\alpha}\] \[= \sum_{i=1}^{\frac{n}{A^{2}}}N(R_{i})\cdot\left(\frac{c_{2}A\sqrt{ 2}}{\sqrt{n}}\right)^{\alpha}+\sum_{j=2}^{Q}\left(\frac{2c_{2}T_{j}A}{\sqrt{n }}\right)^{\alpha}.\] Setting \(T_{1}:=i_{1}-1,T_{Q+1}:=\frac{n}{A^{2}}-i_{Q}\) and \(S_{\alpha}:=\sum_{j=1}^{Q+1}T_{j}^{\alpha}\) we then get \[MST_{n}^{(P)}\leq W\left({\cal T}_{uni}\right)\leq\left(\frac{2c_{2}A}{\sqrt{n }}\right)^{\alpha}\left(\sum_{i=1}^{\frac{n}{A^{2}}}N(R_{i})+S_{\alpha}\right). \tag{3.15}\] The first sum \(\sum_{i=1}^{\frac{n}{A^{2}}}N(R_{i})\) in the right side of (3.15) is a Poisson random variable with mean \(n\) since this denotes the total number of nodes of the Poisson process in the unit square. Using the deviation estimate (7.1) in Appendix with \(m=1,\mu_{2}=n\) and \(\epsilon=\frac{\log n}{\sqrt{n}},\) we have that \[\mathbb{P}_{0}\left(\sum_{i=1}^{\frac{n}{A^{2}}}N(R_{i})\leq n\left(1+\frac{ \log n}{\sqrt{n}}\right)\right)\geq 1-e^{-D(\log n)^{2}} \tag{3.16}\] for some constant \(D>0.\) Setting \(E_{node}:=\left\{\sum_{i=1}^{\frac{n}{A^{2}}}N(R_{i})\leq n\left(1+\frac{\log n }{\sqrt{n}}\right)\right\},\) we get from (3.15) that \[MST_{n}^{(P)}{\bf 1}(E_{node})\leq\left(\frac{2c_{2}A}{\sqrt{n}}\right)^{ \alpha}\left(n\left(1+\frac{\log n}{\sqrt{n}}\right)+S_{\alpha}\right). \tag{3.17}\] The second term \(S_{\alpha}\) in (3.17) is well-defined for any configuration \(\omega\) of the Poisson process provided we set \(S_{\alpha}(\omega_{0})=\left(\frac{n}{A^{2}}-1\right)^{\alpha}\) where \(\omega_{0}\) is the configuration containing no node of the Poisson process in the unit square \(S.\) The following Lemma obtains an estimate on \(S_{\alpha}.\) **Lemma 3**.: _Let \(T\) be a geometric random variable with success parameter \(p=1-e^{-A^{2}\delta}\) independent of the node locations \(\{X_{i}\}.\) For every even integer \(m\geq 1\) and every \(A>0\) and for all \(n\geq n_{0}(m,A)\) large, we have_ \[\mathbb{P}_{0}\left(S_{\alpha}\leq\left(1+\frac{1}{n^{1/16}}\right)\frac{n}{A^ {2}}\mathbb{E}T^{\alpha}\right)\geq 1-\frac{D}{n^{7m/16}}, \tag{3.18}\] _where \(D>0\) is a constant._ Since \(S_{\alpha}\) is not an i.i.d. sum, we use coupling techniques to estimate \(S_{\alpha}\) and prove Lemma 3 at the end of this section. We continue with the proof of the deviation upper bound. Let \(m\geq 2\) be an even integer to be determined later. Letting \(E_{S}\) be the event on the left side of (3.18) we have from the previously computed upper bound on \(MST_{n}^{(P)}\) (see (3.17)) that \[MST_{n}^{(P)}{\bf 1}(E_{node}\cap E_{S}) \leq \left(\frac{2c_{2}A}{\sqrt{n}}\right)^{\alpha}\left(n\left(1+ \frac{\log n}{\sqrt{n}}\right)+\left(1+\frac{1}{n^{1/16}}\right)\frac{n}{A^{2 }}\mathbb{E}T^{\alpha}\right) \tag{3.19}\] \[\leq C_{2}(A)n^{1-\frac{\alpha}{2}}\left(1+\frac{1}{n^{1/16}}\right)\] where \(C_{2}(A)\) is as in (3.1) and the final estimate in (3.19) is obtained using \(\frac{2\log n}{\sqrt{n}}\leq\frac{1}{n^{1/16}}\) for all \(n\) large. 
From (3.19) and the estimates for the events \(E_{node}\) and \(E_{S}\) from (3.16) and (3.18), respectively, we have \[\mathbb{P}_{0}\left(MST_{n}^{(P)}\leq C_{2}(A)n^{1-\frac{\alpha}{2}}\left(1+\frac{1}{n^{1/16}}\right)\right) \geq 1-e^{-D(\log n)^{2}}-\frac{D}{n^{7m/16}} \tag{3.20}\] \[\geq 1-\frac{2D}{n^{7m/16}}\] for all \(n\) large and some constant \(D>0.\) From (3.20) and the dePoissonization formula (3.14), we obtain \[\mathbb{P}\left(MST_{n}\leq C_{2}(A)n^{1-\frac{\alpha}{2}}\left(1+\frac{1}{n^{1/16}}\right)\right)\geq 1-\frac{2D\sqrt{n}}{n^{7m/16}}. \tag{3.21}\] Choosing \(m\) large enough such that \(\frac{7m}{16}-\frac{1}{2}\geq 2k+1,\) we obtain the estimate in (3.3). For bounding the expectation of \(MST_{n}^{k}\), we let \(\Delta_{n}=C_{2}(A)n^{1-\frac{\alpha}{2}}\left(1+\frac{1}{n^{1/16}}\right)\) and write \[\mathbb{E}MST_{n}^{k} = \mathbb{E}MST_{n}^{k}\mathbf{1}(MST_{n}\leq\Delta_{n})+\mathbb{E}MST_{n}^{k}\mathbf{1}(MST_{n}>\Delta_{n}) \tag{3.22}\] \[\leq \Delta_{n}^{k}+\mathbb{E}MST_{n}^{k}\mathbf{1}(MST_{n}>\Delta_{n}).\] To estimate \(\Delta_{n}^{k}\), we use \(\left(1+\frac{1}{n^{1/16}}\right)^{k}\leq e^{k/n^{1/16}}\leq 1+\frac{2k}{n^{1/16}}\) for all \(n\) large and get that \(\Delta_{n}^{k}\leq C_{2}^{k}(A)n^{k\left(1-\frac{\alpha}{2}\right)}\left(1+\frac{2k}{n^{1/16}}\right)\) for all \(n\) large. For the second term in (3.22), we use the estimate \(MST_{n}\leq n\cdot\left(c_{2}\sqrt{2}\right)^{\alpha}\) since there are at most \(n\) edges in the spanning tree, each such edge has a Euclidean length of at most \(\sqrt{2}\) and so the weight of any edge is at most \((c_{2}\sqrt{2})^{\alpha}\), using (1.2). Letting \(\theta_{m}=\frac{7m}{16}-\frac{1}{2}-\frac{\alpha k}{2}\) and using the probability estimate (3.21) we then get that \[\mathbb{E}MST_{n}^{k}\leq\Delta_{n}^{k}+\frac{3D_{2}n^{k}\sqrt{n}}{n^{7m/16}}\leq C_{2}^{k}(A)n^{k\left(1-\frac{\alpha}{2}\right)}\left(1+\frac{2k}{n^{1/16}}+\frac{D_{3}}{n^{\theta_{m}}}\right)\] for all \(n\) large and some constant \(D_{3}>0.\) Choosing \(m\) larger if necessary so that \(\theta_{m}\geq 1>\frac{1}{16},\) we obtain the expectation upper bound in (3.4).

_Proof of Lemma 3_: We show that \(S_{\alpha}(\omega)\) is monotonic in \(\omega\) in the sense that adding more nodes increases \(S_{\alpha}\) if \(\alpha\leq 1\) and decreases \(S_{\alpha}\) if \(\alpha>1.\) This then allows us to use coupling and upper bound \(S_{\alpha}\) by simply considering homogeneous Poisson processes.

_Monotonicity of \(S_{\alpha}\)_: We recall that \(\omega_{0}\) is the configuration containing no node of the Poisson process in the unit square.
For a configuration \(\omega\neq\omega_{0}\) let \(1\leq i_{1}(\omega)<\ldots<i_{Q}(\omega)\leq\frac{n}{A^{2}}\) be the indices of the squares in \(\{R_{j}\}\) containing at least one node of the Poisson process \(\mathcal{P}.\) Letting \(i_{0}(\omega)=1\) and \(i_{Q+1}(\omega)=\frac{n}{A^{2}}\) we have \(S_{\alpha}(\omega)=\sum_{j=0}^{Q}(i_{j+1}(\omega)-i_{j}(\omega))^{\alpha}.\) Suppose \(\omega^{\prime}=\omega\cup\{x\}\) is obtained by adding a single extra node at \(x\in R_{j_{0}}\) for some \(1\leq j_{0}\leq\frac{n}{A^{2}}.\) If \(j_{0}\in\{i_{k}(\omega)\}_{0\leq k\leq Q+1},\) then \(S_{\alpha}(\omega^{\prime})=S_{\alpha}(\omega).\) Else there exists \(0\leq a\leq Q\) such that \(i_{a}(\omega)<j_{0}<i_{a+1}(\omega)\) and so \[S_{\alpha}(\omega^{\prime})=S_{\alpha}(\omega)+(i_{a+1}(\omega)-j_{0})^{\alpha}+(j_{0}-i_{a}(\omega))^{\alpha}-(i_{a+1}(\omega)-i_{a}(\omega))^{\alpha}.\] If \(\alpha\leq 1\) then using \(a^{\alpha}+b^{\alpha}\geq(a+b)^{\alpha}\) for positive numbers \(a,b\) we get that \(S_{\alpha}(\omega^{\prime})\geq S_{\alpha}(\omega).\) If \(\alpha>1\) then \(a^{\alpha}+b^{\alpha}\leq(a+b)^{\alpha}\) and so \(S_{\alpha}(\omega^{\prime})\leq S_{\alpha}(\omega).\) This monotonicity property together with coupling allows us to upper bound \(S_{\alpha}\) as follows.

Letting \(\delta=\epsilon_{2}\) if \(\alpha\leq 1\) and \(\delta=\epsilon_{1}\) if \(\alpha>1,\) we let \(\mathcal{P}_{\delta}\) be a homogeneous Poisson process of intensity \(\delta n\) on the unit square \(S,\) defined on the probability space \((\Omega_{\delta},\mathcal{F}_{\delta},\mathbb{P}_{\delta}).\) Let \(F_{\delta}\) denote the event that there is at least one node of \(\mathcal{P}_{\delta}\) in the unit square \(S\) and set \(S_{\alpha}^{(\delta)}:=\left(\frac{n}{A^{2}}-1\right)^{\alpha}\) if \(F_{\delta}\) does not occur. If \(F_{\delta}\) occurs, then as before let \(\{i_{j}^{(\delta)}\}_{1\leq j\leq Q_{\delta}}\) be the indices of the squares in \(\{R_{j}\}\) containing at least one node of \(\mathcal{P}_{\delta}.\) Moreover, let \(T_{j+1}^{(\delta)}:=i_{j+1}^{(\delta)}-i_{j}^{(\delta)}\) for \(1\leq j\leq Q_{\delta}-1\) and set \(T_{1}^{(\delta)}:=i_{1}^{(\delta)}-1\) and \(T_{Q_{\delta}+1}^{(\delta)}:=\frac{n}{A^{2}}-i_{Q_{\delta}}^{(\delta)}.\) Defining \(S_{\alpha}^{(\delta)}:=\sum_{j=1}^{Q_{\delta}+1}\left(T_{j}^{(\delta)}\right)^{\alpha}\) in this case, we have for any \(x>0\) that \[\mathbb{P}_{\delta}\left(S_{\alpha}^{(\delta)}<x\right)\leq\mathbb{P}_{0}\left(S_{\alpha}<x\right). \tag{3.23}\] The proof of (3.23) follows from standard coupling arguments and for completeness, we provide a proof in the Appendix. To estimate \(S_{\alpha}^{(\delta)}\) we let \(N^{(\delta)}(R_{i}),1\leq i\leq\frac{n}{A^{2}},\) be the random number of nodes of \(\mathcal{P}_{\delta}\) in the square \(R_{i}.\) The random variables \(\{N^{(\delta)}(R_{i})\}\) are i.i.d. Poisson distributed each with mean \(A^{2}\delta.\) For \(i\geq\frac{n}{A^{2}}+1,\) we define \(N^{(\delta)}(R_{i})\) to be i.i.d. Poisson random variables with mean \(A^{2}\delta,\) that are also independent of \(\{N^{(\delta)}(R_{i})\}_{1\leq i\leq\frac{n}{A^{2}}}.\) Without loss of generality, we associate the probability measure \(\mathbb{P}_{\delta}\) for the random variables \(\{N^{(\delta)}(R_{i})\}_{i\geq\frac{n}{A^{2}}+1}\) as well.
Let \(\tilde{T}_{1}:=\min\{j\geq 1:N^{(\delta)}(R_{j})\geq 1\}\) and for \(j\geq 2,\) let \[\tilde{T}_{j}:=\min\{k\geq\tilde{T}_{j-1}+1:N^{(\delta)}(R_{k})\geq 1\}- \tilde{T}_{j-1}.\] The random variables \(\{\tilde{T}_{i}\}\) are nearly the same as \(\{T_{i}^{(\delta)}\}\) in the following sense: Suppose the event \(F_{\delta}\) occurs so that there is at least one node of \(\mathcal{P}_{\delta}\) in the unit square. This means that \(1\leq Q_{\delta}\leq\frac{n}{A^{2}}\) and so \(T_{1}^{(\delta)}=i_{1}-1=\tilde{T}_{1}-1,T_{j}^{(\delta)}=\tilde{T}_{j}\) for \(2\leq j\leq Q_{\delta}\) and \(T_{Q_{\delta}+1}^{(\delta)}\leq\tilde{T}_{Q_{\delta}+1}.\) Consequently \[S_{\alpha}^{(\delta)}\mathbf{1}(F_{\delta})\leq\sum_{i=1}^{Q_{\delta}+1} \tilde{T}_{i}^{\alpha}\mathbf{1}(F_{\delta})\leq\sum_{i=1}^{\frac{n}{A^{2}}+1 }\tilde{T}_{i}^{\alpha}\mathbf{1}(F_{\delta})\leq\sum_{i=1}^{\frac{n}{A^{2}}+ 1}\tilde{T}_{i}^{\alpha}, \tag{3.24}\] since \(Q_{\delta}\leq\frac{n}{A^{2}}.\) From (3.24), it suffices to estimate the sum on the right side to find an upper bound for \(S_{\alpha}^{(\delta)}.\) The advantage of (3.24) is that \(\{\tilde{T}_{i}\}\) are i.i.d. geometric random variables with success parameter \(p=1-e^{-A^{2}\delta}\) and so all moments of \(\tilde{T}_{i}^{\alpha}\) exist. Letting \(\beta_{i}=\left(\tilde{T}_{i}^{\alpha}-\mathbb{E}_{\delta}\tilde{T}_{i}^{ \alpha}\right)\) and \(\beta_{tot}=\sum_{i=1}^{\frac{n}{A^{2}}+1}\beta_{i}\) we obtain for an even integer constant \(m\) that \[\mathbb{E}_{\delta}\beta_{tot}^{m}=\mathbb{E}_{\delta}\sum_{(i_{1},\ldots,i_{m}) }\beta_{i_{1}}\ldots\beta_{i_{m}}. \tag{3.25}\] For a tuple \((i_{1},\ldots,i_{m})\) let \(\{j_{1},\ldots,j_{w}\}\) be the distinct integers in \(\{i_{1},\ldots,i_{m}\}\) with corresponding multiplicities \(l_{1},\ldots,l_{w}\) so that \[\mathbb{E}_{\delta}\beta_{i_{1}}\ldots\beta_{i_{m}}=\mathbb{E}_{\delta}\beta _{j_{1}}^{l_{1}}\ldots\beta_{j_{w}}^{l_{w}}=\prod_{k=1}^{w}\mathbb{E}_{\delta }\beta_{j_{k}}^{l_{k}}.\] If \(l_{k}=1\) for some \(1\leq k\leq w,\) then \(\mathbb{E}_{\delta}\beta_{i_{1}}\ldots\beta_{i_{m}}=0\) and so for any non zero term in the summation in (3.25), there are at most \(\frac{m}{2}\) distinct terms in \(\{i_{1},\ldots,i_{m}\}.\) This implies that \[\mathbb{E}_{\delta}\beta_{tot}^{m}\leq D(m)\binom{n}{m/2}\leq D(m)n^{m/2}\] for some constant \(D(m)>0.\) For \(\epsilon>0\) we therefore get from Chebychev's inequality that \[\mathbb{P}_{\delta}\left(|\beta_{tot}|>\epsilon\left(\frac{n}{A^{2}}+1\right) \mathbb{E}_{\delta}\tilde{T}_{1}^{\alpha}\right)\leq D_{1}\frac{\mathbb{E}_{ \delta}(\beta_{tot}^{m})}{n^{m}\epsilon^{m}}\leq\frac{D_{2}}{n^{m/2}\epsilon^{ m}}\] for some constants \(D_{1},D_{2}>0.\) Setting \(\epsilon=\frac{1}{n^{1/16}}\) and using \(\epsilon\left(\frac{n}{A^{2}}+1\right)\leq\left(1+\frac{1}{n^{1/16}}\right) \frac{n}{A^{2}}\) for all \(n\) large, we then get \[\mathbb{P}_{\delta}\left(\sum_{i=1}^{\frac{n}{A^{2}}+1}\tilde{T}_{i}^{\alpha} \leq\left(1+\frac{1}{n^{1/16}}\right)\frac{n}{A^{2}}\mathbb{E}\tilde{T}_{1}^{ \alpha}\right)\geq 1-\frac{D_{2}}{n^{7m/16}}. 
\tag{3.26}\] From the upper bound for \(S_{\alpha}^{(\delta)}\) in (3.24) and the fact that there is at least one node of \(\mathcal{P}_{\delta}\) in the unit square \(S\) with probability \(1-e^{-\delta n}\) we further get \[\mathbb{P}_{\delta}\left(S_{\alpha}^{(\delta)}\leq\left(1+\frac{1}{n^{1/16}}\right)\frac{n}{A^{2}}\mathbb{E}\tilde{T}_{1}^{\alpha}\right)\geq 1-\frac{D_{2}}{n^{7m/16}}-e^{-\delta n}\geq 1-\frac{2D_{2}}{n^{7m/16}}\] for all \(n\) large. Using the coupling relation (3.23) we finally get (3.18) proving Lemma 3.

## 4 Variance upper bound for the MST

In this section, we study variance upper bound estimates for MSTs with location dependent edge weights. Recalling the distribution parameters \(\epsilon_{1},\epsilon_{2}\) (see (1.1)) and the edge weight distribution parameters \(c_{1},c_{2}\) (see (1.2)), we have the following result regarding the variance of \(MST_{n}\) as defined in (1.3).

**Theorem 4**.: _There is a constant \(D_{1}=D_{1}(\alpha,c_{1},c_{2},\epsilon_{1},\epsilon_{2})>0\) such that the variance \(var(MST_{n})\leq D_{1}n^{1-\alpha}\) for all \(n\) large._

As before, we have that the variance upper bound in the location dependent case is of the same order as that of the location independent case (Kesten and Lee (1996)).

_Remarks_: We derive the variance estimate in Theorem 4 using the standard procedure (see Steele (1988), Kesten and Lee (1996)) of first obtaining one node difference estimates and then using the martingale difference method. Because of the large degree property described in Proposition 1, we use a slightly different method in obtaining one node difference estimates in the location dependent case. We explain this in more detail in the paragraph following Lemma 5 below.

### One node difference estimates

As in (1.3), let \(MST_{n+1}=W(\mathcal{T}_{n+1})\) be the weight of the MST \(\mathcal{T}_{n+1}\) formed by the nodes \(\{X_{k}\}_{1\leq k\leq n+1}\) and for \(1\leq j\leq n+1\), let \(MST_{n}(j)=W(\mathcal{T}_{n}(j))\) be the weight of the MST \(\mathcal{T}_{n}(j)\) formed by the nodes \(\{X_{k}\}_{1\leq k\neq j\leq n+1}.\) We are interested in estimates for \(|MST_{n+1}-MST_{n}(j)|,\) the change in the weight of the MST upon adding or removing a single node. Consider the MST \(\mathcal{T}_{n}(j)\) with vertex set \(\{X_{i}\}_{1\leq i\neq j\leq n+1}\) and suppose \(X_{i_{0}}\), \(1\leq i_{0}\neq j\leq n+1\) is the node closest to \(X_{j}\) in terms of the Euclidean distance. The union \(\mathcal{T}_{n}(j)\cup\{(X_{j},X_{i_{0}})\}\) is a spanning tree containing all the nodes \(\{X_{k}\}_{1\leq k\leq n+1}\) and has weight \(MST_{n}(j)+h^{\alpha}(X_{j},X_{i_{0}}).\) Therefore \(MST_{n+1}\leq MST_{n}(j)+h^{\alpha}(X_{j},X_{i_{0}})\) and using the bounds for the weight function \(h(.)\) in (1.2), we have \[h(X_{j},X_{i_{0}})\leq c_{2}d(X_{j},X_{i_{0}})=c_{2}d\left(X_{j},\{X_{k}\}_{1\leq k\neq j\leq n+1}\right),\] where \(d(X_{j},\{X_{k}\}_{1\leq k\neq j\leq n+1})\) denotes the minimum distance between \(X_{j}\) and the rest of the nodes. Thus \[MST_{n+1}\leq MST_{n}(j)+c_{2}^{\alpha}d^{\alpha}\left(X_{j},\{X_{k}\}_{1\leq k\neq j\leq n+1}\right).
\tag{4.1}\] To get an estimate in the reverse direction, we let \(d_{j}\) be the degree of the node \(X_{j}\) in the MST \({\cal T}_{n+1}\) and let \({\cal N}(X_{j},{\cal T}_{n+1})=\{v_{1},\ldots,v_{d_{j}}\}\) be the set of neighbours of \(X_{j}\) in \({\cal T}_{n+1}.\) We remove the node \(X_{j}\) and add the edges \((v_{i},v_{i+1}),1\leq i\leq d_{j}-1\) to get a spanning tree containing all the nodes \(\{X_{k}\}_{1\leq k\neq j\leq n+1}.\) Again using \(h(v_{i},v_{i+1})\leq c_{2}d(v_{i},v_{i+1})\) from (1.2), we get \[MST_{n}(j)\leq MST_{n+1}+\sum_{i=1}^{d_{j}-1}h^{\alpha}(v_{i},v_{i+1})\leq MST_{n+1}+c_{2}^{\alpha}\sum_{i=1}^{d_{j}-1}d^{\alpha}(v_{i},v_{i+1}). \tag{4.2}\] For \(1\leq i\leq d_{j}-1,\) we have by the triangle inequality that \(d(v_{i},v_{i+1})\leq d(X_{j},v_{i})+d(X_{j},v_{i+1})\) and so using \((a+b)^{\alpha}\leq 2^{\alpha}(a^{\alpha}+b^{\alpha})\) for all \(a,b,\alpha>0\) we get that \(d^{\alpha}(v_{i},v_{i+1})\leq 2^{\alpha}\left(d^{\alpha}(X_{j},v_{i})+d^{\alpha}(X_{j},v_{i+1})\right).\) Using this estimate in (4.2), we get \[MST_{n}(j) \leq MST_{n+1}+(2c_{2})^{\alpha}\sum_{i=1}^{d_{j}}d^{\alpha}(X_{j},v_{i}) \tag{4.3}\] \[= MST_{n+1}+(2c_{2})^{\alpha}\sum_{v\in{\cal N}(X_{j},{\cal T}_{n+1})}d^{\alpha}(X_{j},v).\] Summarizing we get from (4.1) and (4.3) that \[|MST_{n+1}-MST_{n}(j)|\leq f_{1}(X_{j})+f_{2}(X_{j}), \tag{4.4}\] where \[f_{1}(X_{j}):=c_{2}^{\alpha}d^{\alpha}\left(X_{j},\{X_{k}\}_{1\leq k\neq j\leq n+1}\right)\] and \[f_{2}(X_{j}):=(2c_{2})^{\alpha}\sum_{v\in{\cal N}(X_{j},{\cal T}_{n+1})}d^{\alpha}(X_{j},v). \tag{4.5}\] For future use, we prove the following Lemma.

**Lemma 5**.: _There is a constant \(D>0\) such that for all \(n\) large and any \(1\leq j\leq n+1\) we have_ \[\left(\mathbb{E}f_{1}(X_{j})\right)^{2} \leq \mathbb{E}f_{1}^{2}(X_{j})\leq\frac{D}{n^{\alpha}},\] \[\sum_{j=1}^{n+1}\mathbb{E}f_{1}^{2}(X_{j}) = (n+1)\mathbb{E}f_{1}^{2}(X_{1})\leq 2Dn^{1-\alpha}, \tag{4.6}\] \[f_{2}^{2}(X_{j}) \leq D\sum_{v\in{\cal N}(X_{j},{\cal T}_{n+1})}d^{2\alpha}(X_{j},v),\] \[\sum_{j=1}^{n+1}\mathbb{E}f_{2}^{2}(X_{j}) = (n+1)\mathbb{E}f_{2}^{2}(X_{1})\leq 2Dn^{1-\alpha} \tag{4.7}\] _and_ \[\mathbb{E}|MST_{n+1}-MST_{n}|\leq\mathbb{E}f_{1}(X_{n+1})+\mathbb{E}f_{2}(X_{n+1})\leq\left(\frac{D}{\sqrt{n}}\right)^{\alpha}. \tag{4.8}\]

The proof of (4.8) follows from (4.6) and (4.7). Estimate (4.6) is standard and does not depend on the structure of the MST since it requires estimating the minimum distance between \(X_{j}\) and the rest of the nodes. When the edge weights are location independent, estimate (4.7) is true (see Kesten and Lee (1996)) since the maximum degree of any node of the MST is at most 6 (see Aldous and Steele (1992) for example) and so the first estimate in (4.7) follows from the definition of \(f_{2}\) in (4.5) and the inequality \(\left(\sum_{i=1}^{6}a_{i}\right)^{2}\leq 6\sum_{i=1}^{6}a_{i}^{2}.\) The second estimate in (4.7) then follows from the \(\alpha-\)invariance property of the MST (Kesten and Lee (1996)) which states that the MST remains the same whatever the value of the edge weight exponent \(\alpha.\) This invariance property is also a consequence of Kruskal's construction of the minimum spanning tree. In our case where the edge weights are location dependent, we directly estimate the sum \(f_{2}(X_{j})\) by showing that the ratio of lengths of closely spaced edges having a common vertex is bounded above by a constant strictly less than one.
Splitting the plane into sectors then allows us to estimate the sum of the weighted lengths of the edges within each sector and then use the \(\alpha-\)invariance property as described above to obtain (4.7) (see proof of (4.7) below). For completeness we begin with the proof of (4.6).

_Proof of (4.6) in Lemma 5_: Letting \(d_{min}(X_{j})=d\left(X_{j},\{X_{k}\}_{1\leq k\neq j\leq n+1}\right)\) we condition on \(X_{j}=x\) and get from Fubini's theorem that \[\mathbb{E}f_{1}^{2}(X_{j})=c_{2}^{2\alpha}\mathbb{E}d_{min}^{2\alpha}(X_{j})=c_{2}^{2\alpha}\int\mathbb{E}d_{min}^{2\alpha}(x)f(x)dx, \tag{4.9}\] where \[\mathbb{E}d_{min}^{2\alpha}(x)=2\alpha\int y^{2\alpha-1}\mathbb{P}\left(d_{min}(x)>y\right)dy. \tag{4.10}\] The minimum distance from \(x\) to \(\{X_{k}\}_{1\leq k\neq j\leq n+1}\) is at least \(y\) if and only if \(B(x,y)\cap S\) contains no node of \(\{X_{k}\}_{1\leq k\neq j\leq n+1},\) where \(B(x,y)\) is the ball of radius \(y\) centred at \(x\) and we recall that \(S\) is the unit square. The area of \(B(x,y)\cap S\) is at least \(\frac{\pi y^{2}}{4}\) no matter the location of \(x\) and so using \(f\geq\epsilon_{1}\) from (1.1), we get \[\mathbb{P}(d_{min}(x)>y)=\left(1-\int_{B(x,y)\cap S}f(z)dz\right)^{n}\leq\left(1-\frac{\epsilon_{1}\pi y^{2}}{4}\right)^{n}\leq e^{-\frac{n\epsilon_{1}\pi y^{2}}{4}}.\] For \(y\geq 0\) we therefore have that \(\mathbb{P}(\sqrt{n}\cdot d_{min}(x)>y)\leq e^{-C\cdot y^{2}}\) for some constant \(C>0\) not depending on \(x.\) Thus \[\mathbb{E}\left(\sqrt{n}\cdot d_{min}(x)\right)^{2\alpha}\leq\sum_{l\geq 0}\mathbb{P}\left(\sqrt{n}\cdot d_{min}(x)\geq l^{\frac{1}{2\alpha}}\right)\leq\sum_{l\geq 0}e^{-C\cdot l^{\frac{1}{\alpha}}}, \tag{4.11}\] where the final summation in (4.11) does not depend on \(x.\) Substituting (4.11) into (4.9) completes the proof of (4.6).

_Proof of (4.7) in Lemma 5_: To prove the first estimate in (4.7), we let \(K\) be a large integer satisfying \[\left(2-2\cos\left(\frac{2\pi}{K}\right)\right)^{\frac{1}{2}}<\frac{c_{1}}{c_{2}}<1,\] where \(c_{1}\) and \(c_{2}\) are the bounds for the weight function \(h\) as in (1.2). Defining \[g(x):=\left(1+x^{2}-2\cdot x\cdot\cos\left(\frac{2\pi}{K}\right)\right)^{\frac{1}{2}},0\leq x\leq 1\] we have that \(g(0)=1\) and since \(g\) is continuous in \([0,1],\) there exists a positive number \(r_{0}<1\) such that \(g(x)\geq\frac{c_{1}}{c_{2}}\) for \(0<x\leq r_{0}\) and \(g(x)<\frac{c_{1}}{c_{2}}\) for \(r_{0}<x\leq 1.\) Draw \(K\) rays starting from \(X_{j}\) equally spaced at an angle of \(\frac{2\pi}{K}\) apart; i.e., let \(l_{1},\ldots,l_{K}\) be \(K\) rays starting from \(X_{j}\) encountered in the clockwise order such that the angle between \(l_{i}\) and \(l_{i+1}\) is \(\frac{2\pi}{K}.\) Let \(v_{i_{1}},\ldots,v_{i_{w}}\) be the neighbours of \(X_{j}\) present in the \(i^{th}\) sector formed by the rays \(l_{i}\) and \(l_{i+1}\) and suppose that \(d(X_{j},v_{i_{1}})>d(X_{j},v_{i_{2}})>\ldots>d(X_{j},v_{i_{w}}).\) We first show that the edge length ratio \(r:=\frac{d(X_{j},v_{i_{k+1}})}{d(X_{j},v_{i_{k}})}\leq r_{0}\) for \(1\leq k\leq w-1\) where \(r_{0}\) is as in the previous paragraph.
If \(\theta\) denotes the angle between the edges \((X_{j},v_{i_{k}})\) and \((X_{j},v_{i_{k+1}})\), then \(\theta<\frac{2\pi}{K}\) and so \[d^{2}(v_{i_{k}},v_{i_{k+1}}) = d^{2}(X_{j},v_{i_{k}})+d^{2}(X_{j},v_{i_{k+1}})-2d(X_{j},v_{i_{k}})\cdot d(X_{j},v_{i_{k+1}})\cdot\cos(\theta)\] \[= d^{2}(X_{j},v_{i_{k}})(1+r^{2}-2r\cos(\theta))\] \[\leq d^{2}(X_{j},v_{i_{k}})\left(1+r^{2}-2r\cos\left(\frac{2\pi}{K}\right)\right).\] Thus \(d(v_{i_{k}},v_{i_{k+1}})\leq d(X_{j},v_{i_{k}})g(r)\) and if \(r_{0}<r\leq 1\), then using (1.2) we have \[h(v_{i_{k}},v_{i_{k+1}})\leq c_{2}d(v_{i_{k}},v_{i_{k+1}})\leq c_{2}d(X_{j},v_{i_{k}})g(r)<c_{1}d(X_{j},v_{i_{k}})\leq h(X_{j},v_{i_{k}}),\] by our choice of \(r_{0}\) in the first paragraph. This is a contradiction because removing the edge \((X_{j},v_{i_{k}})\) from the tree \(\mathcal{T}_{n+1}\) and adding the edge \((v_{i_{k}},v_{i_{k+1}})\), we get a spanning tree with weight less than \(MST_{n+1}\). Using the ratio property iteratively, we get \(d(X_{j},v_{i_{k}})\leq r_{0}^{k-1}d(X_{j},v_{i_{1}})\) and so \(\sum_{k=1}^{w}d^{\alpha}(X_{j},v_{i_{k}})\leq Cd^{\alpha}(X_{j},v_{i_{1}})\), where \(C=\sum_{k\geq 1}r_{0}^{\alpha(k-1)}\). This estimate holds for each of the \(K\) sectors and so \[\sum_{v\in\mathcal{N}(X_{j},\mathcal{T}_{n+1})}d^{\alpha}(X_{j},v)\leq\sum_{i=1}^{K}\sum_{v}d^{\alpha}(X_{j},v)\leq C\sum_{i=1}^{K}d^{\alpha}(X_{j},u_{i}),\] where the middle summation is over all nodes \(v\) present in sector \(i\) and \(u_{i}\) in the final summation is the node farthest in Euclidean distance from \(X_{j}\) in the \(i^{th}\) sector. Therefore using \((\sum_{i=1}^{K}b_{i})^{2}\leq K\sum_{i=1}^{K}b_{i}^{2}\) for positive numbers \(\{b_{i}\}\) we get \[\left(\sum_{v\in\mathcal{N}(X_{j},\mathcal{T}_{n+1})}d^{\alpha}(X_{j},v)\right)^{2} \leq C^{2}\left(\sum_{i=1}^{K}d^{\alpha}(X_{j},u_{i})\right)^{2}\] \[\leq C^{2}K\sum_{i=1}^{K}d^{2\alpha}(X_{j},u_{i})\] \[\leq C^{2}K\sum_{v\in\mathcal{N}(X_{j},\mathcal{T}_{n+1})}d^{2\alpha}(X_{j},v),\] proving the first estimate in (4.7). Consequently \[\sum_{j=1}^{n+1}\mathbb{E}f_{2}^{2}(X_{j}) \leq C\mathbb{E}\sum_{j=1}^{n+1}\sum_{v\in\mathcal{N}(X_{j},\mathcal{T}_{n+1})}d^{2\alpha}(X_{j},v) \tag{4.12}\] \[\leq \frac{C}{c_{1}^{2\alpha}}\mathbb{E}\sum_{j=1}^{n+1}\sum_{v\in\mathcal{N}(X_{j},\mathcal{T}_{n+1})}h^{2\alpha}(X_{j},v),\] again using (1.2). The double summation in the final term is simply twice the weight of the MST with edge weight exponent \(2\alpha.\) This is because, given the location of the nodes \(\{X_{i}\}_{1\leq i\leq n+1},\) the edge weights \(h(X_{i},X_{j})\) are fixed and so Kruskal's algorithm gives the same MST irrespective of the value of the edge weight exponent \(\alpha\) (Kesten and Lee (1996), Yukich (2000)); i.e., if we denote \(\mathcal{T}_{n+1}(\beta)\) to be the MST for edge weight exponent \(\beta>0\) as in (1.3) so that \[W(\mathcal{T}_{n+1}(\beta))=\sum_{e\in\mathcal{T}_{n+1}(\beta)}h^{\beta}(e)=\min_{\mathcal{T}}W(\mathcal{T})=\min_{\mathcal{T}}\sum_{f\in\mathcal{T}}h^{\beta}(f),\] where the minimum is taken over all spanning trees \(\mathcal{T}\) containing all the \(n+1\) nodes \(\{X_{i}\}_{1\leq i\leq n+1},\) then \(\mathcal{T}_{n+1}(\beta)=\mathcal{T}_{n+1}(1)\) for any \(\beta>0.\) Therefore using (4.12) and the expectation upper bound (3.4) with \(2\alpha\) instead of \(\alpha,\) we also get the second estimate in (4.7).
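The \(\alpha-\)invariance used above is also easy to verify numerically. The sketch below is an illustration only (it assumes the particular weight function \(h(u,v)=d(u,v)\) and uses `scipy`); since \(x\mapsto x^{\beta}\) is strictly increasing and the pairwise distances are almost surely distinct, the edge sets returned for two different exponents coincide.

```python
# Sketch: alpha-invariance of the MST -- the minimizing tree does not depend on
# the edge weight exponent, since x -> x**beta is strictly increasing.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)
dist = squareform(pdist(rng.random((300, 2))))   # pairwise Euclidean distances

def mst_edges(weights):
    """Edge set of the MST of the complete graph with the given weight matrix."""
    tree = minimum_spanning_tree(weights).tocoo()
    return {(min(i, j), max(i, j)) for i, j in zip(tree.row, tree.col)}

print(mst_edges(dist) == mst_edges(dist ** 2.5))   # True almost surely
```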
### Proof of Theorem 4

We use the one node difference estimate (4.4) together with the martingale difference method to obtain a bound for the variance. For \(1\leq j\leq n+1,\) let \(\mathcal{F}_{j}=\sigma\left(\{X_{k}\}_{1\leq k\leq j}\right)\) denote the sigma field generated by the node positions \(\{X_{k}\}_{1\leq k\leq j}.\) Defining the martingale difference \[H_{j}=\mathbb{E}(MST_{n+1}|\mathcal{F}_{j})-\mathbb{E}(MST_{n+1}|\mathcal{F}_{j-1}), \tag{4.13}\] we then have that \(MST_{n+1}-\mathbb{E}MST_{n+1}=\sum_{j=1}^{n+1}H_{j}\) and so by the martingale property \[var(MST_{n+1})=\mathbb{E}\left(\sum_{j=1}^{n+1}H_{j}\right)^{2}=\sum_{j=1}^{n+1}\mathbb{E}H_{j}^{2}. \tag{4.14}\] To evaluate \(\mathbb{E}H_{j}^{2}\) we rewrite the martingale difference \(H_{j}\) in a more convenient form. Letting \(X_{j}^{\prime}\) be an independent copy of \(X_{j}\) which is also independent of \(\{X_{k}\}_{1\leq k\neq j\leq n+1}\) we rewrite \[H_{j}=\mathbb{E}(MST_{n+1}(X_{j})-MST_{n+1}(X_{j}^{\prime})|\mathcal{F}_{j}), \tag{4.15}\] where \(MST_{n+1}(X_{j})\) is the weight of the MST formed by the nodes \(\{X_{i}\}_{1\leq i\leq n+1}\) and \(MST_{n+1}(X_{j}^{\prime})\) is the weight of the MST formed by the nodes \(\{X_{i}\}_{1\leq i\neq j\leq n+1}\cup\{X_{j}^{\prime}\}.\) Using the triangle inequality and the one node difference estimate (4.4), we have that \(|MST_{n+1}(X_{j})-MST_{n+1}(X_{j}^{\prime})|\) is bounded above as \[|MST_{n+1}(X_{j})-MST_{n}(j)|+|MST_{n+1}(X_{j}^{\prime})-MST_{n}(j)|\] \[\leq f_{1}(X_{j})+f_{2}(X_{j})+f_{1}(X_{j}^{\prime})+f_{2}(X_{j}^{\prime}),\] where \(f_{1}\) and \(f_{2}\) are as in (4.4) and we recall that \(MST_{n}(j)\) is the weight of the MST formed by the nodes \(\{X_{k}\}_{1\leq k\neq j\leq n+1}.\) Thus \[|H_{j}| \leq \mathbb{E}(|MST_{n+1}(X_{j})-MST_{n+1}(X_{j}^{\prime})||\mathcal{F}_{j})\] \[\leq \mathbb{E}(f_{1}(X_{j})|\mathcal{F}_{j})+\mathbb{E}(f_{2}(X_{j})|\mathcal{F}_{j})+\mathbb{E}(f_{1}(X_{j}^{\prime})|\mathcal{F}_{j})+\mathbb{E}(f_{2}(X_{j}^{\prime})|\mathcal{F}_{j})\] \[= \mathbb{E}(f_{1}|\mathcal{F}_{j})+\mathbb{E}(f_{2}|\mathcal{F}_{j})+\mathbb{E}(f_{1}|\mathcal{F}_{j-1})+\mathbb{E}(f_{2}|\mathcal{F}_{j-1}).\] Using \((a_{1}+a_{2}+a_{3}+a_{4})^{2}\leq 4(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}+a_{4}^{2}),\) we then get \[H_{j}^{2} \leq 4\left(\left(\mathbb{E}(f_{1}|\mathcal{F}_{j})\right)^{2}+\left(\mathbb{E}(f_{2}|\mathcal{F}_{j})\right)^{2}+\left(\mathbb{E}(f_{1}|\mathcal{F}_{j-1})\right)^{2}+\left(\mathbb{E}(f_{2}|\mathcal{F}_{j-1})\right)^{2}\right)\] \[\leq 4\left(\mathbb{E}(f_{1}^{2}|\mathcal{F}_{j})+\mathbb{E}(f_{2}^{2}|\mathcal{F}_{j})+\mathbb{E}(f_{1}^{2}|\mathcal{F}_{j-1})+\mathbb{E}(f_{2}^{2}|\mathcal{F}_{j-1})\right)\] since \((\mathbb{E}(X|\mathcal{F}))^{2}\leq\mathbb{E}(X^{2}|\mathcal{F}).\) Thus \(\mathbb{E}H_{j}^{2}\leq 8\left(\mathbb{E}f_{1}^{2}(X_{j})+\mathbb{E}f_{2}^{2}(X_{j})\right)\) and plugging this in (4.14), we have \[var(MST_{n+1})=\sum_{j=1}^{n+1}\mathbb{E}H_{j}^{2}\leq 8\left(\sum_{j=1}^{n+1}\mathbb{E}f_{1}^{2}(X_{j})+\sum_{j=1}^{n+1}\mathbb{E}f_{2}^{2}(X_{j})\right),\] which in turn is at most a constant multiple of \(n^{1-\alpha}\) using the estimates in (4.6) and (4.7).

## 5 Variance lower bound for MST

Regarding the variance lower bound for \(MST_{n}\) as defined in (1.3), we have the following result.
**Theorem 6**.: _Suppose there is a square \(S_{0}\subseteq S\) with constant side length \(s_{0}\) such that the weight function \(h(u,v)=d(u,v)\) if either \(u\) or \(v\) is in \(S_{0}.\) There is a constant \(D_{2}=D_{2}(\alpha,c_{1},c_{2},\epsilon_{1},\epsilon_{2})>0\) such that \(var(MST_{n})\geq D_{2}n^{1-\alpha}\) for all \(n\) large._

Combining with Theorem 4, we then find that \(var(MST_{n})\) is of the _order_ of \(n^{1-\alpha}\) even in the location dependent case. As before, the behaviour is analogous to that in the location independent case and in this aspect we refer to Kesten and Lee (1996) who use martingale methods to study central limit theorems for \(MST_{n},\) appropriately scaled and centred. The technical assumption of the weight function being equal to the Euclidean length in a small subsquare within the unit square allows us to bound probabilities of predetermined nice configurations via martingale difference estimates (see proof of Theorem 6 below).

We begin with some preliminary definitions and computations. Suppose that \(s_{0}\) is the side length of the square \(S_{0}.\) As in the proof of the deviation estimates in Section 3, we divide \(S_{0}\) into small \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares \(\{R_{i}\}_{1\leq i\leq\frac{s_{0}^{2}n}{A^{2}}}\) where \(A=A(n)\in\left[1,1+\frac{1}{\log n}\right]\) is chosen such that \(\frac{s_{0}\sqrt{n}}{A}\) is an integer. This is possible by the argument preceding (3.11). For a square \(R_{i}\) we let \(\mathcal{N}_{1}(R_{i})\) be the set of squares in \(\{R_{k}\}\) sharing a corner with \(R_{i}\) and for \(l\geq 2\) we let \(\mathcal{N}_{l}(R_{i})\) be the set of all squares in \(\{R_{k}\}\) sharing a corner with \(\mathcal{N}_{l-1}(R_{i}).\) Thus \(\mathcal{N}_{l}(R_{i})\) contains \((2l+1)^{2}\) squares of \(\{R_{k}\},\) including the square \(R_{i}.\) For an integer \(g\geq 5,\) we depict \(\mathcal{N}_{3g}(R_{i})\) and \(\mathcal{N}_{15g}(R_{i})\) in Figure 5\((a)\) as the \((6g+1)\times(6g+1)\) square and \((30g+1)\times(30g+1)\) square, respectively. In all the figures in this subsection, the dimensions are to be multiplied by the scaling factor \(\frac{A}{\sqrt{n}}.\) The central square labelled \(E\) is the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) square \(R_{i}\) and the twelve \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares numbered \(1,2,\ldots,12\) are all spaced \((2g-1)\frac{A}{\sqrt{n}}\) apart.

**Definition 1**.: _For \(1\leq j\leq n+1\) we say that the square \(R_{i}\) is a \((g,j)-\)good square if each of the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares numbered one to twelve in Figure 5\((a)\) contains exactly one node of \(\{X_{k}\}_{1\leq k\neq j\leq n+1}\) and the rest of the big square \(ABCD\) (containing all the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares of \(\mathcal{N}_{15g}(R_{i})\)) contains no node of \(\{X_{k}\}_{1\leq k\neq j\leq n+1}.\)_

The advantage of defining the \((g,j)-\)good square as above is that we can determine exactly the change in the MST length when the node \(X_{j}\) is "added" to the \((g,j)-\)good square \(R_{i}.\) Indeed, suppose that \(R_{i}\) is a \((g,j)-\)good square and the node \(X_{j}\in R_{i}.\) For \(1\leq l\leq 12,\) let \(v_{l}\) be the node present in the \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) square numbered \(l\) in Figure 5 and let \(v_{min}\in\{v_{l}\}_{1\leq l\leq 12}\) be the node closest to \(X_{j}\) in terms of Euclidean distance.
As before we denote \(\mathcal{T}_{n}(j)\) and \(\mathcal{T}_{n+1}\) to be the MSTs formed by the nodes \(\{X_{i}\}_{1\leq i\neq j\leq n+1}\) and \(\{X_{i}\}_{1\leq i\leq n+1},\) respectively, with edge weight function \(h(.,.)\) and edge weight exponent \(\alpha>0.\) Letting \(MST_{n}(j)\) denote the weight of \(\mathcal{T}_{n}(j),\) we have the following result.

**Lemma 7**.: _If the event \(\{R_{i}\) is \((g,j)-good\}\cap\{X_{j}\in R_{i}\}\) occurs then_ \[\mathcal{T}_{n+1}=\mathcal{T}_{n}(j)\cup\{(X_{j},v_{min})\}=:\mathcal{T}_{new} \tag{5.1}\] _and_ \[(3g-1)^{\alpha}\cdot\left(\frac{A}{\sqrt{n}}\right)^{\alpha}\leq MST_{n+1}-MST_{n}(j)\leq(5g-1)^{\alpha}\cdot\left(\frac{A}{\sqrt{n}}\right)^{\alpha}. \tag{5.2}\]

In words, the new MST \(\mathcal{T}_{n+1}\) is simply obtained by adding the edge \((X_{j},v_{min})\) to the old MST \(\mathcal{T}_{n}(j).\) Estimate (5.2) obtains explicit bounds on the difference of the MST weight upon adding or removing the node \(X_{j}.\) This allows us to use martingale difference estimates and obtain a lower bound on the variance in the proof of Theorem 6 below.

Figure 5: \((a)\) The square \(E\) is a \(g-\)good square. \((b)\) The edge \(e\) has longer Euclidean length than the edge \((v,v_{3}).\) All dimensions are to be multiplied by \(\frac{A}{\sqrt{n}}\) to get the actual length.

Before we do so, we state the following companion Lemma that shows that the event mentioned in the statement of Lemma 7 occurs with positive probability. For \(1\leq j\leq n+1\) let \(X_{j}^{\prime}\) be an independent copy of \(X_{j}\) that is also independent of \(\{X_{k}\}_{1\leq k\neq j\leq n+1}\) and let \(F_{j}(g)\) and \(F_{j}^{\prime}(g)\) respectively denote the events that the nodes \(X_{j}\) and \(X_{j}^{\prime}\) belong to a \((g,j)-\)good square in \(\{R_{k}\}\) and let \(F_{j}^{tot}(g)=F_{j}(2g)\cap F_{j}^{\prime}(g).\)

**Lemma 8**.: _For every \(g\geq 5,\) there is a constant \(\theta=\theta(g,s_{0},\epsilon_{1},\epsilon_{2})>0\) such that_ \[\min_{1\leq j\leq n+1}\mathbb{P}(F_{j}^{tot}(g))\geq\theta.\]

Assuming Lemmas 7 and 8, we prove Theorem 6 below. We then prove Lemmas 7 and 8 separately.

_Proof of Theorem 6 (assuming Lemmas 7 and 8)_: Letting \(F_{j}^{tot}(g)\) be the event defined prior to Lemma 8, we get from Lemma 7 that if \(F_{j}^{tot}(g)\) occurs, then \[MST_{n+1}(X_{j}) \geq MST_{n}(j)+\left(\frac{A}{\sqrt{n}}\right)^{\alpha}(6g-1)^{\alpha}\] \[\geq MST_{n+1}(X_{j}^{\prime})+\left(\frac{A}{\sqrt{n}}\right)^{\alpha}((6g-1)^{\alpha}-(5g-1)^{\alpha}).\] Letting \(\Delta_{\alpha}:=(6g-1)^{\alpha}-(5g-1)^{\alpha}>0\) and recalling the martingale difference \(H_{j}\) in (4.15), we get \[\mathbb{E}|H_{j}|\geq\mathbb{E}|H_{j}|\mathbf{1}(F_{j}^{tot}(g))=\mathbb{E}H_{j}\mathbf{1}(F_{j}^{tot}(g))\geq\Delta_{\alpha}n^{-\frac{\alpha}{2}}\mathbb{P}(F_{j}^{tot}(g))\geq c\Delta_{\alpha}n^{-\frac{\alpha}{2}},\] where \(c>0\) is the constant \(\theta\) in Lemma 8. Consequently from (4.14), we get \[var(MST_{n+1})=\sum_{j=1}^{n+1}\mathbb{E}H_{j}^{2}\geq\sum_{j=1}^{n+1}\left(\mathbb{E}|H_{j}|\right)^{2}\geq c^{2}\Delta_{\alpha}^{2}n^{1-\alpha},\] proving the desired lower bound. In the rest of this section, we prove Lemmas 7 and 8 starting with the former.

### Proof of Lemma 7

To prove Lemma 7, we denote the edges in \(\{(v_{i},v_{i+1})\}_{1\leq i\leq 11}\cup\{(v_{12},v_{1})\}\) to be _short edges_ and collect the following properties, assuming that the event defined in Lemma 7 holds.
\((p1)\) The length of any short edge is at most \((2g+5)\cdot\frac{A}{\sqrt{n}}.\) The subgraph \(\mathcal{T}_{loc}\) of \(\mathcal{T}_{n}(j)\) induced by the vertices \(\{v_{i}\}_{1\leq i\leq 12}\) is a tree consisting of exactly eleven short edges.

\((p2)\) Let \(v\) be any point outside the square \(ABCD\) as in Figure 5\((b)\) so that the perpendicular \(vZ\) from \(v\) to the line containing \(XY\) crosses the line segment \(XY.\) The Euclidean distance \(d(X_{j},v)\geq 15g\cdot\frac{A}{\sqrt{n}}\) and \(d(X_{j},v)>d(v,v_{3}),\) strictly.

_Proof of \((p1)\)_: To prove that \(\mathcal{T}_{loc}\subset\mathcal{T}_{n}(j)\) is a tree we assume otherwise and suppose for example that nodes \(v_{4}\) and \(v_{10}\) are joined by a path \(P_{4,10}\) in \(\mathcal{T}_{n}(j)\) containing at least one vertex not in \(\{v_{i}\}_{1\leq i\leq 12}\) as in Figure 5\((a).\) This means that \(P_{4,10}\) contains at least two "long" edges \(f_{1},f_{2}\) containing endvertices outside \(\mathcal{N}_{15g}(R_{i}).\) The short edges and the edges \(\{f_{1},f_{2}\}\) all have an endvertex in the square \(S_{0}\) and so the weights of these edges are simply their Euclidean lengths raised to the power \(\alpha.\) The length of the edge \(f_{1}\) is at least \(12g\cdot\frac{A}{\sqrt{n}},\) the width of the annulus between the big squares (see Figure 5\((a)\)), but the length of the edge \((v_{4},v_{10})\) is at most \((6g+1)\sqrt{2}\cdot\frac{A}{\sqrt{n}}.\) Since \(g\geq 2\) we have \((6g+1)\sqrt{2}<10g\) and so removing the edge \(f_{1}\) and adding the edge \((v_{4},v_{10}),\) we get a spanning tree formed by the nodes \(\{X_{i}\}_{1\leq i\neq j\leq n+1}\) with weight strictly less than that of \(\mathcal{T}_{n}(j),\) a contradiction. This proves that \(\mathcal{T}_{loc}\) is a tree and the edge set of \(\mathcal{T}_{loc}\) must contain only short edges, since any other edge with both endvertices in \(\{v_{i}\}_{1\leq i\leq 12}\) has length strictly larger than the longest short edge.

_Proof of \((p2)\)_: From Figure 5\((b),\) we have \(d(X_{j},v)\geq 15g\cdot\frac{A}{\sqrt{n}}.\) We show below that the angle \(\theta\) between the edges \((X_{j},v)\) and \((v,v_{3})\) is less than \(60\) degrees and so the Euclidean length \(d(v,v_{3})<d(X_{j},v),\) strictly. To estimate \(\theta\) we let \(d(v,Z)\) be the length of the perpendicular segment \(vZ\) so that the area of the triangle formed by the vertices \(v,X\) and \(Y\) is \[\frac{1}{2}\cdot d(v,Z)\cdot d(X,Y)=\frac{1}{2}d(v,X)\cdot d(v,Y)\cdot\sin(\theta_{0}),\] where \(\theta_{0}>\theta\) is the angle between the edges \((v,X)\) and \((v,Y).\) Thus \(\sin(\theta_{0})=\frac{d(v,Z)\cdot d(X,Y)}{d(v,X)\cdot d(v,Y)}\)
and using \(\min(d(v,X),d(v,Y))\geq d(v,Z)\) we further get \(\sin(\theta_{0})\leq\frac{d(X,Y)}{d(v,Z)}.\) But \(d(v,Z)\geq 12g\cdot\frac{A}{\sqrt{n}}\) and from Figure 5\((a),\) we have \(d(X,Y)\leq(2g+1)\cdot\frac{A}{\sqrt{n}}\) and so \(\sin(\theta_{0})\leq\frac{2g+1}{12g}\leq\frac{1}{4},\) since \(g\geq 1.\)

_Proof of (5.1) in Lemma 7_: To prove (5.1), we first identify for every edge \(e=(u,v)\notin{\cal T}_{new},\) the unique path \(P(e)\subseteq{\cal T}_{new}\) with endvertices \(u,v.\) By Lemma 1 of Kesten and Lee (1996), we then have that \({\cal T}_{new}\) is the MST of the nodes \(\{X_{i}\}_{1\leq i\leq n+1}\) if and only if for every edge \(e\notin{\cal T}_{new}\) the following holds: \[\mbox{for every edge $f\in P(e)$, the weight $h(f)<h(e)$} \tag{5.3}\] where we recall that \(h(e)=h(x,y)\) is the weight of the edge \(e\) with endvertices \(x\) and \(y.\) The condition (5.3) does not depend on the value of the edge weight exponent \(\alpha\) and as mentioned before, this implies that the MST is the same irrespective of the value of \(\alpha.\) We now prove that (5.3) holds for each edge \(e\notin{\cal T}_{new}.\) For notational convenience, we refer to \({\cal T}_{n}(j)\) as \({\cal T}_{old}.\)

Suppose first that \(e=(u,v)\notin{\cal T}_{new}\) does not contain \(X_{j}\) as an endvertex. There is a unique path \(P(u,v)\subseteq{\cal T}_{new}\) with endvertices \(u\) and \(v\) and since \(X_{j}\) is a leaf of \({\cal T}_{new},\) no edge of \(P(u,v)\) contains \(X_{j}\) as an endvertex. Thus \(P(u,v)\subseteq{\cal T}_{old}\) and so applying (5.3) to the MST \({\cal T}_{old},\) we get that the weight of every edge in \(P(u,v)\) is less than the weight of \(e,\) proving (5.3) in this case.

Suppose now that \(e=(v,X_{j})\) for some vertex \(v\) as in Figure 5\((b).\) The MST \({\cal T}_{old}\) does not contain both the edges \((v,v_{3})\) and \((v,v_{4})\) since \({\cal T}_{old}\) has eleven of the twelve short edges (property \((p1)\)) and so if both \((v,v_{3})\) and \((v,v_{4})\) were to belong to \({\cal T}_{old},\) this would create a cycle. Suppose \((v,v_{3})\notin{\cal T}_{old}\) so that there is a unique path \(P(v,v_{3})\subseteq{\cal T}_{old}\subset{\cal T}_{new}\) satisfying (5.3).
Also let \(P(v_{3},v_{min})\subseteq{\cal T}_{old}\subset{\cal T}_{new}\) be the unique path formed by the short edges with endvertices \(v_{3}\) and \(v_{min}\) so that \(P(v,v_{3})\cup P(v_{3},v_{min})\cup\{(v_{min},X_{j})\}\subseteq{\cal T}_{new}\) contains the unique path \(P(v,X_{j})\) in \({\cal T}_{new}\) with endvertices \(v\) and \(X_{j}.\) Applying (5.3) to the path \(P(v,v_{3})\subseteq{\cal T}_{old}\) we have that every edge in \(P(v,v_{3})\) has weight less than \[h(v,v_{3})=d(v,v_{3})<d(v,X_{j})=h(v,X_{j}) \tag{5.4}\] where the first and the last equalities in (5.4) follow from the fact that \(v_{3},X_{j}\) belong to the square \(S_{0}.\) The middle inequality in (5.4) is true by property \((p2).\) Also, every short edge has Euclidean length at most \((2g+5)\cdot\frac{A}{\sqrt{n}}\) (property \((p1)\)) and \(d(v,X_{j})\geq 15g\cdot\frac{A}{\sqrt{n}}\) (property \((p2)\)) and so the weight of every edge in \(P(v_{3},v_{min})\) is also less than the weight of \((X_{j},v).\) Finally, the length of \((X_{j},v_{min})\) is at most the length of the diagonal of the inner big square in Figure 5\((a)\) and so \(h(X_{j},v_{min})=d(X_{j},v_{min})<(6g+1)\sqrt{2}\cdot\frac{A}{\sqrt{n}}<15g\cdot\frac{A}{\sqrt{n}}\leq d(v,X_{j})=h(v,X_{j})\) by property \((p2).\) Thus the weight of \((X_{j},v_{min})\) is also less than that of \((v,X_{j})\) and so (5.3) is true for \(e=(v,X_{j}).\)

_Proof of (5.2) in Lemma 7_: From (5.1), we get that if the event mentioned in the statement of the Lemma occurs, then the weight \[W({\cal T}_{n+1})=MST_{n+1}=MST_{n}(j)+h^{\alpha}(X_{j},v_{min})=MST_{n}(j)+d^{\alpha}(X_{j},v_{min})\] since \(X_{j}\in S_{0}.\) The vertex \(v_{min}\) is necessarily one of the eight vertices in \(\{v_{i}\}_{1\leq i\leq 12}\setminus\{v_{1},v_{4},v_{7},v_{10}\}.\) By construction, the distance between any node within the square \(R_{i}\) (labelled \(E\) in Figure 5) and any node within the square labelled \(3,\) for example, is at least \((3g-1)\frac{A}{\sqrt{n}}\) and at most \((5g-1)\frac{A}{\sqrt{n}},\) and so (5.2) follows.

### Proof of Lemma 8

For a constant \(\theta>0\) let \(E_{j}(g,\theta)\) be the event that the number of \((g,j)-\)good squares in \(\{R_{k}\}\) is at least \(\theta\cdot n.\) Recalling the node distribution parameters \(\epsilon_{1}\) and \(\epsilon_{2}\) from (1.1), we first show that there are positive constants \(\theta_{1}=\theta_{1}(g,s_{0},\epsilon_{1},\epsilon_{2})\) and \(\theta_{2}=\theta_{2}(g,s_{0},\epsilon_{1},\epsilon_{2})\) such that for any \(1\leq j\leq n+1,\) \[\mathbb{P}\left(E_{j}(g,\theta_{1})\right)\geq 1-e^{-\theta_{2}n}. \tag{5.5}\] We then let \[\theta_{0}:=\min(\theta_{1}(2g,s_{0},\epsilon_{1},\epsilon_{2}),\theta_{1}(g,s_{0},\epsilon_{1},\epsilon_{2})) \tag{5.6}\] and define \[E_{tot}(j):=E_{j}(2g,\theta_{0})\cap F_{j}(2g)\cap E_{j}(g,\theta_{0})\cap F_{j}^{\prime}(g),\] where we recall that \(F_{j}(2g)\) denotes the event that the node \(X_{j}\) belongs to a \((2g,j)-\)good square in \(\{R_{i}\}\) and an analogous definition holds for \(F_{j}^{\prime}(g)\) with the node \(X_{j}^{\prime}.\) We show that there exists a constant \(D=D(g,s_{0},\epsilon_{1},\epsilon_{2})>0\) such that for any \(1\leq j\leq n+1\) \[\mathbb{P}(E_{tot}(j))\geq D, \tag{5.7}\] completing the proof of Lemma 8. We prove (5.5) and (5.7) in that order below.
_Proof of (5.5)_: We use Poissonization and let \({\cal P}\) be a Poisson process of intensity \(nf(.)\) in the unit square, with corresponding probability measure \({\mathbb{P}}_{0}.\) Analogous to the definition of \((g,j)-\)good square, we define \(R_{i}\) to be \(g-\)good square if the twelve \(\frac{A}{\sqrt{n}}\times\frac{A}{\sqrt{n}}\) squares as in Figure 5\((a)\) each contain exactly one node of \({\cal P}\) and the rest of big square of size \(\frac{(30g+1)A}{\sqrt{n}}\times\frac{(30g+1)A}{\sqrt{n}}\) is empty. The number of nodes \(N(R_{i})\) of \({\cal P}\) in the square \(R_{i}\) is Poisson distributed with mean \(n\int_{R_{i}}f(x)dx\in[\epsilon_{1}A^{2},\epsilon_{2}A^{2}]\) (see the bounds for \(f(.)\) in (1.1)) and so \({\mathbb{P}}_{0}(R_{i}\mbox{ is }g-\mbox{good})\geq p_{0}\) for some constant \(p_{0}=p_{0}(\epsilon_{1},\epsilon_{2},g)>0\) not depending on \(i.\) If \(R_{i}\) and \(R_{k}\) are two squares such that the corresponding neighbourhoods \({\cal N}_{15g}(R_{i})\cap{\cal N}_{15g}(R_{k})=\emptyset,\) then the events \(\{R_{i}\mbox{ is }g-\mbox{good}\}\) and \(\{R_{k}\mbox{ is }g-\mbox{good}\}\) are independent since the Poisson process is independent on disjoint sets. We therefore pick a maximal set of squares \(\{R_{l}\}_{l\in{\cal Q}}\) whose \(15g-\)neighbourhoods are empty. Since the area of the square \(S_{0}\) is \(s_{0}^{2},\) a constant and \({\cal N}_{15}(R_{i})\) contains \((30g+1)^{2}\) squares in \(\{R_{l}\},\) each of area \(\frac{A^{2}}{n},\) we have that \(\#{\cal Q}\geq\frac{1}{2}\cdot\frac{s_{0}^{2}n}{A^{2}(30g+1)^{2}}\geq\frac{s_{ 0}^{2}n}{8(30g+1)^{2}}\) using \(A\leq 1+\frac{1}{\log n}\leq 2.\) If \(N_{good}:=\sum_{i\in{\cal Q}}{\bf 1}(R_{i}\mbox{ is }g-\mbox{good})\) denotes the number of \(g-\)good squares in the collection \({\cal Q},\) then \(N_{good}\) is a sum of independent Bernoulli random variables, each with mean at least \(p_{0}.\) Therefore \({\mathbb{E}}_{0}(N_{good})\geq\frac{s_{0}^{2}p_{0}n}{8(30g+1)^{2}}=:2\theta_{1}n\) and using the standard deviation estimate (7.2) (see Appendix) with \(m=\#{\cal Q},\mu_{1}=p_{0}\) and \(\epsilon=\frac{1}{2},\) we get \[{\mathbb{P}}_{0}\left(N_{good}\geq\theta_{1}n\right)\geq 1-e^{-2\theta_{2}n}\] for some positive constant \(\theta_{2}=\theta_{2}(g,s_{0},\epsilon_{1},\epsilon_{2}).\) Using the dePoissonization formula (3.14), there exists a constant \(D>0\) not depending on \(j\) such that \[{\mathbb{P}}(E_{j}(g,\theta_{1}))\geq 1-D\sqrt{n}\cdot e^{-2\theta_{2}n}\geq 1-e^ {-\theta_{2}n}\] for all \(n\) large. _Proof of (5.7)_: We let \({\cal S}_{good}(2g)\) and \({\cal S}_{good}(g)\) respectively denote the random set of all \((2g,j)-\)good squares and the random set of all \((g,j)-\)good squares in \(\{R_{i}\}.\) If \(E_{j}(2g,\theta_{0})\cap E_{j}(g,\theta_{0})\) occurs where \(\theta_{0}\) is as defined prior to (5.7), then \(\#{\cal S}_{good}(2g)\geq\theta_{0}\cdot n\) and \(\#{\cal S}_{good}(g)\geq\theta_{0}\cdot n.\) The event \(E_{j}(2g,\theta_{0})\cap\) \(E_{j}(g,\theta_{0})\) is independent of \(X_{j}\) and \(X_{j}^{\prime}\) and so \[\mathbb{P}(E_{tot}(j)) = \sum_{\mathcal{S}_{1}:\#\mathcal{S}_{1}\geq\theta_{0}n}\sum_{ \mathcal{S}_{2}:\#\mathcal{S}_{2}\geq\theta_{0}n}\mathbb{P}\left(\{X_{j}\in \mathcal{S}_{1}\}\cap\{X_{j}^{\prime}\in\mathcal{S}_{2}\}\right. 
\tag{5.8}\] \[\left.\cap\{\mathcal{S}_{good}(2g)=\mathcal{S}_{1}\}\cap\{ \mathcal{S}_{good}(g)=\mathcal{S}_{2}\}\right)\] \[= \sum_{\mathcal{S}_{1}:\#\mathcal{S}_{1}\geq\theta_{0}n}\sum_{ \mathcal{S}_{2}:\#\mathcal{S}_{2}\geq\theta_{0}n}\mathbb{P}(X_{j}\in\mathcal{ S}_{1})\mathbb{P}(X_{j}^{\prime}\in\mathcal{S}_{2})\] \[\qquad\mathbb{P}\left(\{\mathcal{S}_{good}(2g)=\mathcal{S}_{1}\} \cap\{\mathcal{S}_{good}(g)=\mathcal{S}_{2}\}\right).\] For any collection \(\mathcal{S}_{1}\) containing \(\mathcal{S}_{1}\geq\theta_{0}n\) squares from \(\{R_{i}\}\), we have that \[\mathbb{P}(X_{j}\in\mathcal{S}_{1})\geq\int_{\mathcal{S}_{1}}f(x)dx\geq \epsilon_{1}\cdot\frac{A^{2}}{n}\cdot\theta_{0}n\geq\epsilon_{1}\theta_{0},\] since \(A\geq 1.\) Similarly \(\mathbb{P}(X_{j}^{\prime}\in\mathcal{S}_{2})\geq\epsilon_{1}\theta_{0}\) and so from (5.8) we get that \[\mathbb{P}(E_{tot}(j)) \geq (\epsilon_{1}\theta_{0})^{2}\sum_{\mathcal{S}_{1}:\#\mathcal{S}_ {1}\geq\theta_{0}n}\sum_{\mathcal{S}_{2}:\#\mathcal{S}_{2}\geq\theta_{0}n} \mathbb{P}\left(\{\mathcal{S}_{good}(2g)=\mathcal{S}_{1}\}\cap\{\mathcal{S}_{ good}(g)=\mathcal{S}_{2}\}\right)\] \[= (\epsilon_{1}\theta_{0})^{2}\mathbb{P}\left(E_{j}(2g,\theta_{0}) \cap E_{j}(g,\theta_{0})\right)\] \[\geq (\epsilon_{1}\theta_{0})^{2}(1-e^{-n\cdot\theta_{2}(2g,s_{0}, \epsilon_{1},\epsilon_{2})}-e^{-n\cdot\theta_{2}(g,s_{0},\epsilon_{1}, \epsilon_{2})})\] using (5.5) and the definition of \(\theta_{0}\) in (5.6). ## 6 Convergence properties for MST In this section, we study convergence properties for the MST weight \(MST_{n}\) as defined in (1.3), appropriately scaled and centred. The following is the main result of this section. **Theorem 9**.: _For every \(\alpha>0\) we have that_ \[\frac{1}{n^{1-\frac{\alpha}{2}}}(MST_{n}-\mathbb{E}MST_{n})\longrightarrow 0\text{ a.s.}\] _as \(n\rightarrow\infty.\)_ The proof is standard and follows from subsequence arguments (Steele (1988)). For completeness we provide a proof below. _Proof of Theorem 9_: We prove the almost sure convergence via a subsequence argument using the variance upper bound for \(MST_{n}\) obtained in Theorem 4. Indeed, from Theorem 4 we have that \[var\left(\frac{MST_{n}}{n^{1-\frac{\alpha}{2}}}\right)\leq\frac{C}{n}\] and so an application of the Borel-Cantelli Lemma gives us that \[\frac{1}{n^{2-\alpha}}(MST_{n^{2}}-\mathbb{E}MST_{n^{2}})\longrightarrow 0\text{ a.s.}\] as \(n\rightarrow\infty.\) To prove convergence along the subsequence \(a_{n}=n,\) we let \[D_{n}:=\max_{n^{2}\leq k<(n+1)^{2}}|MST_{k}-MST_{n^{2}}| \tag{6.1}\] and show that \(\mathbb{E}D_{n}^{2}\leq Cn^{2-2\alpha}\) for some constant \(C>0.\) This would then imply that \(\left(\frac{\mathbb{E}D_{n}}{n^{2-\alpha}}\right)^{2}\leq\frac{\mathbb{E}D_{n }^{2}}{n^{4-2\alpha}}\leq\frac{C}{n^{2}}\longrightarrow 0\) as \(n\rightarrow\infty\) and we also get from Borel-Cantelli Lemma that \(\frac{D_{n}}{n^{2-\alpha}}\longrightarrow 0\) a.s. as \(n\rightarrow\infty.\) For \(n^{2}\leq k<(n+1)^{2}\) we then write \[\frac{|MST_{k}-\mathbb{E}MST_{k}|}{k^{1-\frac{\alpha}{2}}} \leq \frac{|MST_{k}-MST_{n^{2}}|}{k^{1-\frac{\alpha}{2}}}+\frac{ \mathbb{E}|MST_{k}-MST_{n^{2}}|}{k^{1-\frac{\alpha}{2}}}\] \[\leq \frac{D_{n}}{k^{1-\frac{\alpha}{2}}}+\frac{\mathbb{E}D_{n}}{k^{1- \frac{\alpha}{2}}}\] \[\leq \frac{D_{n}}{n^{2-\alpha}}+\frac{\mathbb{E}D_{n}}{n^{2-\alpha}}\] and get that \(\frac{MST_{k}-\mathbb{E}MST_{k}}{k^{1-\frac{\alpha}{2}}}\longrightarrow 0\) a.s. 
as \(n\rightarrow\infty.\) To estimate \(D_{n}\) we use the one node difference estimate (4.4) to get that \(|MST_{k+1}-MST_{k}|\leq f_{1,k}+f_{2,k}\) where \(f_{1,k}\) and \(f_{2,k}\) are such that \[\left(\mathbb{E}f_{1,k}\right)^{2}\leq\mathbb{E}f_{1,k}^{2}\leq\frac{C}{k^{ \alpha}}\text{ and }\left(\mathbb{E}f_{2,k}\right)^{2}\leq\mathbb{E}f_{2,k}^{2}\leq\frac{C}{k^{ \alpha}} \tag{6.2}\] for some constant \(C>0\) (see (4.6) and (4.7)). Thus telescoping we get \[|MST_{k}-MST_{n^{2}}|\leq\sum_{l=n^{2}}^{k}f_{1,l}+f_{2,l}\] and so \[D_{n}=\max_{n^{2}\leq k<(n+1)^{2}}|MST_{k}-MST_{n^{2}}|\leq\sum_{l=n^{2}}^{(n+1)^{ 2}}f_{1,l}+f_{2,l}.\] Using \((\sum_{i=1}^{t}a_{i})^{2}\leq t\sum_{i=1}^{t}a_{i}^{2}\) we get that \(\mathbb{E}D_{n}^{2}\) is bounded above by \[((n+1)^{2}-n^{2})\sum_{l=n^{2}}^{(n+1)^{2}}\mathbb{E}(f_{1,l}+f_{2,l})^{2}\leq 2 ((n+1)^{2}-n^{2})\sum_{l=n^{2}}^{(n+1)^{2}}(\mathbb{E}f_{1,l}^{2}+\mathbb{E}f_{ 2,l}^{2})\] and plugging the estimates from (6.2) we finally get \[\mathbb{E}D_{n}^{2}\leq 2((n+1)^{2}-n^{2})\sum_{l=n^{2}}^{(n+1)^{2}}\frac{2C}{l ^{\alpha}}\leq 2((n+1)^{2}-n^{2})^{2}\frac{2C}{n^{2\alpha}}\leq 8Cn^{2-2\alpha}, \tag{6.3}\] proving the desired estimate for \(\mathbb{E}D_{n}^{2}\). ## 7 Uniform MSTs In this Section, we assume that the nodes \(\{X_{i}\}_{1\leq i\leq n}\) are uniformly distributed in the unit square and obtain bounds on the asymptotic values of the expected weight, appropriately scaled and centred. We assume that the positive edge weight function \(h:\mathbb{R}^{2}\times\mathbb{R}^{2}\rightarrow\mathbb{R}\) satisfies (1.2) along with the following two properties: \((b1)\) For every \(a>0\) we have \[h(au,av)=a\cdot h(u,v)\text{ for all }u,v\in\mathbb{R}^{2}\] \((b2)\) There exists a constant \(h_{0}>0\) such that for all \(b\in\mathbb{R}^{2}\) we have \[h(b+u,b+v)\leq h_{0}\cdot h(u,v). \tag{7.1}\] For example, recalling that \(d(u,v)\) denotes the Euclidean distance between \(u\) and \(v\), we have that the function \[h(u,v)=d(u,v)+\frac{1}{2}|d(u,0)-d(v,0)| \tag{7.2}\] is a metric since \[d(u,0)\leq d(0,v)+d(v,u)\text{ and }d(v,0)\leq d(0,u)+d(u,v)\] by triangle inequality and so \[d(u,v)\leq h(u,v)\leq d(u,v)+\frac{1}{2}d(u,v).\] This implies that \(h(u,v)\) satisfies (1.2) and by definition \(h\) also satisfies \((b1)\). Moreover using the triangle inequality we have \[h(b+u,b+v) = d(b+u,b+v)+\frac{1}{2}|d(u+b,0)-d(v+b,0)|\] \[= d(u,v)+\frac{1}{2}|d(u+b,0)-d(v+b,0)|\] \[\leq d(u,v)+\frac{1}{2}d(u+b,v+b)\] \[= d(u,v)+\frac{1}{2}d(u,v)\] \[\leq \frac{3}{2}h(u,v)\] by definition of \(h\) in (7.2). Thus \((b2)\) is also satisfied with \(h_{0}=\frac{3}{2}\). We have the following result. **Theorem 10**.: _Suppose the distribution function \(f(.)\) is uniform, i.e., \(\epsilon_{1}=\epsilon_{2}=1\) in (1.1) and the edge weight function \(h(u,v)\) satisfies (1.2) and properties \((b1)-(b2)\) above. We then have_ \[0<\liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq\limsup_{n} \frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq h_{0}^{\alpha}\cdot \liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}<\infty. \tag{7.3}\] Thus the scaled weight of the minimum spanning tree remains bounded within a factor of \(h_{0}^{\alpha}\). We begin with the following Lemma. **Lemma 11**.: _There is a constant \(D>0\) such that for any positive integers \(n_{1},n_{2}\geq 1\) we have_ \[\mathbb{E}MST_{n_{1}+n_{2}}\leq\mathbb{E}MST_{n_{1}}+n_{2}\left(\frac{D}{n_{1} }\right)^{\frac{\alpha}{2}}. 
\tag{7.4}\]

_Moreover, for any fixed integer \(m\geq 1\) we have_

\[\limsup_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq\limsup_{k}\frac{\mathbb{E}MST_{km}}{(km)^{1-\frac{\alpha}{2}}}. \tag{7.5}\]

_Proof of (7.4) in Lemma 11_: Let \(\mathcal{T}_{1}\) be the MST formed by the \(n_{1}\) nodes \(\{X_{i}\}_{1\leq i\leq n_{1}}.\) We join each \(X_{i},n_{1}+1\leq i\leq n_{1}+n_{2},\) to the node in \(\{X_{j}\}_{1\leq j\leq n_{1}}\) closest to \(X_{i}\) in Euclidean distance by an edge \(e_{i},\) whose Euclidean length is \(d(X_{i},\{X_{j}\}_{1\leq j\leq n_{1}}),\) where we recall that \(d(X_{i},\{X_{j}\}_{1\leq j\leq n_{1}})\) is the minimum distance between \(X_{i}\) and the nodes in \(\{X_{j}\}_{1\leq j\leq n_{1}}.\) From (1.2), we have that the weight of \(e_{i}\) is at most \(c_{2}^{\alpha}d^{\alpha}(X_{i},\{X_{j}\}_{1\leq j\leq n_{1}}).\) The tree \(\mathcal{T}_{1}\cup\{e_{i}\}_{n_{1}+1\leq i\leq n_{1}+n_{2}}\) contains all the \(n_{1}+n_{2}\) nodes and so we have \[\mathbb{E}MST_{n_{1}+n_{2}} \leq \mathbb{E}MST_{n_{1}}+c_{2}^{\alpha}\sum_{i=n_{1}+1}^{n_{1}+n_{2}}\mathbb{E}d^{\alpha}(X_{i},\{X_{j}\}_{1\leq j\leq n_{1}})\] \[= \mathbb{E}MST_{n_{1}}+n_{2}\cdot c_{2}^{\alpha}\cdot\mathbb{E}d^{\alpha}(X_{n_{1}+1},\{X_{j}\}_{1\leq j\leq n_{1}}).\] From the estimate (4.6), we have that \(\mathbb{E}d^{\alpha}(X_{n_{1}+1},\{X_{j}\}_{1\leq j\leq n_{1}})\leq\left(\frac{D_{1}}{n_{1}}\right)^{\frac{\alpha}{2}}\) for some constant \(D_{1}>0\) not depending on \(n_{1}\) or \(n_{2}\) and this proves (7.4).

_Proof of (7.5) in Lemma 11_: Fix an integer \(m\geq 1\) and write \(n=qm+s\) where \(q=q(n)\geq 1\) and \(0\leq s=s(n)\leq m-1\) are integers. As \(n\to\infty,\) \[q(n)\longrightarrow\infty\text{ and }\frac{n}{q(n)}\longrightarrow m. \tag{7.6}\] Using (7.4) with \(n_{1}=qm\) and \(n_{2}=s,\) we get that \[\mathbb{E}MST_{n}=\mathbb{E}MST_{qm+s}\leq\mathbb{E}MST_{qm}+s\left(\frac{D}{qm}\right)^{\frac{\alpha}{2}}\leq\mathbb{E}MST_{qm}+m\left(\frac{D}{qm}\right)^{\frac{\alpha}{2}},\] since \(s<m\) and so \[\limsup_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq\limsup_{n}\left(\frac{qm}{n}\right)^{1-\frac{\alpha}{2}}\frac{\mathbb{E}MST_{qm}}{(qm)^{1-\frac{\alpha}{2}}}+\limsup_{n}\frac{m}{n^{1-\frac{\alpha}{2}}}\left(\frac{D}{qm}\right)^{\frac{\alpha}{2}}. \tag{7.7}\] Since \(\frac{n}{q(n)}\longrightarrow m,q(n)\longrightarrow\infty\) as \(n\to\infty\) (see (7.6)) and \(m\) is fixed, we have that \[\frac{m}{n^{1-\frac{\alpha}{2}}}\left(\frac{D}{qm}\right)^{\frac{\alpha}{2}}=\frac{m^{1-\frac{\alpha}{2}}D^{\frac{\alpha}{2}}}{n}\cdot\left(\frac{n}{q}\right)^{\frac{\alpha}{2}}\longrightarrow 0\] as \(n\to\infty.\) Thus the second term on the right side of (7.7) is zero and the first term on the right side of (7.7) equals \(\limsup_{n}\frac{\mathbb{E}MST_{qm}}{(qm)^{1-\frac{\alpha}{2}}}.\) But since \(q(n)\geq\frac{n-m}{m}\geq\frac{l-m}{m}\) for \(n\geq l,\) we have that \[\sup_{n\geq l}\frac{\mathbb{E}MST_{qm}}{(qm)^{1-\frac{\alpha}{2}}}=\sup_{n\geq l}\frac{\mathbb{E}MST_{q(n)m}}{(q(n)m)^{1-\frac{\alpha}{2}}}\leq\sup_{k\geq\frac{l-m}{m}}\frac{\mathbb{E}MST_{km}}{(km)^{1-\frac{\alpha}{2}}} \tag{7.8}\] and as \(l\uparrow\infty,\) the final term in (7.8) converges to the second term in (7.5).
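As a quick numerical illustration, the following sketch (taking the edge weight to be the Euclidean distance itself, i.e. \(h=d\) and \(c_{1}=c_{2}=1\) in (1.2), and using standard numpy/scipy routines) checks, on one random configuration, the spanning-tree upper bound behind (7.4): attaching each of the last \(n_{2}\) uniform points to its nearest neighbour among the first \(n_{1}\) points yields a spanning tree whose weight dominates \(MST_{n_{1}+n_{2}}\).

```python
# Sketch only: verifies MST_{n1+n2} <= MST_{n1} + sum_i d(X_i, {X_j}_{j<=n1})^alpha
# for Euclidean edge weights (h = d, so c1 = c2 = 1 in (1.2)).
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_weight(points, alpha):
    """Total weight of the Euclidean MST with edge weights d(u,v)**alpha."""
    return minimum_spanning_tree(distance_matrix(points, points) ** alpha).sum()

rng = np.random.default_rng(0)
alpha, n1, n2 = 1.0, 500, 50
pts = rng.random((n1 + n2, 2))                # uniform nodes in the unit square

lhs = mst_weight(pts, alpha)                  # MST_{n1+n2}
d_extra = distance_matrix(pts[n1:], pts[:n1]).min(axis=1)   # nearest-neighbour distances
rhs = mst_weight(pts[:n1], alpha) + (d_extra ** alpha).sum()
print(lhs <= rhs + 1e-9)                      # True: the attachment is a spanning tree
```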
If \(\lambda:=\liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\) then from (3.4), we have that \(\lambda>0.\) We show below that for every \(\epsilon>0,\) there exists \(m\) sufficiently large such that \[\limsup_{k}\frac{\mathbb{E}MST_{km}}{(km)^{1-\frac{\alpha}{2}}}\leq h_{0}^{ \alpha}\cdot\lambda+\epsilon, \tag{7.9}\] where \(h_{0}\) is as in (7.1). Since \(\epsilon>0\) is arbitrary, Theorem 10 then follows from (7.9) and (7.5). To prove (7.9), let \(k\) and \(m\) be positive integers and distribute \(km\) nodes \(\{X_{i}\}_{1\leq i\leq km}\) independently and uniformly in the unit square \(S.\) Also divide \(S\) into \(k\) disjoint squares \(\{W_{j}\}_{1\leq j\leq\frac{k}{A_{k}^{2}}}\) each of size \(\frac{A_{k}}{\sqrt{k}}\times\frac{A_{k}}{\sqrt{k}}\) where \(A_{k}\in\left[1,1+\frac{1}{\log k}\right]\) is such that \(\frac{\sqrt{k}}{A_{k}}\) is an integer for all large \(k\geq K_{0},\) not depending on \(m.\) This is possible by an analogous argument as in (3.11). Further, we label the squares \(\{W_{j}\}\) as in Figure 4 so that the top left most square is labelled \(W_{1},\) the square below \(W_{1}\) is \(W_{2}\) and so on. For \(1\leq j\leq\frac{k}{A_{k}^{2}},\) let \(N(j)\) be the number of nodes of \(\{X_{i}\}_{1\leq i\leq km}\) present in the square \(W_{j}\) and let \(\mathcal{T}(N(j))\) be the MST containing all the \(N(j)\) nodes with corresponding length \(MST_{j}(N(j)).\) Also, for \(1\leq j\leq\frac{k}{A_{k}^{2}}-1,\) we let \(j\ +\ T_{j}^{next}:=\min\{i\geq j+1:W_{i}\) is not empty\(\}\) be the next nonempty square after \(W_{j}\) and let \(e_{j}\) be the edge with one endvertex in \(W_{j}\) and the other endvertex in \(W_{j+T_{j}^{next}},\) having the smallest Euclidean length. We denote the Euclidean length of \(e_{j}\) to be \(d(e_{j})\) and set \(MST_{j}(N(j))=d(e_{j})=0\) if \(W_{j}\) is empty. The union \(\cup_{1\leq j\leq\frac{k}{A_{k}^{2}}}\mathcal{T}(j)\cup\cup_{1\leq j\leq\frac {k}{A_{k}^{2}}-1}\{e_{j}\}\) is a spanning tree containing all the \(km\) nodes and so \[MST_{km}\leq\sum_{j=1}^{\frac{k}{A_{k}^{2}}}MST_{j}(N(j))+c_{2}^{\alpha}\sum_{ j=1}^{\frac{k}{A_{k}^{2}}-1}d^{\alpha}(e_{j}), \tag{7.10}\] using (1.2). In Appendix, we use the translation property \((b2)\) to get that \[MST_{j}(N(j))\leq h_{0}^{\alpha}\cdot MST(N(j)),\] where \(MST(N(j))\) is the MST weight of the configuration of nodes present in \(W_{j}\) with the centre of the square \(W_{j}\) shifted to the origin: i.e., if \(u_{1},\ldots,u_{w}\) are the nodes of \(\{X_{i}\}\) present in the square \(W_{j}\) whose centre is \(s_{j}\), then \(MST(N(j))\) is the MST weight of complete graph formed by the nodes \(u_{1}-s_{j},\ldots,u_{w}-s_{j}\). 
Taking expectations we get \[\mathbb{E}MST_{km} \leq h_{0}^{\alpha}\sum_{j=1}^{\frac{k}{A_{k}^{2}}}\mathbb{E}MST(N(j) )+c_{2}^{\alpha}\cdot\frac{k}{A_{k}^{2}}\max_{1\leq j\leq\frac{k}{A_{k}^{2}}-1 }\mathbb{E}d^{\alpha}(e_{j}) \tag{7.11}\] \[= h_{0}^{\alpha}\cdot\frac{k}{A_{k}^{2}}\mathbb{E}MST(N(1))+c_{2} ^{\alpha}\cdot\frac{k}{A_{k}^{2}}\max_{1\leq j\leq\frac{k}{A_{k}^{2}}-1} \mathbb{E}d^{\alpha}(e_{j})\] and for convenience we write \[\mathbb{E}MST(N(1))=I_{1}+I_{2},\] where \[I_{1}=\mathbb{E}MST(N(1))\mathbf{1}(F_{1}),I_{2}=\mathbb{E}MST(N(1))\mathbf{1 }(F_{1}^{c})\] and \[F_{1}:=\{mA_{k}^{2}-\sqrt{m}\log m\leq N_{1}\leq mA_{k}^{2}+\sqrt{m}\log m\}\] to get \[\mathbb{E}MST_{km}\leq h_{0}^{\alpha}\cdot\frac{k}{A_{k}^{2}}(I_{1}+I_{2})+c_{2 }^{\alpha}\cdot\frac{k}{A_{k}^{2}}\max_{1\leq j\leq\frac{k}{A_{k}^{2}}-1} \mathbb{E}d^{\alpha}(e_{j}) \tag{7.12}\] The following Lemma estimates each sum in (7.12) starting with the second term. **Lemma 12**.: _There are positive constants \(D_{1},D_{2}\) not depending on \(k\) or \(m\) such that_ \[\max_{1\leq j\leq\frac{k}{A_{k}^{2}}-1}\mathbb{E}d^{\alpha}(e_{j})\leq D_{1} \left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\left(\left(\frac{\log m}{\sqrt{m }}\right)^{\alpha}+e^{-2\sqrt{m}\log m}\right), \tag{7.13}\] \[I_{1}\leq\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\left(\mathbb{E}MST_{m}+ \frac{D_{1}\cdot m^{1-\frac{\alpha}{2}}}{\log k}+D_{1}\cdot m^{\frac{1-\alpha}{ 2}}(\log m)\right) \tag{7.14}\] _and_ \[I_{2}\leq D_{1}\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\left(e^{-m\cdot D_{ 2}}+m^{1-\frac{\alpha}{2}}\cdot\frac{1}{(\log m)^{2}}\right). \tag{7.15}\] Substituting the estimates (7.13), (7.14) and (7.15) of Lemma 12 into (7.12), we get that \[\mathbb{E}MST_{km}\leq k^{1-\frac{\alpha}{2}}A_{k}^{\alpha-2}\left(h_{0}^{ \alpha}\cdot\mathbb{E}MST_{m}+D_{1}\cdot R_{m}(k)\right)\] where \(R_{m}(k)\) equals \[\left(\frac{m^{1-\frac{\alpha}{2}}}{\log k}+m^{\frac{1-\alpha}{2}}(\log m) \right)+\left(\frac{m^{1-\frac{\alpha}{2}}}{(\log m)^{2}}+e^{-D_{2}\cdot m} \right)+\left(\left(\frac{\log m}{\sqrt{m}}\right)^{\alpha}+e^{-\sqrt{m}\log m }\right).\] Using \(1\leq A_{k}\leq 1+\frac{1}{\log k}\longrightarrow 1\) as \(k\rightarrow\infty\) and absorbing \(e^{-D_{2}\cdot m}\) into \(e^{-\sqrt{m}\log m}\), we then get that \[\limsup_{k}\frac{\mathbb{E}MST_{km}}{(km)^{1-\frac{\alpha}{2}}}\leq h_{0}^{ \alpha}\cdot\frac{\mathbb{E}MST_{m}}{m^{1-\frac{\alpha}{2}}}+2D_{1}\cdot R_{m} ^{\prime} \tag{7.16}\] where \[R_{m}^{\prime}:=\frac{\log m}{\sqrt{m}}+\frac{1}{(\log m)^{2}}+\frac{(\log m) ^{\alpha}}{m}+m^{\frac{\alpha}{2}-1}e^{-\sqrt{m}\log m}\leq\epsilon\] for any \(\epsilon>0\) and all \(m\) large. Letting \(\{m_{j}\}\) be any sequence such that \(\frac{\mathbb{E}MST_{m_{j}}}{m_{j}^{1-\frac{\alpha}{2}}}\longrightarrow\lambda =\liminf_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\) and allowing \(m\rightarrow\infty\) through the sequence \(\{m_{j}\}\) we get from (7.5), (7.16) and the above discussion that \[\limsup_{n}\frac{\mathbb{E}MST_{n}}{n^{1-\frac{\alpha}{2}}}\leq h_{0}^{\alpha }\cdot\lambda+2D_{1}\epsilon.\] Since \(\epsilon>0\) is arbitrary, we get (7.9). 
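The boundedness asserted in (7.3) can also be illustrated by simulation. The sketch below again takes \(h=d\) (so \(c_{1}=c_{2}=h_{0}=1\)) and estimates \(\mathbb{E}MST_{n}/n^{1-\frac{\alpha}{2}}\) for a few values of \(n\); the estimated ratios stay within a bounded band as \(n\) grows.

```python
# Sketch only: Monte Carlo estimate of E[MST_n] / n^{1-alpha/2} for uniform nodes
# in the unit square, with Euclidean edge weights d(u,v)**alpha (h = d).
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_weight(points, alpha):
    return minimum_spanning_tree(distance_matrix(points, points) ** alpha).sum()

rng = np.random.default_rng(1)
alpha, trials = 1.0, 20
for n in (100, 400, 1600):
    est = np.mean([mst_weight(rng.random((n, 2)), alpha) for _ in range(trials)])
    print(n, est / n ** (1 - alpha / 2))      # stays in a bounded band as n grows
```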
_Proof of (7.13) in Lemma 12_: We let \(1\leq j\leq\frac{k}{A_{k}^{2}}-1\) and suppose that the right edge of \(W_{j}\) is the left edge of \(W_{j+1}.\) Let \(L\) be the largest integer less than or equal to \(\sqrt{m}\) and for \(1\leq i\leq L\) let \(W_{j}^{small}(i)\) and \(W_{j+1}^{small}(i)\) be disjoint \(\frac{4A_{k}\log m}{\sqrt{km}}\times\frac{A_{k}}{\sqrt{km}}\) rectangles contained in \(W_{j}\) and \(W_{j+1}\) respectively, and sharing a common edge as shown in Figure 6. Here the two \(a\times a\) squares represent \(W_{j}\) and \(W_{j+1}\) with \(a=\frac{A_{k}}{\sqrt{k}},b=\frac{A_{k}}{\sqrt{km}}\) and \(c=\frac{4A_{k}\log m}{\sqrt{km}}.\) The \(c\times b\) rectangle labelled \(i\) represents \(W_{j}^{small}(i)\) for \(1\leq i\leq L\) and represents \(W_{j+1}^{small}(i-L)\) for \(L+1\leq i\leq 2L.\) Let \(E_{nice}(j)\) be the event that there exists a pair of rectangles \(W_{j}^{small}(i)\) and \(W_{j+1}^{small}(i)\) both of which contain at least one of the nodes of \(\{X_{u}\}_{1\leq u\leq km}.\) If \(E_{nice}(j)\) occurs, then the minimum length of an edge with one endvertex in \(W_{j}\) and other endvertex in \(W_{j+1}\) is at most \(2c+2b\leq 4c=\frac{16A_{k}\log m}{\sqrt{km}}.\) If \(E_{nice}(j)\) does not occur, we use the upper bound \(d(e_{j})\leq\frac{2A_{k}T_{j}^{next}}{\sqrt{k}}\) and so \[d(e_{j})\leq\frac{16A_{k}\log m}{\sqrt{km}}\mathbf{1}(E_{nice}(j))+\frac{2A_{k }T_{j}^{next}}{\sqrt{k}}\mathbf{1}(E_{nice}^{c}(j)).\] Using \((a+b)^{\alpha}\leq 2^{\alpha}(a^{\alpha}+b^{\alpha})\) for all \(a,b,\alpha>0\) we therefore get that \[\mathbb{E}d^{\alpha}(e_{j})\leq\left(\frac{4A_{k}}{\sqrt{k}}\right)^{\alpha} \left(\left(\frac{8\log m}{\sqrt{m}}\right)^{\alpha}\mathbb{P}(E_{nice}(j))+ \mathbb{E}\left(T_{j}^{next}\right)^{\alpha}\mathbf{1}(E_{nice}^{c}(j))\right). \tag{7.17}\] To estimate the second term within the brackets in (7.17), we first use Cauchy-Schwarz inequality and write \[\mathbb{E}\left(T_{j}^{next}\right)^{\alpha}\mathbf{1}(E_{nice}^{c}(j))\leq \left(\mathbb{E}\left(T_{j}^{next}\right)^{2\alpha}\right)^{\frac{1}{2}}\left( \mathbb{P}(E_{nice}^{c}(j))\right)^{\frac{1}{2}}. \tag{7.18}\] Figure 6: Determining the length of the edge \(e_{j}\) with one endvertex in the left \(a\times a\) square \(W_{j}\) and other endvertex in right \(a\times a\) square \(W_{j+1}.\) Here \(a=\frac{A_{k}}{\sqrt{k}},b=\frac{A_{k}}{\sqrt{km}}\) and \(c=\frac{4A_{k}\log m}{\sqrt{km}}.\) For any \(l\geq 1\) we have \(T_{j}^{next}>l\) if and only if \(W_{j+1},\ldots,W_{j+l}\) are empty, which happens with probability \((1-l\cdot\frac{A_{k}^{2}}{k})^{km}\leq e^{-lA_{k}m}\leq e^{-lm}\) since \(A_{k}\geq 1\). Letting \(\alpha_{0}\) be the smallest integer greater than or equal to \(\alpha\) we use the moment estimate (7.6) with \(\theta=m\geq 1\) and \(r=2\alpha_{0}\) to get \[\mathbb{E}(T_{j}^{next})^{2\alpha}\leq\mathbb{E}(T_{j}^{next})^{2\alpha_{0}} \leq\frac{(2\alpha_{0})!}{(1-e^{-m})^{2\alpha_{0}}}\leq\frac{(2\alpha_{0})!}{( 1-e^{-1})^{2\alpha_{0}}}. \tag{7.19}\] Next, if the event \(E_{nice}^{c}(j)\) occurs, then in each pair \(\{W_{j}^{small}(i),W_{j+1}^{small}(i)\}\)\(1\leq i\leq L\) at least one rectangle is empty. 
If there are \(l\geq L\) such empty rectangles in total, the area formed by these rectangles is \(l\frac{4A_{k}^{2}\log m}{km}\) and since there are \(2L\) rectangles to choose from, this happens with probability at most \[\binom{2L}{l}\left(1-l\cdot\frac{4A_{k}^{2}\log m}{km}\right)^{km}\leq(2L)^{l}e^{-4lA_{k}^{2}\log m}\leq(2L)^{l}e^{-4l\log m}\] since \(A_{k}\geq 1.\) Using \(L\leq\sqrt{m},\) we further have \[(2L)^{l}e^{-4l\log m}\leq(2\sqrt{m})^{l}e^{-4l\log m}\leq e^{-3l\log m}\] since \(2\sqrt{m}e^{-4\log m}=\frac{2\sqrt{m}}{m^{4}}\leq\frac{1}{m^{3}}\) for all \(m\geq 4.\) Therefore \[\mathbb{P}(E_{nice}^{c}(j))\leq\sum_{l\geq L}e^{-3l\log m}\leq De^{-3L\log m}\leq De^{-2\sqrt{m}\log m},\] for some constant \(D>0\) not depending on \(j,k\) or \(m,\) since \(L\geq\frac{\sqrt{m}}{2}.\) Combining this and the estimate (7.19) for \(T_{j}^{next}\) from the previous paragraph, we have from (7.18) that \(\mathbb{E}\left(T_{j}^{next}\right)^{\alpha}\mathbf{1}(E_{nice}^{c}(j))\leq D_{3}e^{-2\sqrt{m}\log m}\) for some constant \(D_{3}>0\) not depending on \(j,k\) or \(m.\) Plugging this into (7.17), we get (7.13).

_Proof of (7.14) in Lemma 12_: We write \[I_{1}=\sum_{j=j_{low}}^{j_{up}}\mathbb{E}MST(N(1))\mathbf{1}(N(1)=j),\] where \(j_{low}:=mA_{k}^{2}-\sqrt{m}\log m\leq mA_{k}^{2}+\sqrt{m}\log m=:j_{up}.\) Given \(N(1)=j,\) the nodes in \(W_{1}\) are independently and uniformly distributed in \(W_{1}\) and we define \(\mathbb{E}MST\left(j;\frac{A_{k}}{\sqrt{k}}\right)\) to be the expected length of the shifted MST of the \(j\) nodes independently and uniformly distributed in the \(\frac{A_{k}}{\sqrt{k}}\times\frac{A_{k}}{\sqrt{k}}\) square \(W_{1}\). From the scaling relation (7.4) in Appendix we have that \[I_{1}=\sum_{j=j_{low}}^{j_{up}}\mathbb{E}MST\left(j;\frac{A_{k}}{\sqrt{k}}\right)\mathbb{P}(N(1)=j)=\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\sum_{j=j_{low}}^{j_{up}}\left(\mathbb{E}MST_{j}\right)\mathbb{P}(N(1)=j), \tag{7.20}\] by (7.4). From the one node difference estimate (4.8), we have for \(j_{low}\leq u\leq j_{up}\) that \(\mathbb{E}|MST_{u+1}-MST_{u}|\leq\left(\frac{D}{u}\right)^{\frac{\alpha}{2}}\) for some constant \(D>0\) not depending on \(u,k\) or \(m.\) Since \(u\geq j_{low}=mA_{k}^{2}-\sqrt{m}\log m\geq m-\sqrt{m}\log m\) using \(A_{k}\geq 1,\) we have for any \(j_{low}\leq j_{1},j_{2}\leq j_{up}\) that \(\mathbb{E}|MST_{j_{2}}-MST_{j_{1}}|\leq\sum_{u=j_{low}}^{j_{up}-1}\mathbb{E}|MST_{u+1}-MST_{u}|\) is bounded above by \[\sum_{u=j_{low}}^{j_{up}-1}\left(\frac{D}{u}\right)^{\frac{\alpha}{2}}\leq(j_{up}-j_{low})\left(\frac{D_{1}}{m}\right)^{\frac{\alpha}{2}}\leq D_{2}(\log m)m^{\frac{1-\alpha}{2}}, \tag{7.21}\] where \(D_{1},D_{2}>0\) are constants not depending on \(j_{1},j_{2},k\) or \(m.\) Setting \(j_{1}=mA_{k}^{2}\) and \(j_{2}=j\) and using (7.21) we get \(\mathbb{E}MST_{j}\leq\mathbb{E}MST_{mA_{k}^{2}}+D_{2}m^{\frac{1-\alpha}{2}}(\log m)\) for all \(j_{low}\leq j\leq j_{up}.\) From the expression for \(I_{1}\) in (7.20) we therefore have that \[I_{1}\leq\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\left(\mathbb{E}MST_{mA_{k}^{2}}+D_{2}m^{\frac{1-\alpha}{2}}(\log m)\right). \tag{7.22}\] Since \(A_{k}\leq 1+\frac{1}{\log k},\) we have that \(mA_{k}^{2}-m\leq\frac{3m}{\log k}\) for all \(k\geq 4\) and so using the estimate (7.4) with \(n_{1}=m,n_{2}=m(A_{k}^{2}-1)\leq\frac{3m}{\log k},\) we get that \(\mathbb{E}MST_{mA_{k}^{2}}\leq\mathbb{E}MST_{m}+\frac{D_{3}m^{1-\frac{\alpha}{2}}}{\log k}\) for some constant \(D_{3}>0\) not depending on \(m\) or \(k\) and this obtains (7.14).
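To make the last inequality in (7.21) explicit: since \(j_{up}-j_{low}=2\sqrt{m}\log m,\)
\[(j_{up}-j_{low})\left(\frac{D_{1}}{m}\right)^{\frac{\alpha}{2}}=2D_{1}^{\frac{\alpha}{2}}\sqrt{m}\,(\log m)\,m^{-\frac{\alpha}{2}}=2D_{1}^{\frac{\alpha}{2}}\,m^{\frac{1-\alpha}{2}}\log m,\]
so one may take \(D_{2}=2D_{1}^{\frac{\alpha}{2}}\) in (7.21).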
_Proof of (7.15) in Lemma 12_: There are \(N(1)\) nodes in the square \(W_{1}\) and given \(N(1)=l,\) the \(l\) nodes are uniformly distributed within \(W_{1}\) and so arguing as in the previous two paragraphs and using the scaling relation (7.4) in Appendix, we have \[\mathbb{E}\left(MST(N(1))|N(1)=l\right) = \mathbb{E}MST\left(l;\frac{A_{k}}{\sqrt{k}}\right) \tag{7.23}\] \[= \left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\mathbb{E}MST_{l}\] \[\leq D_{1}\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\cdot l^{1- \frac{\alpha}{2}}\] for some constant \(D_{1}>0\) using the expectation upper bound in (3.4). Thus \[\mathbb{E}MST(N(1))\mathbf{1}(N(1)=l) = \mathbb{E}(MST(N(1))|N(1)=l)\mathbb{P}(N(1)=l)\] \[\leq D_{1}\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\cdot l^{1- \frac{\alpha}{2}}\mathbb{P}(N(1)=l)\] and consequently \(I_{2}=\mathbb{E}MST(N(1))\mathbf{1}(F_{1}^{c})\) equals \[\sum_{l\leq mA_{k}^{2}-\sqrt{m}\log m}+\sum_{l\geq mA_{k}^{2}+ \sqrt{m}\log m}\mathbb{E}MST(N(1))\mathbf{1}(N(1)=l) \tag{7.24}\] \[\leq \sum_{l\leq mA_{k}^{2}-\sqrt{m}\log m}+\sum_{l\geq mA_{k}^{2}+ \sqrt{m}\log m}D_{1}\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\cdot l^{1- \frac{\alpha}{2}}\mathbb{P}(N(1)=l)\] \[= D_{1}\left(\frac{A_{k}}{\sqrt{k}}\right)^{\alpha}\mathbb{E}(N( 1))^{1-\frac{\alpha}{2}}\mathbf{1}(F_{1}^{c}).\] Each node \(X_{i},1\leq i\leq km\) has a probability \(\frac{A_{k}^{2}}{k}\) of being present in the \(\frac{A_{k}}{\sqrt{k}}\times\frac{A_{k}}{\sqrt{k}}\) square \(W_{1}.\) Therefore the number of nodes \(N(1)\) in the square \(W_{1}\) is binomially distributed with \[m\leq\mathbb{E}N(1)=mA_{k}^{2}\leq 4m\text{\ \ \ and \ \ }var(N(1))\leq km \cdot\frac{A_{k}^{2}}{k}=mA_{k}^{2}\leq 4m, \tag{7.25}\] since \(1\leq A_{k}\leq 1+\frac{1}{\log k}\leq 2.\) We recall from discussion prior to (7.12) that \(F_{1}^{c}=\{N(1)\leq mA_{k}^{2}-\sqrt{m}\log m\}\cup\{N(1)\geq mA_{k}^{2}+ \sqrt{m}\log m\}\) and so from Chebychev's inequality we get that \(\mathbb{P}(F_{1}^{c})\leq\frac{var(N(1))}{(\sqrt{m}\log m)^{2}}\leq\frac{4}{( \log m)^{2}}.\) We show below that \[\mathbb{E}(N(1))^{1-\frac{\alpha}{2}}\mathbf{1}(F_{1}^{c})\leq D_{2}e^{-D_{3} m}+m^{1-\frac{\alpha}{2}}\mathbb{P}(F_{1}^{c})\leq D_{2}e^{-D_{3}m}+m^{1-\frac{ \alpha}{2}}\frac{4}{(\log m)^{2}} \tag{7.26}\] for some constants \(D_{2},D_{3}\) not depending on \(k\) or \(m\) and substituting (7.26) into (7.24), we then get (7.15). To prove (7.26) we use \(1\leq A_{k}\leq 2\) to get that \[\frac{m}{2}\leq mA_{k}^{2}-\sqrt{m}\log m\leq mA_{k}^{2}+\sqrt{m}\log m\leq 4m +\sqrt{m}\log m\leq 5m\] and so \(\mathbf{1}(F_{1}^{c})\leq\mathbf{1}(H_{1})+\mathbf{1}(H_{2})+\mathbf{1}(H_{3} )+\mathbf{1}(H_{4}),\) where \[H_{1}:=\left\{N(1)\leq\frac{m}{2}\right\},H_{2}:=\left\{\frac{m}{2}\leq N(1) \leq mA_{k}^{2}-\sqrt{m}\log m\right\},\] \[H_{3}:=\left\{mA_{k}^{2}+\sqrt{m}\log m\leq N(1)\leq 5m\right\}\text{ and }H_{4}:= \{N(1)\geq 5m\}.\] Using the bounds \(m\leq\mathbb{E}N(1)\leq 4m\) from (7.25) and the standard deviation estimates (7.1) and (7.2) in Appendix we have that \(\max\left(\mathbb{P}(H_{1}),\mathbb{P}(H_{4})\right)\leq e^{-2Dm}\) for some constant \(D>0\) not depending on \(k\) or \(m\) and so \[\mathbb{E}N(1)^{1-\frac{\alpha}{2}}\mathbf{1}(H_{1})\leq\mathbb{E}N(1) \mathbf{1}(H_{1})\leq\frac{m}{2}\mathbb{P}(H_{1})\leq me^{-2Dm}. 
\tag{7.27}\] Using Cauchy-Schwarz inequality and the bound \(\mathbb{E}N^{2}(1)=var(N(1))+(\mathbb{E}N(1))^{2}\leq 4m+(4m)^{2}\leq 20m^{2}\) (see (7.25)) we also get \[\mathbb{E}N(1)^{1-\frac{\alpha}{2}}\mathbf{1}(H_{4})\leq\mathbb{E}N(1) \mathbf{1}(H_{4})\leq\left(\mathbb{E}N^{2}(1)\right)^{\frac{1}{2}}\left( \mathbb{P}(H_{4})\right)^{\frac{1}{2}}\leq m\sqrt{20}e^{-Dm}. \tag{7.28}\] Finally, for the range of \(N(1)\) in the events \(H_{2}\) and \(H_{3}\) we have \[\mathbb{E}N(1)^{1-\frac{\alpha}{2}}\mathbf{1}(H_{2}\cup H_{3})\leq D_{1}m^{1- \frac{\alpha}{2}}\mathbb{P}(H_{2}\cup H_{3}) \tag{7.29}\] for some constant \(D_{1}>0\) not depending on \(k\) or \(m.\) Adding (7.27), (7.28) and (7.29) and using the fact that \(H_{2}\cup H_{3}\subseteq F_{1}^{c},\) we get (7.26). ## Appendix : Miscellaneous results _Standard deviation estimates_ We use the following standard deviation estimates for sums of independent Poisson and Bernoulli random variables (see Alon and Spencer (2008)). **Lemma 13**.: _Suppose \(W_{i},1\leq i\leq m\) are independent Bernoulli random variables satisfying \(\mu_{1}\leq\mathbb{P}(W_{1}=1)=1-\mathbb{P}(W_{1}\ =\ 0)\leq\mu_{2}.\) For any \(0<\epsilon<\frac{1}{2},\)_ \[\mathbb{P}\left(\sum_{i=1}^{m}W_{i}>m\mu_{2}(1+\epsilon)\right)\leq\exp\left(- \frac{\epsilon^{2}}{4}m\mu_{2}\right) \tag{7.1}\] _and_ \[\mathbb{P}\left(\sum_{i=1}^{m}W_{i}<m\mu_{1}(1-\epsilon)\right)\leq\exp\left(- \frac{\epsilon^{2}}{4}m\mu_{1}\right) \tag{7.2}\] _Estimates (7.1) and (7.2) also hold if \(\{W_{i}\}\) are independent Poisson random variables with \(\mu_{1}\leq\mathbb{E}W_{1}\leq\mu_{2}.\)_ ### Proof of the monotonicity property (3.23) For \(\alpha\leq 1\) we couple the original Poisson process \(\mathcal{P}\) and the homogenous process \(\mathcal{P}_{\delta}\) in the following way. Let \(V_{i},i\geq 1\) be i.i.d. random variables each with density \(f(.)\) and let \(N_{V}\) be a Poisson random variable with mean \(n,\) independent of \(\{V_{i}\}.\) The nodes \(\{V_{i}\}_{1\leq i\leq N_{V}}\) form a Poisson process with intensity \(nf(.)\) which we denote as \(\mathcal{P}\) and colour green. Let \(U_{i},i\geq 1\) be i.i.d. random variables each with density \(\epsilon_{2}-f(.)\) where \(\epsilon_{2}\geq 1\) is as in (1.1) and let \(N_{U}\) be a Poisson random variable with mean \(n(\epsilon_{2}-1).\) The random variables \((\{U_{i}\},N_{U})\) are independent of \((\{V_{i}\},N_{V})\) and the nodes \(\{U_{i}\}_{1\leq i\leq N_{U}}\) form a Poisson process with intensity \(n(\epsilon_{2}-f(.))\) which we denote as \(\mathcal{P}_{ext}\) and colour red. 
The nodes of \(\mathcal{P}\) and \(\mathcal{P}_{ext}\) together form a homogenous Poisson process with intensity \(n\epsilon_{2},\) which we denote as \(\mathcal{P}_{\delta}\) and define it on the probability space \((\Omega_{\delta},\mathcal{F}_{\delta},\mathbb{P}_{\delta}).\) Let \(\omega_{\delta}\in\Omega_{\delta}\) be any configuration and as above let \(\{i_{j}^{(\delta)}\}_{1\leq j\leq Q_{\delta}}\) be the indices of the squares in \(\{R_{j}\}\) containing at least one node of \(\mathcal{P}_{\delta}\) and let \(\{i_{j}\}_{1\leq j\leq Q}\) be the indices of the squares in \(\{R_{j}\}\) containing at least one node of \(\mathcal{P}.\) The indices in \(\{i_{j}^{(\delta)}\}\) and \(\{i_{j}\}\) depend on \(\omega_{\delta}.\) Defining \(S_{\alpha}=S_{\alpha}(\omega_{\delta})\) and \(S_{\alpha}^{(\delta)}=S_{\alpha}^{(\delta)}(\omega_{\delta})\) as before, we have that \(S_{\alpha}\) is determined only by the green nodes of \(\omega_{\delta}\) while \(S_{\alpha}^{(\delta)}\) is determined by both green and red nodes of \(\omega_{\delta}.\) From the monotonicity property, we therefore have that \(S_{\alpha}(\omega_{\delta})\leq S_{\alpha}^{(\delta)}(\omega_{\delta})\) and so for any \(x>0\) we have \[\mathbb{P}_{\delta}(S_{\alpha}^{(\delta)}<x)\leq\mathbb{P}_{\delta}(S_{\alpha }<x)=\mathbb{P}_{0}(S_{\alpha}<x), \tag{7.3}\] proving (3.23). If \(\alpha>1\), we perform a slightly different analysis. Letting \(\epsilon_{1}\leq 1\) be as in (1.1), we construct a Poisson process \({\cal P}_{ext}\) with intensity \(n(f(.)-\epsilon_{1})\) and colour nodes of \({\cal P}_{ext}\) red. Letting \({\cal P}_{\delta}\) be another independent Poisson process with intensity \(n\epsilon_{1}\), we colour nodes of \({\cal P}_{\delta}\) green. The superposition of \({\cal P}_{ext}\) and \({\cal P}_{\delta}\) is a Poisson process with intensity \(nf(.)\), which we define on the probability space \((\Omega_{\delta},{\cal F}_{\delta},\mathbb{P}_{\delta}).\) In this case, the sum \(S_{\alpha}\) is determined by both green and red nodes while \(S_{\alpha}^{(\delta)}\) is determined only by the green nodes. Again using the monotonicity property of \(S_{\alpha}\), we get (7.3). ### Scaling and translation property of MSTs For a set of nodes \(\{x_{1},\ldots,x_{n}\}\) in the unit square \(S,\) recall from Section 1 that \(K_{n}(x_{1},\ldots,x_{n})\) is the complete graph formed by joining all the nodes by straight line segments and the edge \((x_{i},x_{j})\) is assigned a weight of \(h^{\alpha}(x_{i},x_{j}),\) where \(h(x_{i},x_{j})\) is the weight of the edge \((x_{i},x_{j})\) satisfying (1.2) and properties \((b1)-(b2).\) We denote \(MST(x_{1},\ldots,x_{n})\) to be the length of the minimum spanning tree of \(K_{n}(x_{1},\ldots,x_{n})\) with edge weights obtained as in (1.3). _Scaling_: For any \(a>0,\) consider the graph \(K_{n}(ax_{1},\ldots,ax_{n}).\) Using the scaling property \((b1),\) the weight \(h(ax_{1},ax_{2})\) of the edge between the vertices \(ax_{1}\) and \(ax_{2}\) is simply \(a\cdot h(x_{1},x_{2}),\) where \(h(x_{1},x_{2})\) is the weight of the edge between \(x_{1}\) and \(x_{2}.\) Using the definition of MST in (1.3) we therefore have \(MST(ax_{1},\ldots,ax_{n})=a^{\alpha}MST(x_{1},\ldots,x_{n})\) and so if \(Y_{1},\ldots,Y_{n}\) are \(n\) nodes uniformly distributed in the square \(aS\) of side length \(a,\) then \[MST(n;a):=MST(Y_{1},\ldots,Y_{n})=a^{\alpha}MST(X_{1},\ldots,X_{n}),\] where \(X_{i}=\frac{Y_{i}}{a},1\leq i\leq n\) are i.i.d. 
uniformly distributed in \(S.\) Recalling the notation \(MST_{n}=MST(X_{1},\ldots,X_{n})\) from (1.3) we therefore get \[\mathbb{E}MST(n;a)=a^{\alpha}\mathbb{E}MST_{n}. \tag{7.4}\] _Translation_: For \(b\in\mathbb{R}^{2}\) consider the graph \(K_{n}(x_{1}+b,\ldots,x_{n}+b).\) Using the translation property \((b2),\) the weight \(h(x_{1}+b,x_{2}+b)\leq h_{0}\cdot h(x_{1},x_{2}),\) the weight of the edge between \(x_{1}\) and \(x_{2}.\) Using the definition of MST in (1.3) we therefore have \(MST(x_{1}+b,\ldots,x_{n}+b)\leq h_{0}^{\alpha}\cdot MST(x_{1},\ldots,x_{n}).\) ### Moments of random variables Let \(X\geq 1\) be any integer valued random variable such that \[\mathbb{P}(X\geq l)\leq e^{-\theta(l-1)} \tag{7.5}\] for all integers \(l\geq 1\) and some constant \(\theta>0\) not depending on \(l.\) For every integer \(r\geq 1,\) \[\mathbb{E}X^{r}\leq r\sum_{l\geq 1}l^{r-1}\mathbb{P}(X\geq l)\leq r\sum_{l\geq 1 }l^{r-1}e^{-\theta(l-1)}\leq\frac{r!}{(1-e^{-\theta})^{r}} \tag{7.6}\] _Proof of (7.6)_: For \(r\geq 1\) we have \[\mathbb{E}X^{r}=\sum_{l\geq 1}l^{r}\mathbb{P}(X=l)=\sum_{l\geq 1}l^{r}\mathbb{P} (X\geq l)-l^{r}\mathbb{P}(X\geq l+1) \tag{7.7}\] and substituting the \(l^{r}\) in the final term of (7.7) with \((l+1)^{r}-((l+1)^{r}-l^{r})\) we get \[\mathbb{E}X^{r} = \sum_{l\geq 1}\left(l^{r}\mathbb{P}(X\geq l)-(l+1)^{r}\mathbb{P}(X \geq l+1)\right) \tag{7.8}\] \[\qquad+\ \ \sum_{l\geq 1}((l+1)^{r}-l^{r})\mathbb{P}(X\geq l+1)\] \[= 1+\sum_{l\geq 1}((l+1)^{r}-l^{r})\mathbb{P}(X\geq l+1)\] \[= \sum_{l\geq 0}((l+1)^{r}-l^{r})\mathbb{P}(X\geq l+1)\] where the second equality is true since \(l^{r}\mathbb{P}(X\geq l)\leq l^{r}e^{-\theta(l-1)}\longrightarrow 0\) as \(l\ \to\ \infty.\) Using \((l+1)^{r}-l^{r}\leq r\cdot(l+1)^{r-1}\) in (7.8), we get the first relation in (7.6). We prove the second relation in (7.6) by induction as follows. Let \(\gamma=e^{-\theta}<1\) and \(J_{r}:=\sum_{l\geq 1}l^{r-1}\gamma^{l-1}\) so that \[J_{r+1}(1-\gamma)=\sum_{l\geq 1}l^{r}\gamma^{l-1}-\sum_{l\geq 1}l^{r}\gamma^{l} =\sum_{l\geq 1}\left(l^{r}-(l-1)^{r}\right)\gamma^{l-1}.\] Using \(l^{r}-(l-1)^{r}\leq r\cdot l^{r-1}\) for \(l\geq 1\) we therefore get that \[J_{r+1}(1-\gamma)\leq r\sum_{l\geq 1}l^{r-1}\gamma^{l-1}=rJ_{r}\] and so the second relation in (7.6) follows from induction. ### Acknowledgement I thank Professors Rahul Roy, C. R. Subramanian and Federico Camia for crucial comments that led to an improvement of the paper. I also thank Professors Rahul Roy, C. R. Subramanian, Federico Camia and IMSc for my fellowships.
2307.09760
On the Tractability of Defensive Alliance Problem
Given a graph $G = (V, E)$, a non-empty set $S \subseteq V$ is a defensive alliance if, for every vertex $v \in S$, the majority of its closed neighbours are in $S$, that is, $|N_G[v] \cap S| \geq |N_G[v] \setminus S|$. The decision version of the problem is known to be NP-Complete even when restricted to split and bipartite graphs. The problem is \textit{fixed-parameter tractable} for the parameters solution size, vertex cover number and neighbourhood diversity. For the parameters treewidth and feedback vertex set number, the problem is W[1]-hard. In this paper, we study the defensive alliance problem for graphs with bounded degree. We show that the problem is \textit{polynomial-time solvable} on graphs with maximum degree at most 5 and NP-Complete on graphs with maximum degree 6. This rules out the fixed-parameter tractability of the problem for the parameter maximum degree of the graph. We also consider the problem from the standpoint of parameterized complexity. We provide an FPT algorithm using the Integer Linear Programming approach for the parameter distance to clique. We also answer an open question posed in [9] by providing an FPT algorithm for the parameter twin cover.
Sangam Balchandar Reddy, Anjeneya Swami Kare
2023-07-19T05:43:30Z
http://arxiv.org/abs/2307.09760v1
# On the Tractability of Defensive Alliance Problem+ ###### Abstract Given a graph \(G=(V,E)\), a non-empty set \(S\subseteq V\) is a Defensive Alliance, if for every vertex \(v\in S\), the majority of its closed neighbours are in \(S\), that is, \(|N_{G}[v]\cap S|\geq|N_{G}[v]\setminus S|\). The decision version of the problem is known to be NP-Complete even when restricted to split and bipartite graphs. The problem is _fixed-parameter tractable_ for the parameters solution size, vertex cover number and neighbourhood diversity. For the parameters treewidth and feedback vertex set number, the problem is W[1]-hard. In this paper, we study the Defensive Alliance problem for graphs with bounded degree. We show that the problem is _polynomial-time solvable_ on graphs with maximum degree at most 5 and NP-Complete on graphs with maximum degree 6. This rules out the fixed-parameter tractability of the problem for the parameter maximum degree of the graph. We also consider the problem from the standpoint of parameterized complexity. We provide an FPT algorithm using the Integer Linear Programming approach for the parameter distance to clique. We also answer an open question posed in [9] by providing an FPT algorithm for the parameter twin cover. Keywords:Defensive Alliance Bounded Degree Graphs Twin cover Distance to clique NP-Complete FPT ## 1 Introduction We, humans, form alliances for the sake of mutual benefit. This can often be seen in politics, businesses, trades, etc. The main agenda behind the alliance is to achieve a common goal between the parties. Based on this idea, the alliances are classified as follows. An alliance is formed between the parties of a group to defend against an attack from other parties (defensive alliance) or to be able to attack other parties (offensive alliance). The concept of alliances in graphs was first introduced by Kristiansen, Hedetniemi and Hedetniemi [17]. The initial algorithmic results of the problem were given by Jamieson [13]. Alliances in graphs have been well studied [2] and various generalizations are also studied [7, 21]. A defensive alliance is a non-empty set \(S\subseteq V\), such that for every vertex \(v\in S:|N_{G}[v]\cap S|\geq|N_{G}[v]\setminus S|\). We say that a vertex \(v\) is protected if it has at least as many closed neighbours inside \(S\) as it has outside \(S\). The boundary of a set \(S\subseteq V\) is denoted by \(\partial S\) and represents the vertices in the neighbourhood of \(S\), excluding \(S\), i.e., \(\partial S\) = \(N[S]\setminus S\). An offensive alliance is a non-empty set \(S\subseteq V\), such that for every vertex \(v\in\partial S:|N_{G}[v]\cap S|\geq|N_{G}[v]\setminus S|\). A powerful alliance is both defensive and offensive simultaneously. An alliance is global if it is also a dominating set. In this paper, we confine our study to the Defensive alliance problem. We define the decision version of the problem as follows: Defensive alliance: **Input:** A simple, undirected graph \(G=(V,E)\), and a positive integer \(k\). **Question:** Is there a defensive alliance \(S\subseteq V\) such that \(|S|\leq k\)? The optimization version of the problem asks to compute the defensive alliance with minimum cardinality. Known results.There are polynomial time algorithms for finding minimum alliances* in trees [1; 13]. Kiyomi and Otachi [16] have provided an XP algorithm on graphs with bounded treewidth. There is a polynomial time algorithm for finding a minimum defensive alliance in series-parallel graphs [4]. Jamieson et al. 
[12] showed that the defensive alliance is NP-Complete even when restricted to split and bipartite graphs. Gaikwad and Maity [9] proved that the defensive alliance is NP-Complete on circle graphs. Fernau and Raible [6] proved that the alliance problems, including their global versions, are _fixed-parameter tractable_ when parameterized by the solution size. Enciso [4] proved that defensive and global defensive alliances are _fixed-parameter tractable_ parameterized by domino treewidth. Alliances are _fixed-parameter tractable_ when parameterized by vertex cover number of the input graph [16]. Recently, both the defensive and offensive alliances were also shown to be _fixed-parameter tractable_ parameterized by the neighborhood diversity of the input graph [8]. Defensive alliance is W[1]-hard parameterized by a wide range of parameters such as the feedback vertex set, treewidth, cliquewidth, treedepth and pathwidth [9]. Footnote *: By alliances, we mean defensive, offensive, and powerful alliance problems Our results.We investigate the complexity of the Defensive alliance problem on graphs with bounded degree. We also study the fixed-parameter tractability of the problem for the parameters distance to clique and twin cover. Our main findings are as follows: 1. We show that the Defensive alliance problem is _polynomial-time solvable_ on graphs with maximum degree at most 5. 2. We prove that the Defensive alliance problem is NP-Complete on graphs with maximum degree 6. We give a reduction from the well-known NP-Complete problem, Dominating set on cubic graphs. 3. We also show that the Defensive alliance problem is FPT parameterized by distance to clique. 4. We provide an FPT algorithm for the Defensive alliance problem parameterized by twin cover, which answers an open question posed in [9]. ## 2 Preliminaries Notation and terminology.We consider only simple, finite, connected and undirected graphs. Let \(G=(V,E)\) be a graph with \(V\) as the vertex set and \(E\) as the edge set such that \(n=|V|\) and \(m=|E|\). \(\Delta(G)\) represents the maximum degree of \(G\). We denote the open neighbourhood of a vertex \(v\) by \(N_{G}(v)\) and the closed neighbourhood by \(N_{G}[v]\). The set of vertices that belong to \(N_{G}(v)\) and \(N_{G}[v]\), respectively, are referred to as the neighbours and closed neighbours of a vertex \(v\). The open neighbourhood of a set \(S\) is denoted by \(N_{G}(S)\) and the closed neighbourhood by \(N_{G}[S]\). \(N_{G}(S)\) = \(\bigcup\limits_{v\in S}N_{G}(v)\) and \(N_{G}[S]\) = \(\bigcup\limits_{v\in S}N_{G}[v]\). The degree of a vertex \(v\) is represented by \(d(v)\) and \(d(v)=|N(v)|\). \(dist(a,b)\) represents the shortest-path distance between the vertices \(a\) and \(b\). \(SP(a,b)\) represents the set of vertices in the shortest path between \(a\), \(b\) including \(a\), \(b\). The girth of a graph \(G\), is the length of the shortest cycle in \(G\) and is denoted by \(g\). A graph is \(r\)-regular if each vertex in the graph has a degree exactly \(r\). A cubic graph is a 3-regular graph. Parameterized Complexity.A problem is considered to be _fixed-parameter tractable_ w.r.t. a parameter \(k\), if there exists an algorithm with running time \(\mathcal{O}(f(k)\cdot n^{\mathcal{O}(1)})\), where \(f\) is a computable function. We use \(\mathcal{O}^{*}(f(k))\) to denote the time complexity of the form \(\mathcal{O}(f(k)\cdot n^{\mathcal{O}(1)})\). Similarly, the term W[1]-hard is used to express the hardness of a problem w.r.t. a parameter. 
In parameterized complexity, the hierarchy of complexity classes is defined as follows: \(\text{FPT}\subseteq\text{W[1]}\subseteq\text{W[2]}\subseteq...\subseteq\text{ XP}\). In general, \(\text{FPT}\neq\text{W[1]}\) under the Exponential Time Hypothesis [11]. Class W[1] is the analog of NP in parameterized complexity. For more information on _graph theory_ and _parameterized complexity_, we refer the reader to [22] and [3], respectively. ## 3 Defensive Alliance on graphs with maximum degree at most 5 In this section, we show that the Defensive alliance problem is _polynomial-time solvable_ on graphs with maximum degree at most 5. For the rest of the section, we assume that \(\Delta(G)\leq 5\). Theorem 3.1: _The Defensive alliance problem on graphs with \(\Delta(G)\leq 5\) is polynomial-time solvable._ Lemma 1: A set containing only the vertices of a cycle in \(G\), forms a defensive alliance. Proof: Let \(S\) be a set containing only the vertices of a cycle. All the vertices in will have three closed neighbours in \(S\) and at most three neighbours outside \(S\). Therefore, all the vertices in \(S\) are protected, making it a defensive alliance. Consider a graph \(G=(V,E)\), where \(|V|=n\) and \(V=\{v_{1},v_{2},...,v_{n}\}\). We solve \(n\) independent subproblems: \(P_{1},P_{2},...,P_{n}\), where \(P_{i}\) denotes the problem of computing the smallest defensive alliance of \(G\) containing the vertex \(v_{i}\). **Observation 1:** Consider the graph shown in Figure 1. The union of the vertices of the path \(v_{4}\), \(v_{5}\) and the cycle \(v_{5},v_{6}\) and \(v_{7}\) forms a defensive alliance for the subproblem \(P_{4}\) which includes the vertex \(v_{4}\). However, the cycle \(v_{5},v_{6}\) and \(v_{7}\) forms a defensive alliance of smaller cardinality in subproblem \(P_{5}\). From observation 1, it is clear that in order to compute the optimal solution to the original problem, we can ignore a case where we consider the vertices of a cycle and the path joining the cycle to obtain the defensive alliance for subproblems. For the rest of this section, we compute the solutions to the subproblems by avoiding such a scenario. **Observation 2:** If \(d(v_{i})\leq 1\), then \(v_{i}\) itself forms a defensive alliance. So, we consider only the subproblems in which \(d(v_{i})\geq 2\). Lemma 2: Let \(d(v_{i})=3\) [or 2], the size of the optimal solution for \(P_{i}\), will be minimum among the following: 1. \(\min_{x\in W}dist(v_{i},x)+1\), where \(W=\{u\in V(G)\setminus v_{i}|d(u)\leq 3\}\). 2. length of the shortest cycle containing \(v_{i}\). Proof: Let \(u_{1},u_{2}\) and \(u_{3}\) [or \(u_{1}\) and \(u_{2}\)] are the neighbours of \(v_{i}\). Case 1: Let us assume that exactly one neighbour of \(v_{i}\) (say \(u_{1}\)) is picked into the solution. As neither \(u_{2}\) nor \(u_{3}\) [or \(u_{2}\)] can be a part of the solution, it is clear that the defensive alliance cannot be a set of vertices from a cycle containing \(v_{i}\). We cannot add any more vertices that would form a cycle outside \(v_{i}\), as explained earlier in Observation 1. Then the only way we could form a defensive alliance containing \(v_{i}\) and \(u_{1}\) is to find the closest vertex (say \(x\)) from \(v_{i}\), with \(d(x)\leq 3\) and add all the vertices in the shortest path joining \(u_{1}\) and \(x\) to the set. 
Except \(x\) and \(v_{i}\) all the other vertices of the path has at least three closed neighbours Figure 1: Illustration of _Observation 1_, with an instance of the subproblem \(P_{4}\) on the left and an instance of the subproblem \(P_{5}\) on the right. in the set and both \(x_{i}\) and \(v_{i}\) with the degree of at most three has at least two closed neighbours in the set. Hence, the vertices of the path joining \(x\) and \(v_{i}\) forms a defensive alliance. If no such \(x\) exists, then \(P_{i}\) cannot lead to an optimal solution. Case 2: Let us assume that exactly two neighbours of \(v_{i}\) (say \(u_{1}\) and \(u_{2}\)) are picked into the solution. The only way to form a defensive alliance set containing the vertices \(v_{i},u_{1}\) and \(u_{2}\) is to find the shortest cycle containing \(v_{i},u_{1}\) and \(u_{2}\) and add all the vertices in the cycle to the set. If no such cycle exists, then \(P_{i}\) cannot lead to an optimal solution. Case 3: Let us assume that all three neighbours of \(v_{i}\) are picked into the solution. If the defensive alliance forms a cycle containing the three neighbours of \(v_{i}\) then we can infer that there is a smaller cycle that contains just the two neighbours of \(v_{i}\). Hence, this case would lead to a larger set than either case 1 or case 2. We will have three [or two] combinations in case 1 and three [or one] combinations in case 2. The minimum among them will return the optimal solution for the subproblem \(P_{i}\). Lemma 3: Let \(d(v_{i})=5\) [or 4], the size of the optimal solution for \(P_{i}\) will be the minimum among the following: 1. \(\min_{x\neq y\in W}(dist(v_{i},x)+dist(v_{i},y)+1)\), \(W=\{u\in V(G)\backslash v_{i}|d(u)\leq 3\}\), \(\{SP(v_{i},x)\cap SP(v_{i},y)\}\setminus v_{i}=\emptyset\). 2. length of the shortest cycle containing \(v_{i}\). Proof: Let \(u_{1},u_{2},u_{3},u_{4}\) and \(u_{5}\) [or \(u_{1},u_{2},u_{3}\) and \(u_{4}\)] are the neighbours of \(v_{i}\). Case 1: Let us assume that exactly one neighbour of \(v_{i}\) (say \(u_{1}\)) is picked into the solution. Then, it is easy to see that \(v_{i}\) is not protected. Therefore, no defensive alliance is possible by picking only one neighbour of \(v_{i}\) as part of the solution. Case 2: Let us assume that exactly two neighbours of \(v_{i}\) (say \(u_{1}\) and \(u_{2}\)) are picked into the solution. The defensive alliance containing the vertices \(v_{i},u_{1}\) and \(u_{2}\) can be formed either with the vertices of the shortest cycle containing \(v_{i},u_{1}\) and \(u_{2}\) or with the vertices along the two vertex disjoint shortest paths from \(u_{1}\) and \(u_{2}\) to vertices with degrees at most three. The minimum among the two sets forms a defensive alliance. Case 3: Let us assume that more than two neighbours of \(v_{i}\) are picked into the solution, then as explained in case 3 of Lemma 2, this forms a defensive alliance with a larger cardinality than case 2. We will have a total of twenty [or twelve] combinations in case 2. The minimum among them will give the optimal solution for the subproblem \(P_{i}\). Given a vertex \(v\), 1. Computing the closest vertex to \(v\) with degree at most three can be done in \(O(m+n)\) time. 2. Using BFS, we can compute the shortest cycle containing \(v\) in \(O(m+n)\) time. 3. Finding two vertex disjoint shortest paths to vertices with degree at most three from \(v\), is also solvable in \(O(m+n)\) time. Therefore, we can compute the optimal solution (if it exists) of the subproblem \(P_{i}\) in linear time. 
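For concreteness, the alliance condition and the first of these primitives can be sketched as follows (an illustrative sketch rather than the authors' implementation; the graph is assumed to be given as an adjacency-list dictionary mapping each vertex to the set of its neighbours, and the function names are ours).

```python
# Sketch: graph given as {vertex: set(neighbours)}; function names are illustrative.
from collections import deque

def is_defensive_alliance(adj, S):
    """Check |N[v] ∩ S| >= |N[v] \\ S| for every v in the non-empty set S."""
    return bool(S) and all(1 + len(adj[v] & S) >= len(adj[v] - S) for v in S)

def nearest_low_degree(adj, start, max_deg=3):
    """BFS distance from `start` to the closest other vertex of degree <= max_deg
    (None if no such vertex exists); the alliance of Lemma 2, case 1, uses the
    corresponding shortest path, which has this many vertices plus one."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                if len(adj[w]) <= max_deg:
                    return dist[w]       # BFS discovers vertices in order of distance
                queue.append(w)
    return None

# A triangle {0, 1, 2} sitting inside a larger graph is a defensive alliance (cf. Lemma 1).
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1}, 3: {0}, 4: {1}}
print(is_defensive_alliance(adj, {0, 1, 2}))   # True
print(nearest_low_degree(adj, 0))              # 1: a vertex of degree <= 3 at distance 1
```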
Hence, the Defensive alliance problem on graphs with \(\Delta(G)\leq 5\) is polynomial-time solvable. This concludes the proof of Theorem 3.1. ## 4 Defensive Alliance on graphs with maximum degree 6 In this section, we prove that the Defensive alliance problem is NP-Complete on graphs with maximum degree 6. Theorem 4.1: Defensive alliance _on graphs with \(\Delta(G)=6\) is NP-Complete._ It is easy to see that the problem is in NP. To prove the NP-Hardness, we reduce from the following problem: Dominating set on cubic graphs: Given a cubic graph \(G=(V,E)\), a set \(S\subseteq V\) is a dominating set if every vertex \(v\in V\setminus S\) has a neighbour in \(S\). **Input:** A simple, undirected cubic graph \(G=(V,E)\), and a positive integer \(k\). **Question:** Is there a dominating set \(S\subseteq V\) such that \(|S|\leq k\)? In 1980, Kikuno et al. [15] proved that the Dominating set problem on cubic graphs is NP-Complete. Given an instance \(I=(G,k)\) of the Dominating set problem with \(G\) being a cubic graph, we construct an instance of \(I^{\prime}=(G^{\prime},k^{\prime})\) of the Defensive alliance problem. We need a special type of vertices in \(G^{\prime}\) that cannot be a part of any defensive alliance of size at most \(k^{\prime}\), we call them the forbidden vertices. The forbidden vertices are indicated using square-shaped nodes. We make use of the following gadget to generate the forbidden vertices. **Gadget to generate the forbidden vertices:** Consider a 6-regular graph. Note that all the vertices have a closed neighbourhood of seven. To protect a vertex of the defensive alliance from a 6-regular graph, it should have at least four closed neighbours in the set. Therefore, the optimal defensive alliance can be obtained by finding the _minimum induced subgraph of minimum degree \(\geq\) 3_(from here on referred to as MSMD(3)). As we do not want any vertex from the gadget to be a part of the defensive alliance, we construct a 6-regular graph with no MSMD(3) of size at most \(k^{\prime}\). We make use of Ramanujan graphs to construct the gadget. Definition 1: A _Ramanujan graph_ is a \(r\)-regular graph whose non-trivial eigenvalues lie in the interval \([-2\sqrt{r-1},2\sqrt{r-1}]\). For more information on Ramanujan graphs, we refer the reader to [19; 20]. From [19], we have that \(r\)-regular Ramanujan graphs have girth, \(g\geq\frac{4}{3}\log_{r-1}|V|\). Given the minimum degree \(r\) and girth \(g\), we have the following lower bound on the graph size \(|V(r,g)|\). \[|V(r,g)|\geq\frac{r(r-1)^{\frac{g-1}{2}}-2}{r-2}\text{ for odd }g\] \[|V(r,g)|\geq\frac{2(r-1)^{\frac{g}{2}}-2}{r-2}\text{ for even }g\] Lemma 4: The size of the 6-regular Ramanujan graph that has an MSMD(3) of size \(k^{\prime}+1\) is polynomial in \(k^{\prime}\). Proof: Consider a 6-regular Ramanujan graph \(R\). \(R\) has girth at least \(\frac{4}{3}\log_{5}n\). Let \(M\) be an MSMD(3) of \(R\). As \(M\) is an induced subgraph of \(R\), it also has a girth of at least \(\frac{4}{3}\log_{5}n\). From the lower bound on the size of the graph \(|V(r,g)|\), we have that the size of \(M\) with girth at least \(\frac{4}{3}\log_{5}n\) is \(\Omega(2^{\frac{4}{3}\log_{5}n}{2})\) which is \(\Omega(n^{\frac{2}{3}\log_{5}2})\). The bound can be represented as \(\Omega(n^{c})\) where \(c=0.2871\). 
\[\begin{array}{l}k^{\prime}+1\geq c_{1}\cdot 2^{\frac{4}{3}\log_{5}n} \implies k^{\prime}+1\geq c_{1}\cdot n^{\frac{2}{3}\log_{5}2}\implies k^{ \prime}+1\geq c_{1}\cdot n^{0.2871}\\ \implies n\leq(\frac{k^{\prime}+1}{c})^{3.484}.\end{array}\] The size of \(R\) with \(M\) of size \(k^{\prime}+1\) is \(\mathcal{O}((k^{\prime}+1)^{3.484})\). This proves that the order of the 6-regular Ramanujan graph with MSMD(3) of size at least \(k^{\prime}+1\) is a polynomial function of \(k^{\prime}\). Lemma 5: There is no defensive alliance of size at most \(k^{\prime}\) from the 6-regular Ramanujan graph of size polynomial in \(k^{\prime}\). Proof: Let \(R\) be the 6-regular Ramanujan graph. Each vertex of \(R\) has a degree of six with a closed neighbourhood of seven. To obtain the optimal defensive alliance from \(R\), we compute an induced subgraph with each vertex having a closed neighbourhood of four, hence the MSMD(3). We know that the size of MSMD(3) is at least \(k^{\prime}+1\). This concludes that there is no defensive alliance of size at most \(k^{\prime}\) from \(R\). **Note:** The construction of Ramanujan graphs by Lubotzky et al. can be done in polynomial time [19]. One can construct \(r+1\)-regular Ramanujan graph using this method when \(r\) is a prime and \(r\equiv 1\pmod{4}\). Therefore, the 6-regular Ramanujan graph can be constructed in polynomial time. Let us consider a _6-regular Ramanujan graph with one missing edge_ with the same girth as that of a 6-regular Ramanujan graph. We can obtain the graph by removing an edge which is not part of a shortest cycle. In a _6-regular Ramanujan graph with one missing edge_, the vertices of _MSMD(3) with at most one missing edge_ can also form a defensive alliance. So, we look to find an _MSMD(3) with at most one missing edge_. The size of _MSMD(3) with at most one missing edge_ of girth \(g\), is at least the size of MSMD(3) of girth \(g\). Therefore, Lemma 4 and Lemma 5 also hold for _6-regular Ramanujan graph with one missing edge_. Hence, we obtain the following lemma. **Lemma 6**.: There is no defensive alliance of size at most \(k^{\prime}\) from the _6-regular Ramanujan graph with one missing edge_ of size polynomial in \(k^{\prime}\). We use the _6-regular Ramanujan graph with one missing edge_ as the gadget. The vertices corresponding to the missing edge will be used in place of the forbidden vertices. From Lemma 6, we conclude that there is no defensive alliance of size at most \(k^{\prime}\) from the gadget. #### 3.1.2 Reduction from the Dominating set on cubic graphs: We construct an instance \(I^{\prime}=(G^{\prime},k^{\prime})\) of the Defensive alliance problem in the following way. See Figure 2 for an illustration. 1. For every vertex \(v_{i}\in V\) in \(G\), we introduce four copies in \(G^{\prime}\), which are represented by \(v_{i}^{0},v_{i}^{1},v_{i}^{2},v_{i}^{3}\). 2. For every vertex \(v_{i}^{j}\in G^{\prime},1\leq i\leq n,0\leq j\leq 3\), we introduce two vertices \(u_{i}^{j},w_{i}^{j}\). We make all the three vertices \(v_{i}^{j},u_{i}^{j}\) and \(w_{i}^{j}\) adjacent to each other. 3. We make each \(v_{i}^{0}\) adjacent to \(u_{i}^{1}\), each \(w_{i}^{1}\) adjacent to \(u_{i}^{2}\) and each \(w_{i}^{2}\) adjacent to \(u_{i}^{3}\). Additionally, we also make \(w_{i}^{0}\) adjacent to \(u_{i+1}^{0}\) for each \(i\in\{1,...,n-1\}\). 4. For every vertex \(v_{i}^{0}\), we introduce a vertex \(s_{i}\), and make it adjacent to \(v_{i}^{0}\). 
We make \(s_{i}\) also adjacent to \(v_{k}^{j}\) for all \(k\in N_{G}(v_{i})\), with the smallest possible value of \(j\in\{1,2,3\}\) such that \(v_{k}^{j}\) has no other \(s_{k}\) adjacent to it. For example, \(v_{1}\) is adjacent to \(v_{2},v_{3}\) and \(v_{6}\) in \(G\), so we make \(s_{1}\) adjacent to \(v_{2}^{1}\), \(v_{3}^{1}\) and \(v_{6}^{1}\) in \(G^{\prime}\) as all three of them do not have any other \(s_{k}\) adjacent to them at this point. Similarly, \(v_{2}\) is adjacent to \(v_{1},v_{3}\) and \(v_{4}\) in \(G\), we make \(s_{2}\) adjacent to \(v_{1}^{1},v_{3}^{2}\) and \(v_{4}^{1}\). Here \(v_{3}^{1}\) is adjacent to \(s_{1}\), hence we go for \(v_{3}^{2}\). We add these edges lexicographically, starting from \(s_{1}\). From hereon, we add the forbidden vertices that are constructed using the gadget. 1. We make every vertex of \(v_{i}^{j},1\leq i\leq n,0\leq j\leq 3\), adjacent to two forbidden vertices. 2. We make every vertex of \(\{u_{i}^{j}\cup w_{i}^{j}\},1\leq i\leq n,0\leq j\leq 3\), adjacent to three forbidden vertices. The vertices corresponding to the missing edge in a 6-regular Ramanujan graph can be used as the forbidden vertices. We use multiple gadgets to represent all the forbidden vertices in the reduction. **Lemma 7**.: If \(G\) has a dominating set of size at most \(k\) then \(G^{\prime}\) has a defensive alliance of size at most \(k^{\prime}\), where \(k^{\prime}=4n+8k\). Proof.: Let the set \(S\) be a dominating set of size at most \(k\) then we claim that the set \(S^{\prime}=X\cup Y\cup Z\) is a defensive alliance of \(G^{\prime}\) of size at most \(k^{\prime}\), where \(X=\bigcup\limits_{i=1}^{n}\ \{u_{i}^{0}\cup v_{i}^{0}\cup w_{i}^{0}\}\) Figure 2: A 6-vertex cubic graph \(G\) and its corresponding \(G^{\prime}\) \(Y=\bigcup\limits_{i|v_{i}\in S}\bigcup\limits_{j\in\{1,2,3\}}\ \{u_{i}^{j}\cup v_{i}^{j}\cup w_{i}^{j}\}\) \(Z=\bigcup\limits_{i|v_{i}\notin S}s_{i}\) For \(S^{\prime}\) to be a defensive alliance, every vertex from the sets \(X,Y\) and \(Z\) should be protected. **Set \(X\):** * Consider a vertex \(v\in\{v_{i}^{0},1\leq i\leq n\}\). \(v\) has seven vertices in its closed neighbourhood including the two forbidden vertices. \(v\) has three of its neighbours, \(u_{i}^{0},w_{i}^{0}\) and one among \(u_{i}^{1}\) or \(s_{i}\) in \(S^{\prime}\). Including itself, \(v\) has four closed neighbours in \(S^{\prime}\), which makes \(v\) protected. * Similarly, it can be seen that a vertex \(v\in\{u_{i}^{0}\cup w_{i}^{0},1\leq i\leq n\}\) has the majority of its neighbours in \(S^{\prime}\). Hence, \(v\) is protected. **Set \(Y\):** * Consider a vertex \(v\in\{u_{i}^{j}\cup w_{i}^{j},i|v_{i}\in S,1\leq j\leq 3\}\). \(v\) is adjacent to three forbidden vertices. As all the other neighbours of \(v\) are in \(S^{\prime}\), it is easy to verify that \(v\) is protected. * A vertex \(v\in\{v_{i}^{j},i|v_{i}\in S,1\leq j\leq 3\}\) has six vertices in its closed neighbourhood which includes two forbidden vertices. As the two neighbours of \(v\), \(u_{i}^{j}\) and \(w_{i}^{j}\) are in \(S^{\prime}\), \(v\) is protected. **Set \(Z\):** Consider a vertex \(v\in\{s_{i},i|v_{i}\notin S\}\). \(v\) has a total of five vertices in its closed neighbourhood. \(v\) has two of its neighbours, \(v_{i}^{0}\) and one of its other three neighbours in \(S^{\prime}\). Including itself, \(v\) has three closed neighbours in \(S^{\prime}\). Hence, \(v\) is protected. As all the vertices of the sets \(X,Y\) and \(Z\) are protected. 
As \(|X|=3n,|Y|=9k,|Z|=n-k\) and \(|X|+|Y|+|Z|=4n+8k=k^{\prime}\). Therefore, \(S^{\prime}\) is a defensive alliance of size at most \(k^{\prime}\). This concludes the proof of Lemma 7. Lemma 8: If \(G^{\prime}\) has a defensive alliance of size at most \(k^{\prime}\) then \(G\) has a dominating set of size at most \(k\). Proof: Let \(S^{\prime}\) be a defensive alliance in \(G^{\prime}\) of size at most \(k^{\prime}\). We define the sets \(X,Y_{i}\) and \(Z_{i}\) as follows. \(X=\bigcup\limits_{i=1}^{n}\ \{u_{i}^{0}\cup v_{i}^{0}\cup w_{i}^{0}\}\) \(Y_{i}=\bigcup\limits_{j\in\{1,2,3\}}\ \{u_{i}^{j}\cup v_{i}^{j}\cup w_{i}^{j}\}\), for \(1\leq i\leq n\) \(Z_{i}=s_{i}\), for \(1\leq i\leq n\) 1. If \(v\in X\) is a part of the defensive alliance \(S^{\prime}\), then \(X\subseteq S^{\prime}\). * Picking any vertex from \(\{u_{i}^{0},w_{i}^{0}\},1\leq i\leq n\) would lead to \(X\subseteq S^{\prime}\). * Let \(v\in v_{i}^{0},1\leq i\leq n\) be a vertex in \(S^{\prime}\). In the neighbourhood of \(v\), even after picking both the vertices from \(Y_{i},Z_{i}\) to be a part of \(S^{\prime}\), there is still a deficiency of one that needs to be filled by one among \(u_{i}^{0},w_{i}^{0}\). This creates a chain reaction that pushes all of \(X\) to \(S^{\prime}\). 2. If \(v\in\{Y_{i}\cup Z_{i}\}\) is a part of the defensive alliance \(S^{\prime}\), then \(X\subseteq S^{\prime}\). * Let \(v\in Y_{i}\) is in \(S^{\prime}\). For \(v\) to be protected, all the neighbours of \(v\) (excluding the forbidden vertices) must be in \(S^{\prime}\). This triggers a chain reaction that pushes all of \(Y_{i}\) and also \(v_{i}^{0}\) to \(S^{\prime}\). As \(v_{i}^{0}\) is in \(S^{\prime}\), \(X\subseteq S^{\prime}\). * Let \(v\in Z_{i}\) is a part of \(S^{\prime}\). For \(v\) to be protected, one of its neighbours from \(Y_{k}\), \(k\in N_{G}[v]\) must be in \(S\). This would again lead to all of \(Y_{k}\) and \(v_{k}^{0}\) being in \(S^{\prime}\) and hence \(X\subseteq S^{\prime}\). 3. It is clear that if \(S^{\prime}\) is non-empty then \(X\) must be a part of \(S^{\prime}\). Here, we have consumed \(3n\) vertices of \(k^{\prime}\) and are left with \(n+8k\) vertices that can still go into \(S^{\prime}\). 4. Note that by now, vertex \(v\in\{u_{i}^{0}\cup w_{i}^{0}\},1\leq i\leq n\) is covered. Each vertex \(v\in v_{i}^{0},1\leq i\leq n\) needs one more neighbour either from \(Y_{i}\) (or) \(Z_{i}\) to be included in \(S^{\prime}\) for it to be protected. * If we choose to include the vertex \(u_{i}^{1}\) from \(Y_{i}\), then this would trigger a chain reaction that pushes \(Y_{i}\) to \(S^{\prime}\). * If we choose to pick the neighbour \(s_{i}\) from \(Z_{i}\) then this would lead to pushing only one vertex to \(S^{\prime}\) before encountering a copy of \(v_{j}\), which also needs to be a part of \(S^{\prime}\). 5. We can't pick neighbours to all \(v_{i}^{0}\)'s from \(Y_{i}\) as this will force us to go beyond the remaining capacity of \(k^{\prime}\). Based on the remaining threshold of \(k^{\prime}\), which is \(n+8k\), it is clear that we pick neighbours from \(Y_{i}\) to be a part of \(S^{\prime}\) for \(k\) number of \(v_{i}^{0}\)'s and from \(Z_{i}\) for \(n-k\) number of \(v_{i}^{0}\)'s. 6. Consider a vertex \(v_{i}^{0}\) for which we pick the neighbour from \(s_{i}\), we finally encounter a copy of \(v_{k}\) which is an adjacent vertex of \(v_{i}\) in \(G\) and it needs to be a part of \(S^{\prime}\) for \(v_{i}^{0}\) to be protected. 
This also indicates that if \(v_{i}\notin S\) in \(G\), then one of its neighbours \(v_{k}\) must be in \(S\). Basically, if \(s_{i}\notin S^{\prime}\) then the corresponding \(v_{i}(G)\in S\) and \(S\) forms a dominating set. Therefore, it can be inferred that the vertices part of \(S\) must form a dominating set of size at most \(k\). This concludes the proof of Lemma 8. In Figure 2, let \(\{v_{2},v_{5}\}\) be a dominating set of \(G\) of size 2, then the corresponding defensive alliance in \(G^{\prime}\) is \(\bigcup\limits_{i=1}^{6}\ \{u_{i}^{0}\cup v_{i}^{0}\cup w_{i}^{0}\}\ \cup\bigcup\limits_{i\in\{2,5\}}\ \bigcup\limits_{j\in\{1,2,3\}}\ \{u_{i}^{j}\cup v_{i}^{j}\cup w_{i}^{j}\}\ \cup\bigcup \limits_{i\in\{1,3,4,6\}}s_{i}\) of size 40. We have a _6-regular Ramanujan graph with one missing edge_ of size polynomial in \(k^{\prime}\). By using the vertices of the missing edge in place of the forbidden vertices, the closed neighbourhood grows to seven. Even if the new adjacent vertex from \(I^{\prime}\) is in the defensive alliance, we still compute MSMD(3) with one missing edge from the gadget, whose cardinality will be at least \(k^{\prime}+1\). \(G^{\prime}\) has a maximum degree of six. Hence, the Defensive alliance is NP-Complete on \(\Delta(G)=6\) graphs. This concludes the Proof of Theorem 4.1. Theorem 4.1: _The Defensive alliance problem is para-NP-hard for the parameter maximum degree (\(\Delta\))._ Proof: From Theorem 4.1, we have that the Defensive alliance problem is NP-Complete on \(\Delta(G)=6\) graphs. As the problem is NP-complete for a constant value of the parameter, it implies that the Defensive alliance problem is para-NP-hard for the parameter maximum degree (\(\Delta\)). ## 5 Defensive alliance parameterized by distance to clique In this section, we show that the Defensive alliance problem is FPT parameterized by distance to clique. We reduce the given problem to the integer linear programming problem (ILP), which is known to be FPT when parameterized by the number of variables. Definition 2: For a graph \(G=(V,E)\), the parameter _distance to clique_ is the cardinality of the smallest set \(D\subseteq V\) such that \(V\setminus D\) is a clique. We can use a simple branching algorithm to compute set \(D\) of size at most \(k\) in \(\mathcal{O}^{*}(2^{k})\) time, if such a set exists. #### 5.0.1 ILP formulation Integer Linear Programming is a framework used to formulate a given problem using a finite number of variables. The problem definition is given as follows: **Problem.**\(p\)-Opt-ILP _Instance:_ A matrix \(A\in\mathbb{Z}^{msp}\), and vectors \(b\in\mathbb{Z}^{m}\) and \(c\in\mathbb{Z}^{p}\). _Objective:_ Find a vector \(x\in\mathbb{Z}^{p}\) that minimizes \(c^{\top}x\) and satisfies that \(Ax\geq b\). _Parameter:_\(p\), the number of variables. Lenstra [18] showed that deciding the feasibility of a \(p\)-ILP is fixed-parameter tractable with running time doubly exponential in \(p\), where \(p\) is the number of variables. Later, Kannan [14] gave a \(p^{p}\) algorithm for \(p\)-ILP. Fellows et al. [5] proved that \(p\)-Opt-ILP, the optimization version of the problem, is also fixed-parameter tractable. 
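Before moving on to the ILP-based algorithms, it may help to see the objects of study in executable form. The following Python sketch is not part of the original paper: it checks the protection condition \(|N[v]\cap S|\geq|N(v)\setminus S|\) that the case analyses in Lemmas 7 and 8 verify by hand, and it computes a minimum defensive alliance of a small graph through a generic 0/1 integer program solved with SciPy's MILP interface (an illustrative tool choice; this generic formulation is unrelated to the parameterized ILP formulations developed in the following sections).

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def is_defensive_alliance(adj, S):
    """A non-empty S is a defensive alliance if every v in S satisfies
    |N[v] intersect S| >= |N(v) \\ S| (defenders at least match attackers)."""
    if not S:
        return False
    for v in S:
        defenders = 1 + sum(1 for u in adj[v] if u in S)   # closed neighbours in S
        attackers = sum(1 for u in adj[v] if u not in S)
        if defenders < attackers:
            return False
    return True

def minimum_defensive_alliance(adj):
    """Generic 0/1 ILP: minimise sum_v x_v subject to
    2 * sum_{u in N(v)} x_u + (1 - d(v)) * x_v >= 0 for all v
    (the protection condition, vacuous when x_v = 0) and sum_v x_v >= 1."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    rows, lower = [], []
    for v in nodes:
        row = np.zeros(len(nodes))
        for u in adj[v]:
            row[idx[u]] += 2.0
        row[idx[v]] += 1.0 - len(adj[v])
        rows.append(row)
        lower.append(0.0)
    rows.append(np.ones(len(nodes)))            # non-empty alliance
    lower.append(1.0)
    res = milp(c=np.ones(len(nodes)),
               constraints=LinearConstraint(np.array(rows), np.array(lower),
                                            np.full(len(lower), np.inf)),
               integrality=np.ones(len(nodes)),
               bounds=Bounds(0, 1))
    return {v for v in nodes if res.x[idx[v]] > 0.5}

if __name__ == "__main__":
    # toy example: two triangles joined by the edge {1, 3}
    adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1, 4, 5}, 4: {3, 5}, 5: {3, 4}}
    S = minimum_defensive_alliance(adj)
    print(S, is_defensive_alliance(adj, S))     # a size-2 alliance, e.g. {4, 5}
```

In this toy graph every single vertex has at least two neighbours outside any singleton set, so no singleton is protected and the optimum has size two, which the checker confirms.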
Theorem 5.1: [5] The optimization version of \(p\)-variable Integer Linear Programming can be solved using \(\mathcal{O}(p^{2.5p+o(p)}\cdot L\cdot log(MN))\) arithmetic operations and space polynomial in \(L\), where \(L\) is the number of bits in the input, \(N\) is the maximum absolute value any variable can take, and \(M\) is an upper bound on the absolute value of the minimum taken by the objective function. Theorem 5.2: Given a graph \(G=(V,E)\) and \(D\subseteq V\) such that \(V\setminus D\) is a clique, the Defensive alliance problem can be solved in \(\mathcal{O}^{*}(f(|D|))\) time. Consider a graph \(G=(V,E)\) and \(D\subseteq V\) such that \(|D|\) is the distance to clique of \(G\) and \(C=V\setminus D\). We partition the vertices of \(C\) into \(t\) twin classes, which are represented by \(C_{1},C_{2},...,C_{t}\) (\(t\leq 2^{|D|}\)), such that all the vertices in a twin class \(C_{i}\) have same adjacency in \(D\). Let \(S\subseteq V\) be a defensive alliance of \(G\). We guess the vertex sets \(P=S\cap D\) and compute \(S_{C}=S\cap C\). We also guess a subset of twin classes \(C_{N}\subseteq C\) from which no vertices are picked in the solution. See Figure 3 for an illustration. After the guess of \(P\) and \(C_{N}\), we compute \(S_{C}\) using integer linear programming. For each of \(u\in D\), we define \(demand(u)\) = \(\lceil\frac{1}{2}(d(u)+1)\rceil-|N[u]\cap P|\). For each \(u\in D\), we denote by \(M(u)\) the set of indices \(i\) such that \(C_{i}\subseteq N(u)\). In other words, \(M(u)\) represents the indices of the twin classes from \(C\) that \(u\) is adjacent to. Let \(x_{i}\) represent the number of vertices in \(C_{i}\cap S\). In our ILP formulation, there are \(t\) variables that are \(x_{1},x_{2},...,x_{t}\). **Lemma 9**.: The set \(S\) is a defensive alliance if and only if 1. For \(\forall C_{i}\in C_{N}\), \(x_{i}\) = 0. 2. For each \(u\in P\), \(\sum_{i\in M(u)}x_{i}\geq demand(u)\). 3. Each \(v\in C\setminus C_{N}\) has to satisfy \(|N(v)\cap P|+\sum_{i\in\{1,2,...,t\}}x_{i}\geq|N[v]\setminus P|-\sum_{i\in\{1,2,...,t\}}x_{i}-1\). Proof.: 1. As \(C_{N}\) represents the set of twin classes with no vertices in \(S\), we have \(x_{i}\) = 0 for all \(C_{i}\in C_{N}\). 2. For each \(u\in P\), \(d(u)\) = \(|N(u)\cap S|+|N(u)\setminus S|\) and \(|N(u)\cap S|\geq|N(u)\setminus S|-1\) holds if and only if \(2*|N(u)\cap S|\geq d(u)-1\), which is equivalent to \(|N(u)\cap S_{C}|\geq demand(u)\) implies \(\sum_{i\in M(u)}x_{i}\geq demand(u)\). Figure 3: Partitioning of the vertex set \(V\) into sets \(D\) and \(C\), where \(|D|\) is the distance to clique and \(C\) is a clique. 3. For each \(v\in C\setminus C_{N}\), \(|N(v)\cap S|=|N(v)\cap P|+|S_{C}|\), which is indeed \(|N(v)\cap P|+\sum_{i\in\{1,2,\ldots,t\}}x_{i}\); \(|N(v)\setminus S|-1=|(N(v)\cap D)\setminus P|+|C|-\sum_{i\in\{1,2,\ldots,t\}}x_ {i}-1\), which equals \(|N[v]\setminus P|-\sum_{i\in\{1,2,\ldots,t\}}x_{i}-1\). The ILP formulation for the defensive alliance is given as Minimize \[\sum_{i\in\{1,2,\ldots,t\}}x_{i}\] Subject to * \(x_{i}=0\), for \(\forall C_{i}\in C_{N}\). * \(\sum_{i\in M(u)}x_{i}\geq demand(u)\), for each \(u\in P\). * \(|N(v)\cap P|+\sum_{i\in\{1,2,\ldots,t\}}x_{i}\geq|N[v]\setminus P|-\sum_{i\in\{ 1,2,\ldots,t\}}x_{i}-1\), for every \(v\in C\setminus C_{N}\). * \(x_{i}\leq|C_{i}|\), for each \(i\in\{1,2,...,t\}\). The ILP will output the optimal values of \(x_{i}\) for all \(i\in\{1,2,...,t\}\). If \(x_{i}>0\), we need to pick \(x_{i}\) vertices from \(C_{i}\). 
As all the vertices in \(C_{i}\) have the same neighbourhood, we can pick any \(x_{i}\) vertices. Hence, we obtain the vertex set \(S_{C}\). In our ILP formulation, we have at most \(2^{|D|}\) variables. The values of all the variables and the objective function are bounded by \(n\). The constraints can be represented using \(\mathcal{O}(4^{|D|}\cdot logn)\) bits. With the help of Theorem 3.1, we will be able to solve the problem with the guess \((P,\;C_{N})\) in FPT time. There are \(2^{|D|}\) candidates for \(P\) and \(2^{2^{|D|}}\) candidates for \(C_{N}\). To obtain \(S_{C}\), we solve \(8^{|D|}\) ILP formulas where each formula can be computed in \(\mathcal{O}^{*}(f(|D|))\) time. This concludes the proof of Theorem 3.1. ## 6 Defensive alliance parameterized by twin cover In this section, we show that the Defensive alliance problem is FPT parameterized by twin cover. We give an ILP formulation for the combined parameter twin cover, the size of the largest clique outside the twin cover. Using this, we show that the Defensive alliance problem is FPT for the parameter twin cover. Definition 3: For a graph \(G=(V,E)\), the parameter _twin cover_ is the cardinality of the smallest set \(T\subseteq V\) such that \(V\setminus T\) is a disjoint union of cliques wherein all the vertices in each clique have the same adjacency in the twin cover. Theorem 6.1: [10] If a minimum twin cover in \(G\) has size at most \(k\), then it is possible to compute a twin cover of size at most \(k\) in time \(\mathcal{O}(|E||V|+k|V|+1.2738^{k})\). Theorem 6.2: Given a graph \(G=(V,E)\), \(T\subseteq V\) is a twin cover of \(G\) and \(z\) is the size of the largest clique outside the twin cover, the Defensive alliance problem can be solved in \(\mathcal{O}^{*}(f(|T|,z))\) time. Consider a graph \(G=(V,E)\). Let \(T\subseteq V\) be a twin cover of \(G\) and \(C=V\setminus T\). We partition the vertices of \(C\) into \(t\) clique sets which are represented by \(C_{1},C_{2},...,C_{t}\) (\(t\leq 2^{|T|}\)), such that all the vertices in a clique set \(C_{i}\) have same adjacency in \(T\). Let \(S\subseteq V\) be a defensive alliance of \(G\). We guess the vertex set \(P=S\cap T\) and compute \(S_{C}=S\cap C\). For each of \(u\in P\), we define \(demand(u)\) = \(\lceil\frac{1}{2}(d(u)+1)\rceil-|N[u]\cap P|\). For each \(u\in P\), we denote by \(M(u)\) the set of indices \(i\) such that \(C_{i}\subseteq N(u)\). In other words, \(M(u)\) represents the indices of the clique sets from \(C\) that \(u\) is adjacent to. We have at most \(z\) different size cliques in each clique set whose sizes range from \(1\) to \(z\). We represent the cliques of size \(l\) in the clique set \(C_{i}\) as \(C_{i}^{l}\). We place cliques of all sizes from each clique set into one of the following three types: _full_, _partial_ and _null_. _full_ clique has all of its vertices picked in the solution, _partial_ clique has some of its vertices picked, whereas a _null_ clique has no vertices picked. \(C_{i}^{l,F}\) represents the union of all _full_ cliques in \(C_{i}^{l}\). \(C_{i}^{l,P},C_{i}^{l,N}\) represent the union of all _partial_ cliques and union of all _null_ cliques in \(C_{i}^{l}\) respectively. We denote each _partial_ clique in \(C_{i}^{l,P}\) by \(C_{i}^{l,P_{j}}\), where \(j\) denotes the index of a partial clique. See Figure 4 for an illustration of _full_, _partial_ and _null_ cliques of length two. Let \(x_{i}^{l,P_{j}}\) represent the number of vertices in \(C_{i}^{l,P_{j}}\cap S\). 
In the ILP formulation, we need an individual variable for every _partial_ clique in \(C_{i}^{l,P}\). Here, the idea is to limit the number of partial cliques in each clique set \(C_{i}\), which results in formulating the ILP using the desired number of variables. Lemma 10: Consider a set \(C_{i}^{l}\) from a clique set \(C_{i}\), there exists an optimal solution with at most \(l-1\)_partial_ cliques from \(C_{i}^{l}\). Proof: Consider an optimal solution \(S\) with \(p\) partial cliques from \(C_{i}^{l}\). We construct another optimal solution \(S^{\prime}\) from \(S\) with at most \(l-1\)_partial_ cliques as follows. We arrange the _partial_ cliques in \(S\) in ascending order based on their number of vertices in \(S\). Let the order be \(C_{i}^{l,P_{1}},C_{i}^{l,P_{2}},...,C_{i}^{l,P_{p}}\). We perform the following operation repeatedly to obtain \(S^{\prime}\). In any iteration, let the first clique in the list be \(C^{*}\). We replace each vertex of \(C^{*}\) that is in \(S\) with a vertex that is not in \(S\) in each of the last \(|C^{*}|\) cliques in the list. We push \(C^{*}\) to _null_ clique set. If there are any other cliques that become _full_ by the recent addition of vertices, then we simply move them into _full_ cliques set. We perform this until we can place all the vertices of \(C^{*}\) into the cliques of larger sizes. It is easy to see that all the vertices in the resultant instance are protected as the number of vertices from any clique that belongs to the solution only grows. There is no change in the number of vertices that go into the solution, therefore \(S^{\prime}\) is also optimal. The maximum number of _partial_ cliques that would remain when this process comes to a halt is \(l\)-\(1\)_partial_ cliques with \(l-1\) vertices in \(S\) from each clique. Hence, we conclude that there exists an optimal solution with at most \(l-1\)_partial_ cliques. Figure 4: Partitioning of the vertex set \(V\) into sets \(T\) and \(C\), where \(T\) is the twin cover and \(C\) is the union of clique sets outside \(T\). The figure also highlights the cliques of type _full_, _partial_ and _null_ of length two in each clique set. The blue vertices belong in the alliance set. Let \(y_{i}^{l}\) be the number of _partial_ cliques in \(C_{i}^{l}\). From Lemma 10, it is clear that there is an optimal solution with at most \(l-1\)_partial_ cliques from \(C_{i}^{l}\) and we have \(y_{i}^{l}\leq l-1\). We guess the cliques from \(C_{i}^{l}\) that go into \(C_{i}^{l,P}\) in \(y_{i}^{l}+1\) ways and from the remaining \(|C_{i}-C_{i}^{l,P}|\) cliques, \(C_{i}^{l,F}\) can be guessed in \(m-y_{i}^{l}+1\) ways, where \(m\) is the number of cliques in \(C_{i}^{l}\). As we have guessed \(P,C_{i}^{l,F}\) and \(C_{i}^{l,P}\), we compute \(S_{C}\) using integer linear programming. \(x_{i}^{l,P}\) represents the sum of \(x_{i}^{l,P_{1}},x_{i}^{l,P_{2}},...,x_{i}^{l,P_{u_{i}}^{l}}\). In our ILP formulation, there are at most \(\sum_{i=1}^{t}\sum_{l=1}^{z}y_{i}^{l}\) variables that are \(x_{1}^{1,P_{1}},...,x_{1}^{z,P_{y_{1}^{z}}},x_{2}^{1,P_{1}},...,x_{2}^{z,P_{y_ {2}^{z}}},...,x_{t}^{z,P_{y_{1}^{z}}}\). Lemma 11: The set \(S\) is a Defensive alliance if and only if 1. For each \(u\in P\), \(\sum_{i\in M(u)}\ \sum_{l=1}^{l=z}|C_{i}^{l,F}|+\sum_{i\in M(u)}\ \sum_{l=1}^{l=z}\ \sum_{j=1}^{j=y_{i}^{l}}x_{i}^{l,P_{j}}\geq demand(u)\). 2. Each \(v\in C_{i}^{l,P_{j}}\) has to satisfy \(|N(v)\cap P|+x_{i}^{l,P_{j}}\geq|N[v]\setminus P|-x_{i}^{l,P_{j}}-1\). Proof: 1. 
For each \(u\in P\), \(d(u)=|N(u)\cap S|+|N(u)\setminus S|\) and \(N(u\cap S_{C})=\sum_{i\in M(u)}\)\(\sum_{l=1}^{l=z}|C_{i}^{l,F}|+\sum_{i\in M(u)}\sum_{l=1}^{l=z}\sum_{j=1}^{j=y_{i}^ {l}}x_{i}^{l,P_{j}}\). \(|N(u)\cap S|\geq|N(u)\setminus S|-1\) holds if and only if \(2*|N(u)\cap S|\geq d(u)-1\), which implies \(\sum_{i\in M(u)}\sum_{l=1}^{l=z}|C_{i}^{l,F}|+\sum_{i\in M(u)}\sum_{l=1}^{l=z }\sum_{j=1}^{j=y_{i}^{l}}x_{i}^{l,P_{j}}\geq demand(u)\). 2. For each \(v\in C_{i}^{l,P_{j}}\), \(|N(v)\cap S|=|N(v)\cap P|+x_{i}^{l,P_{j}}\); \(|N(v)\setminus S|-1=|N[v]\setminus P|-x_{i}^{l,P_{j}}-1\). The ILP formulation for the Defensive alliance is given as Minimize \(\sum_{i=1}^{i=t}\sum_{l=1}^{l=z}\sum_{j=1}^{j=y_{i}^{l}}x_{i}^{l,P_{j}}\) Subject to * \(\sum_{i\in M(u)}\sum_{l=1}^{l=z}|C_{i}^{l,F}|+\sum_{i\in M(u)}\sum_{l=1}^{l=z }\sum_{j=1}^{j=y_{i}^{l}}x_{i}^{l,P_{j}}\geq demand(u)\), for each \(u\in P\). * \(|N(v)\cap P|+x_{i}^{l,P_{j}}\geq|N[v]\setminus P|-x_{i}^{l,P_{j}}-1\), for every \(v\in C_{i}^{l,P_{j}}\). * \(x_{i}^{l,P_{j}}<l\), for each \(i\in\{1,2,...,t\}\), \(l\in\{1,2,...,z\}\) and \(j\in\{1,2,...,y_{i}^{l}\}\). In our ILP formulation, we have at most \(\sum_{i=1}^{t}\sum_{l=1}^{z}y_{i}^{l}\) variables, where \(z\) is the size of the largest clique outside the twin cover and \(y_{i}^{l}\leq l-1\). The values of all the variables and the objective function are bounded by \(n\). The constraints can be represented using \(\mathcal{O}(\sum_{i=1}^{t}\sum_{l=1}^{z}y_{i}^{l}\cdot|T|\cdot logn)\) bits. With the help of Theorem 4, we will be able to solve the problem with the guess \((P,C_{i}^{l,F}\) and \(C_{i}^{l,P})\) in FPT time. There are \(2^{|T|}\) candidates for \(P\) and there are \(\sum_{i=1}^{t}\sum_{l=1}^{z}y_{i}^{l}\cdot\mathcal{O}(n)\) candidates for \((C_{i}^{l,F}\) and \(C_{i}^{l,P})\). To obtain \(S_{C}\), we solve \(2^{|T|}\cdot\sum_{i=1}^{t}\sum_{l=1}^{z}y_{i}^{l}\cdot\mathcal{O}(n)\) ILP formulas, where each formula can be computed in \(\mathcal{O}^{*}(f(|TC|,z))\) time. This concludes the proof of Theorem 6.1. Theorem 6.1: _The Defensive alliance problem is fixed-parameter tractable parameterized by twin cover._ Lemma 12: _If there exists a clique \(C^{*}\in C_{i}\) of size at least \(|N(C_{i})\cap T|\) then there exist a defensive alliance \(S\subseteq C^{*}\)._ Proof: Consider a clique set \(C_{i}\) outside the twin cover. Let \(C^{*}\) be the clique from \(C_{i}\) of size at least \(|N(C_{i})\cap T|\). Let \(v\) be the vertex from \(C^{*}\) that is also a part of \(S\). The closed neighbourhood of \(v\) is \(|C^{*}|+|N(C_{i})\cap T|\). In order to protect \(v\), we need to push at least \(\frac{|C^{*}|+|N(C_{i})\cap T|}{2}\) neighbours of \(v\) to \(S\). This can be done in multiple ways, but we obtain a defensive alliance with smaller cardinality by picking all the \(\frac{|C^{*}|+|N(C_{i})\cap T|}{2}\) vertices from \(C^{*}\) itself. This concludes that, if there is any clique \(C^{*}\) of size at least \(|N(C_{i})\cap T|\) has a vertex in \(S\) then there exists a defensive alliance, \(S\subseteq C^{*}\). We consider two cases: (1) There exists a vertex from a clique \(C^{*}\in C_{i}\) of size at least \(|N(C_{i})\cap T|\) that is part of \(S\). (2) No vertex from clique \(C^{*}\in C_{i}\) of size at least \(|N(C_{i})\cap T|\) is a part of \(S\). For case 1, we solve \(2^{|TC|}\) subproblems corresponding to each clique set. 
We find the optimal solution in each subproblem by considering a vertex from the shortest clique of size at least \(|N(C_{i})\cap S|\) to be in \(S\). From Lemma 12, each subproblem can be solved in linear time in \(n\). The minimum value among all the \(2^{|TC|}\) subproblems will be the optimal solution for this case. For case 2, we remove all the cliques from \(G\) that cannot be a part of the solution and obtain a new graph \(G^{\prime}\). For each vertex \(u\in P\), the value of \(demand(u)\) will remain the same as calculated in \(G\). The size of the largest clique outside the twin cover in \(G^{\prime}\) is at most \(|TC|\). With the help of ILP formulation given in Theorem 6.1 and a bound on the largest clique outside the twin cover \(z\leq|TC|\), the instance can be solved in FPT time. The optimal solution to the problem is the minimum value obtained between the two cases. This concludes the proof of Theorem 6.1. ## 7 Conclusions and Open Problems In this work, we have proved that the Defensive alliance problem is _polynomial-time solvable_ on graphs with maximum degree at most 5 and NP-Complete on graphs with maximum degree 6. The byproduct of our result is that the problem is para-NP-hard parameterized by the maximum degree of the input graph. Therefore, one could also work on larger structural parameters than the maximum degree, such as the bandwidth and the maximum leaf number. A study of the offensive and powerful alliance problems on bounded degree graphs can also be considered. We have also proved that the problem is fixed-parameter tractable parameterized by twin cover and distance to clique. The problem remains unsolved for the parameters modular width, and distance to cluster which is also an interesting direction to pursue. It is interesting to study the parameterized complexity of the offensive and powerful alliances for the parameter twin cover and distance to cluster. ## Acknowledgements The authors thank the anonymous reviewers for their valuable comments and suggestions.
2304.10761
Exact Method of Moments for multi-dimensional population balance equations
The unique properties of anisotropic and composite particles are increasingly being leveraged in modern particulate products. However, tailored synthesis of particles characterized by multi-dimensional dispersed properties remains in its infancy, and few mathematical models for their synthesis exist. Here, we present a novel, accurate and highly efficient numerical approach to solve a multi-dimensional population balance equation, based on the idea of the exact method of moments for nucleation and growth \cite{pflug2020emom}. The transformation of the multi-dimensional population balance equation into a set of one-dimensional integro-differential equations allows us to exploit accurate and extremely efficient numerical schemes that markedly outperform classical methods (such as finite volume type methods), as demonstrated by convergence tests. Our approach not only provides the complete particle size distribution over time, but also offers insights into particle structure. The presented scheme and its performance are exemplified for the coprecipitation of nanoparticles. For this process, a generic growth law is derived, and parameter studies as well as convergence series are performed.
Adeel Muneer, Tobias Schikarski, Lukas Pflug
2023-04-21T06:09:45Z
http://arxiv.org/abs/2304.10761v1
# Exact Method of Moments for multi-dimensional population balance equations ###### Abstract The unique properties of anisotropic and composite particles are increasingly being leveraged in modern particulate products. However, tailored synthesis of particles characterized by multi-dimensional dispersed properties remains in its infancy and few mathematical models for their synthesis exist. Here, we present a novel, accurate and highly efficient numerical approach to solve a multi-dimensional population balance equation, based on the idea of the exact method of moments for nucleation and growth [1]. The transformation of the multi-dimensional population balance equation into a set of one-dimensional integro-differential equations allows us to exploit accurate and extremely efficient numerical schemes that markedly outperform classical methods (such as finite volume type methods) which is outlined by convergence tests. Our approach not only provides information about complete particle size distribution over time, but also offers insights into particle structure. The presented scheme and its performance is exemplified based on coprecipitation of nanoparticles. For this process, a generic growth law is derived and parameter studies as well as convergence series are performed. keywords: multi-dimensional, exact method of moments, population balance equation, fixed point equation, integro-differential equation, growth kinetics, synthesis, nanoparticle, nonlocal conservation laws, method of characteristics + Footnote †: journal: - ## 1 Introduction and problem definition The modeling and efficient numerical approximations of multi-dimensional (MD) nanoparticle (NP) synthesis are increasingly required as the properties of anisotropic particles play a major role in various applications, including bio-nanosensors or catalysts [2; 3; 4]. In recent years, automated analysis of MD particle shape distributions - essential to validate and calibrate MD process models - has been demonstrated for a range of composite and anisotropic NPs [5; 6; 7], unlocking the possibility of using predictive modeling for MD-NP synthesis. To this end, we here extend the recently derived _exact Method of Moments_ (eMoM[1]) to MD population balance equations (PBEs). As an example, we study a class of balance laws describing coprecipitation of particles characterized by their size and composition, such as the seeded growth of nanoalloys. However, the scheme can be applied more generally and is also applicable to e.g., the modeling of shape anisotropic growth (see e.g., the kinetics analyzed in [8; 9]). In general, we study the following MD-PBE: **Definition 1.1** (Multi-dimensional population balance equation).: _The evolution of the particle population \(q\) can be macroscopically described by the following MD-PBE:_ \[\begin{split} q_{t}(t,\mathbf{x})+\nabla_{\mathbf{x}}(\mathbf{\mathcal{G}}( \mathbf{c}(t),\mathbf{x})q(t,\mathbf{x}))&=0,\\ q(0,\mathbf{x})&=q_{0}(\mathbf{x}),\end{split} \tag{1}\] _with \(\mathbf{x}\in\mathcal{X}\subset\mathbb{R}^{n}\), \(t\in[0,T]\). The concentration of the \(i\)-th educt species, i.e. 
\(\mathbf{c}_{i}\), is defined by:_ \[\mathbf{c}_{i}(t)=\tfrac{1}{V}\mathbf{m}_{i}(t)-\tfrac{\rho_{i}}{V}\iint\limits_{\mathbf{ x}}V_{i}(\mathbf{x})q(t,\mathbf{x})\,\mathrm{d}\mathbf{x}, \tag{2}\] _where \(q\) denotes the particle number density, \(t\) the process time, \(\mathbf{x}\) the shape parametrization of the dispersed phase, \(\mathbf{\mathcal{G}}\) the concentration-dependent growth rate in the first argument and shape in the second argument, \(q_{0}\) the initial particle number density, \(\mathbf{c}_{i}\) the concentration of the \(i\)-th educt species, \(\mathbf{m}_{i}\) the total mass of the \(i\)-th precipitated component, \(\mathcal{V}\) the reactor volume, \(\mathbf{\rho}_{i}\) the density of the \(i\)-th species in the NPs, \(\mathbf{\mathcal{X}}\) the set of admissible particle shapes, and \(\mathbf{V}_{i}(\mathbf{x})\) the volume of the \(i\)-th component in a particle of shape \(\mathbf{x}\)._ Conservation of mass (eq. (2)) for each component couples the PBE solution \(q\) and the concentrations \(c_{i}\), rendering the PBE eq. (1) and (2) a (multi-dimensional) nonlocal conservation law [10]. In this manuscript, we reformulate the MD-PBE purely in terms of the concentrations, i.e., the driving forces. This idea was recently applied to the one-dimensional case in [1]. The advantage of such a reformulation lies in reducing the MD-PBE eq. (1) to a set of one-dimensional integro-differential equations eq. (5). The resulting equations prescribing the time-evolution of the concentrations can then be efficiently discretized and numerically approximated. For the most simple case involving two-chemical components and the evolution of composite particles, this reformulation reduces the need for numerically approximating one three-dimensional function, here the number density function \(q\), to approximating two one-dimensional functions, i.e., the evolution of the two concentrations \(c_{i}\). ## 2 Multi-dimensional exact method of moments Our aim is to derive a solely concentration-dependent (\(\mathbf{c}\)) equation based on eq. (2). Using the method of characteristics, see e.g. [10], for a given concentration \(\mathbf{c}\), the solution of eq. (1) can be stated as follows: \[q(t,\mathbf{x})=q_{0}(\mathbf{\xi}^{\mathbf{c}}[t,\mathbf{x}](0))\text{det}(D_{2}\mathbf{\xi}^{\mathbf{ c}}[t,\mathbf{x}](0)), \tag{3}\] with the characteristics satisfying: \[\begin{split}\partial_{3}\mathbf{\xi}^{\mathbf{c}}[t,\mathbf{x}](\tau)& =\mathbf{\mathcal{G}}(\mathbf{c}(\tau),\mathbf{\xi}^{\mathbf{c}}[t,\mathbf{x}](\tau)), \\ \mathbf{\xi}^{\mathbf{c}}[t,\mathbf{x}](t)&=\mathbf{x},\end{split} \tag{4}\] for every \((\tau,t,\mathbf{x})\in[0,T]^{2}\times\mathbf{\mathcal{X}}\) for \(T\in\mathbb{R}_{>0}\) and with \(\mathbf{\mathcal{X}}\subset\mathbb{R}^{2}\). As the characteristics depend on the concentration, we have added the concentration \(\mathbf{c}\) as a superscript to illustrate this dependency. By plugging the solution formula eq. (3) into eq. 
(2), and integrating by substitution using \(\mathbf{x}=\mathbf{\xi}^{\mathbf{c}}[0,\mathbf{y}](t)\), we end up - assuming \(\mathbf{V}_{i}(\mathbf{x})=0\,\forall\mathbf{x}\notin\mathbf{\mathcal{X}}\) - with the following integro-differential equation: **Definition 2.1** (Integral fixed-point-problem for the concentrations \(\mathbf{c}\)).: \[\mathbf{c}_{i}(t)=\tfrac{1}{\mathcal{V}}\mathbf{m}_{i}(t)-\tfrac{\mathbf{\rho}_{i}}{ \mathcal{V}}\iint_{\mathbf{\mathcal{X}}}\mathbf{V}_{i}(\mathbf{\xi}^{\mathbf{c}}[0,\mathbf{y}](t ))q_{0}\left(\mathbf{y}\right)\,\mathrm{d}\mathbf{y}.\] (5) The idea of eMoM ([1]) is now to numerically approximate eq. (5) instead of eq. (1) and (2). With \(\mathbf{c}\) as a solution of eq. (5), we can _post hoc_ evaluate the PBE solution \(q\) using eq. (3). Thus, to obtain the full PBE solution \(q\), we only need to compute a small number (the number of components involved) of time-dependent scalar quantities, i.e., the concentration \(\mathbf{c}\). Thus, the reformulation eq. (5) of eq. (1) and (2) is highly advantageous whenever eq. (4) can be solved analytically for an arbitrary but given concentration \(\mathbf{c}\), which is indeed possible for a distinct class of growth law functions. In the following, we first introduce rather general growth kinetics for coprecipitation processes, and then derive the analytical solution of the corresponding characteristics equation. ## 3 Kinetics of coprecipitation In this section, we derive - based on generic physical assumptions - the growth kinetics of NPs in a coprecipitation process, in which two educt species simultaneously assemble in one particle ensemble. This approach is, e.g., in line with the current process understanding of synthesized nanoalloys [11], but also applies to the synthesis of multi-component battery cathode materials such as nickel-manganese-cobalt hydroxide [12]. We assume that the change in NP radius is given by the sum of two growth rates, which depend on the radius to the exponent \(n\) and the two concentrations \(\mathbf{c}_{1}\) and \(\mathbf{c}_{2}\), i.e.: \[\dot{r}(t)=\left(\mathbf{G}_{1}(\mathbf{c}_{1}(t))+\mathbf{G}_{2}(\mathbf{c}_{2}(t))\right)r( t)^{n}, \tag{6}\] where \(\mathbf{G}_{1}\) and \(\mathbf{G}_{2}\) denote the growth kinetics of the two species. For the change in NP size, we can thus model, e.g. size-independent growth, i.e. \(n=0\)[13], or diffusion-limited growth, i.e., \(n=-1\)[14; 15]. For a particle with a given radius (\(r>0\)), we further define the volume fraction or composition \(f\) and its derivative as: \[\begin{split} f(t)&=\tfrac{\mathbf{v}_{1}(t)}{\mathbf{v}_{1 }(t)+\mathbf{v}_{2}(t)},\\ \dot{f}(t)&=\tfrac{\dot{\mathbf{v}}_{1}(t)}{\mathbf{v}_{1}(t) +\mathbf{v}_{2}(t)}-f(t)\tfrac{\dot{\mathbf{v}}_{1}(t)+\dot{\mathbf{v}}_{2}(t)}{\mathbf{v}_{1}( t)+\mathbf{v}_{2}(t)}.\end{split} \tag{7}\] Here, \(\mathbf{v}_{1}(t)\) and \(\mathbf{v}_{2}(t)\) are the volumes of the first and second chemical components in the NP at time \(t\), respectively. Clearly, \(\mathbf{v}_{1}(t)+\mathbf{v}_{2}(t)=\tfrac{4}{3}\pi r(t)^{3}\), and thus \(\dot{\mathbf{v}}_{1}(t)+\dot{\mathbf{v}}_{2}(t)=4\pi r^{2}\dot{r}=4\pi r^{n+2}\left( \mathbf{G}_{1}(\mathbf{c}_{1}(t))+\mathbf{G}_{2}(\mathbf{c}_{2}(t))\right)\). 
As \(\mathbf{v}_{1}(t)\) and \(\mathbf{v}_{2}(t)\) can only change due to \(\mathbf{G}_{1}(\mathbf{c}_{1}(t))\) and \(\mathbf{G}_{2}(\mathbf{c}_{2}(t))\), respectively, we can assign \(\dot{\mathbf{v}}_{1}(t)=4\pi r^{n+2}\mathbf{G}_{1}(\mathbf{c}_{1}(t))\) and \(\dot{\mathbf{v}}_{2}(t)=4\pi r^{n+2}\mathbf{G}_{2}(\mathbf{c}_{2}(t))\). Now substituting the volume derivatives into eq. (7) together with the identity \(\mathbf{v}_{1}(t)+\mathbf{v}_{2}(t)=\tfrac{4}{3}\pi r(t)^{3}\), we obtain the following growth kinetics describing the change in radius and volume composition for a given radius, composition, and concentrations: \[\mathbf{\mathcal{G}}(\mathbf{c}(t),\mathbf{x})=\begin{pmatrix}\tfrac{\mathbf{G}_{1}(\mathbf{c}_{1}(t))+\mathbf{G}_{2}(\mathbf{c}_{2}(t))}{\mathbf{x}_{1}^{-n}}\\ \tfrac{3\mathbf{G}_{1}(\mathbf{c}_{1}(t))-3(\mathbf{G}_{1}(\mathbf{c}_{1}(t))+\mathbf{G}_{2}(\mathbf{c}_{2}(t)))\,\mathbf{x}_{2}}{\mathbf{x}_{1}^{1-n}}\end{pmatrix}. \tag{8}\] Note that \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) denote the radius and the volume fraction coordinates, respectively. Figure 1: Sketch of the change in size (\(\dot{r}\)) and composition (\(\dot{f}\)) in the growth of alloy nanoparticles. The characteristics ODE eq. (4) with the growth kinetics eq. (8) can be solved analytically for given time-dependent concentrations \(\mathbf{c}\): \[\mathbf{\xi}_{1}^{\mathbf{c}}[t,\mathbf{x}](\tau)=\left(\mathbf{x}_{1}^{1-n}+(1-n)\int_{t}^{\tau}\big(\mathbf{G}_{1}(\mathbf{c}_{1}(s))+\mathbf{G}_{2}(\mathbf{c}_{2}(s))\big)\,\mathrm{d}s\right)^{\frac{1}{1-n}},\] \[\mathbf{\xi}_{2}^{\mathbf{c}}[t,\mathbf{x}](\tau)=\mathrm{e}^{-A(\tau)}\left(\mathbf{x}_{2}+\int_{t}^{\tau}\frac{3\,\mathbf{G}_{1}(\mathbf{c}_{1}(s))}{\mathbf{\xi}_{1}^{\mathbf{c}}[t,\mathbf{x}](s)^{1-n}}\,\mathrm{e}^{A(s)}\,\mathrm{d}s\right),\qquad A(s):=3\int_{t}^{s}\frac{\mathbf{G}_{1}(\mathbf{c}_{1}(u))+\mathbf{G}_{2}(\mathbf{c}_{2}(u))}{\mathbf{\xi}_{1}^{\mathbf{c}}[t,\mathbf{x}](u)^{1-n}}\,\mathrm{d}u.\] On a time grid \(\mathbf{t}_{1}<\dots<\mathbf{t}_{K}\) with piecewise constant concentration values \(\mathbf{C}_{i,\ell}\), the corresponding discrete exponential factor reads \[\Psi_{i,k}(x):=\exp\left(3\sum_{\ell=1}^{k-1}\frac{\left(\mathbf{G}_{1}(\mathbf{C}_{1,\ell})+\mathbf{G}_{2}(\mathbf{C}_{2,\ell})\right)\left(\mathbf{t}_{\ell+1}-\mathbf{t}_{\ell}\right)}{\mathbf{\xi}_{1}^{\mathbf{C}}(i,\mathbf{x},\ell)^{1-n}}\right).\] The first solution formula allows us to evaluate the particle distribution at any time and disperse property, while the second enables the initial datum to be easily tracked over time, as only the disperse properties \(\mathbf{x}\) within the support of the initial datum \(q_{0}\) have to be evaluated. 
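To make the scheme concrete, the following Python sketch (not taken from the paper; the paper's Algorithm 1 is not reproduced here) advances Lagrangian samples of the initial density \(q_{0}\) along the characteristics above and re-evaluates the mass balance of eq. (2)/(5) after every step, i.e., an explicit time discretization of the fixed-point problem. All parameter values (reactor volume, densities, rate constants, seed distribution) are illustrative assumptions, and we take \(n=0\) with linear kinetics \(\mathbf{G}_{i}(c)=k_{i}c\) for simplicity.

```python
import numpy as np

# illustrative parameters (not taken from the paper)
V_reactor = 1e-3                          # reactor volume [m^3]
m_feed    = np.array([1e-3, 1e-3])        # total educt masses m_i [kg]
rho       = np.array([5e3, 5e3])          # solid densities rho_i [kg/m^3]
k_rate    = np.array([1e-9, 2e-9])        # G_i(c) = k_i * c, with n = 0

# Lagrangian samples of the initial seed distribution q_0
n_smp = 200
r = np.linspace(15e-9, 25e-9, n_smp)      # seed radii [m]
f = np.full(n_smp, 0.5)                   # seed composition x_2 (component 1)
w = np.full(n_smp, 1e15 / n_smp)          # particle count carried by each sample

def concentrations(r, f):
    """Mass balance, eq. (2): c_i = m_i/V - rho_i/V * sum_j w_j * V_i(x_j)."""
    vol = 4.0 / 3.0 * np.pi * r**3
    v_i = np.vstack([f * vol, (1.0 - f) * vol])      # component volumes V_i
    return m_feed / V_reactor - rho / V_reactor * (v_i @ w)

dt, t_end = 5e-3, 60.0
for _ in range(int(t_end / dt)):
    c = np.maximum(concentrations(r, f), 0.0)        # driving forces c_1, c_2
    G = k_rate * c
    # characteristics, eq. (8) with n = 0:
    #   dr/dt = G_1 + G_2,   df/dt = 3 (G_1 - (G_1 + G_2) f) / r
    r, f = (r + (G[0] + G[1]) * dt,
            f + 3.0 * (G[0] - (G[0] + G[1]) * f) / r * dt)

c = concentrations(r, f)
print(f"mean radius      : {np.average(r, weights=w) * 1e9:6.1f} nm")
print(f"mean composition : {np.average(f, weights=w):6.3f}")
print(f"residual c1, c2  : {c[0]:.3e}, {c[1]:.3e} kg/m^3")
```

In this Lagrangian form only the support of \(q_{0}\) is discretized and no CFL-type restriction couples the time step to a grid in the disperse property; refining the time step alone controls the accuracy of the concentration history.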
It is worth mentioning that whatever scheme is used to approximate the evolution of the concentrations over time, the solution formula can then be utilized to evaluate the PBE solution with high accuracy - keeping in mind that this is only true if the concentrations are approximated accurately enough. Figure 2: Solute concentrations over process time for different growth kinetics (**left**); numerical approximation of the maximal absolute error in the concentrations (\(L^{\infty}\)-error) for different time discretizations \(N_{t}\) and discretizations \(N_{x}\) of the disperse property, as a function of the number of degrees of freedom (DoF) \(N_{t}N_{x}\) (**middle**); and the total computational time (**right**). The error analysis is performed for \(\mathbf{G}_{1}(c)=c,\mathbf{G}_{2}(c)=5c\). The reference solution is given by \(N_{t}\approx 10^{6}\) and \(N_{x}\approx 4\times 10^{5}\). For fixed \(N_{x}\), we obtain a clear first-order behavior with respect to both the DoF and the computational time, provided the spatial resolution is large enough. In Figure 3, the evolution of a PSD is depicted for two growth rate ratios, showing a clear change in the composition over process time. ### Comparison with classical discretization schemes We compare the numerical scheme derived here with state-of-the-art finite volume method schemes (FVM) [17; 8]. To enforce boundedness and numerical stability of the FVM, we consider total variation diminishing (TVD) schemes [17] with a _van-Leer_ limiter function enabling weighting between first and second order approximations of the growth term [18]. Figure 4 shows a numerical comparison between MD-eMoM and FVM. The newly derived MD-eMoM clearly outperforms FVM. First, it does not need to satisfy a CFL condition [19] because it relies on the solution of the characteristics. Second, MD-eMoM does not suffer from poor (coarse) discretization of the disperse property as only the support of the initial datum has to be discretized, see Figure 4. ## 6 Additional insights into Nanoparticle structure In many applications, the inner-particle structure determines the product properties, e.g., the radial composition of gold (Au) and silver (Ag) in AuAg nanoalloys determines the optical properties [11]. Having the time evolution of the educt concentrations \(\mathbf{c}\), we can also reconstruct the inner-particle composition for every particle in the number density function based on the characteristics. For a particle at time \(t\) with initial disperse property \(\mathbf{x}^{0}\), the composition \(\mathcal{F}\) at every radial position \(\mathbf{\xi}^{\mathbf{c}}(0,\mathbf{x}^{0},\tau)\) for \(\tau\in[0,t]\) is given by: \[\mathcal{F}_{\mathbf{x}^{0},t}\big{(}\mathbf{\xi}^{\mathbf{c}}_{1}(0,\mathbf{x}^{0},\tau)\big{)}=\frac{\mathbf{G}_{1}(\mathbf{c}_{1}(\tau))}{\mathbf{G}_{1}(\mathbf{c}_{1}(\tau))+\mathbf{G}_{2}(\mathbf{c}_{2}(\tau))}. \tag{15}\] This allows us to trace the evolution of the radial composition over process time and to characterize the properties of the final particle size distribution. To demonstrate the potency of this unique analysis approach, we consider the seeded growth of gold-silver alloy NPs and investigate the effect of different growth rate ratios on the inner-particle structure (here optical properties). Under the assumption of radial symmetry, the optical properties can be numerically calculated by Mie theory [20] (we use the MATLAB code of J. Schafer [21]). 
As composition-dependent material properties, i.e., refractive indices, we interpolate the measured refractive indices for gold-silver alloys by McPeak et al. [22]. We prescribe the initial datum \(q_{0}\) to be of uniform composition \(0.5\). Figure 5 (left) shows the inner-particle composition for a particle with size \(7\) nm after the solid formation process is finished. The different radial compositions due to the different growth rate ratios result in different extinction spectra, and thus optical properties, as seen in Figure 5 (right). ## 7 Conclusion and Outlook The multi-dimensional exact method of moments is an efficient way to approximate solutions of multi-dimensional population balance equations by relying on a fixed-point reformulation in terms of the driving forces - here the solute concentrations. The introduced numerical approach allows for highly accurate prediction of the evolution of the multi-dimensional number density function, and the reformulation of the governing equations into a set of integro-differential equations results in markedly improved computational efficiency and numerical accuracy compared to state-of-the-art finite volume or finite element methods. Another advantage relates to process optimization, as the derived numerical scheme is differentiable by construction and thus derivatives with respect to process conditions can be computed by utilizing the implicit-function theorem. Figure 3: Evolution of the PSD for \(G_{1}\equiv G_{2}\) (**top row**), \(G_{1}\equiv 2G_{2}\) (**middle row**) and \(G_{1}\equiv 5G_{2}\) (**bottom row**) from left to right. The PSD focusing predicted for these growth kinetics clearly shows the strong ability of eMoM to capture this behaviour. Our numerical scheme can easily be extended to take into account the nucleation and growth of composite particles, as well as the formation of anisotropic particles [8]. Moreover, it is not limited to just one composition but can handle multiple compositions. A natural straightforward extension of the scheme would be an implicit discretization of the fixed-point problem, resulting in an \(n\)-dimensional nonlinear system of equations for each time step in Algorithm 1 (\(n\) being the number of considered concentrations). Furthermore, the multi-dimensional exact method of moments idea can be coupled to fluid flow, as already outlined in Bansch et al. [23] for the one-dimensional case. ## Acknowledgements L. Pflug has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 416229255 - SFB 1411.
2306.09449
Betelgeuse: a Review
Betelgeuse has fascinated people since they first looked at the sky. Here we present a contemporary summary of the observations and theory that lead to our understanding of Betelgeuse as a massive red supergiant doomed to collapse and explosion. At only ~200 parsecs from Earth, Betelgeuse can be spatially resolved yet uncertainties in its distance remain a critical impediment to deeper understanding. The surface of Betelgeuse is rent with a complex structure as deep convective eddies arise to the surface affecting most of its measured physical properties. Determination of the equatorial rotation velocity is critical since some current estimates indicate that Betelgeuse is rotating anomalously rapidly, a property that cannot be explained by single-star evolutionary models. Betelgeuse is also moving through space at relatively high velocity that indicates that it received a boost, likely via collective interaction with other stars in its birth cluster. A bow shock and other structure in the direction of the star's motion suggest that it has affected the organization of the circumstellar and interstellar medium. Betelgeuse varies in brightness on a variety of time scales with 200, 400 and 2000 days being prominent. Betelgeuse is probable to have been born in a binary system, and the high space velocity and apparent rotation have been related to binary star evolution. One possibility is that Betelgeuse underwent common envelope evolution culminating in a final merger with the core of a massive primary. Such merger models have been invoked to account for the anomalous rotation velocity. Betelgeuse underwent a Great Dimming in 2020 that received widespread attention. Explanations have focused on large cool spots on the surface and the expulsion of a cloud of dust that obscured the surface. We sketch the nature of the explosion to come and discuss perspectives for further research.
J. Craig Wheeler, Emmanouil Chatzopoulos
2023-06-15T19:08:16Z
http://arxiv.org/abs/2306.09449v1
# Betelgeuse: a Review ###### Abstract Betelgeuse has fascinated people since they first looked at the sky. Here we present a contemporary summary of the observations and theory that lead to current understanding of Betelgeuse as a massive red supergiant doomed to eventual collapse and explosion, probably \(\sim 100\),000 years from now. Although it lies only \(\sim 200\) parsecs from Earth, and hence can be spatially resolved with appropriate instrumentation, uncertainties in its distance remain a critical impediment to deeper understanding. The surface of Betelgeuse is rent with a complex structure as deep convective eddies arise to the surface affecting the photosphere, chromosphere, mass loss, the formation of dust and molecules, and the surface magnetic field structure. The global effective temperature has some irreducible uncertainty because of associated temperature variations in the atmosphere. The surface gravity is not precisely known, leading to further uncertainties in the current mass. Determination of the equatorial rotation velocity is critical since some current estimates indicate that Betelgeuse is rotating anomalously rapidly, near rotational breakup, a property that cannot be explained by basic single-star evolutionary models. Betelgeuse is also moving through space at high, though not unprecedented, velocity that indicates that it received a boost, perhaps through collective interaction with other stars in its birth cluster, though disruption of an original binary system has been suggested. A bow shock and other structure in the direction of the motion of Betelgeuse suggests that it has affected the organization of the distant circumstellar and interstellar medium. Betelgeuse varies in brightness on a variety of time scales with \(\sim 200\) d, \(\sim 400\) d and \(\sim 2000\) d being prominent. Models of this variability may be in conflict with historical records suggesting that Betelgeuse was yellow in color, not red, only two millenia ago. Betelgeuse is also subject to a rich variety of theoretical studies that attempt to understand its observational properties and current evolutionary state. Betelgeuse is statistically probable to have been born in a binary system, and the high space velocity and apparent rotation have been related to binary star evolution. One possibility is that Betelgeuse has been subject to common envelope evolution in which a companion star plunges into the primary and becomes tidally disrupted as it nears the core of the primary. This interaction is complex in three dimensions and not sufficiently well understood. Such merger models have been invoked to account for the apparently anomalous rotation velocity. Betelgeuse underwent a Great Dimming in 2020 that caught the attention of astronomers and the general public world wide. Explanations have focused on large cool spots on the surface and the expulsion of a cloud of dust that obscured the surface. We finally sketch the nature of the explosion to come and finish with perspectives for further research. 0000-0002-4880-7888]J. Craig Wheeler 0000-0002-4073-0703]Emmanouil Chatzopoulos ## 1 Introduction Betelguese (\(\alpha\) Orionis) is a nearby, massive red supergiant (RSG) that is most likely destined to explode as a classic Type IIP supernova (SN IIP) and leave behind a neutron star. 
Study of Betelgeuse thus promises insight into a broad range of issues of the structure, evolution, rotation, magnetic fields, mass loss, stellar winds, circumstellar medium, dust formation, atmospheres, chromospheres, radiative transfer, nucleosynthesis, and, eventually, the explosion of massive stars. Betelgeuse is special because its propinquity allows its image to spatially be resolved. Betelgeuse also has properties such as its runaway kinematics that may be special to it. Most massive stars arise in binary systems and there are hints this may have been true for Betelgeuse despite its current apparently solo state, which seems typical of SN IIP. Betelgeuse shows a 420-d period that is most likely a first over-tone radial pulsation mode and variance on time-scales of 2000 d that is associated with overturn of convective plumes. Then, just to keep us guessing, Betelgeuse staged the "Great Dimming" of 2019/2020, the detailed origin of which is still debated. Figure 1 gives some sense of scale of Betelgeuse. Despite the relatively small distance from Earth, and in some sense because of it, it has been difficult to obtain tight constraints on the distance, luminosity, radius, current and Zero Age Main Sequence (ZAMS) masses, and information about the internal rotational state and associated mixing and hence on the evolutionary state of Betelgeuse and when it might explode. The best current guess is that Betelgeuse is in core helium burning and will not explode for about a hundred thousand years, but it will be a tremendous spectacle from the Planet Earth when it does. ## 2 Observations Valuable summaries of basic observational properties of Betelgeuse are given by Dolan et al. (2016) and Joyce et al. (2020). Here we summarize some key aspects. ### Distance Even recently the distance to Betelgeuse has been known to only 20% (\(D\approx 197\pm 45\) pc; Harper et al., 2008, 2017), a situation that was not improved by the Gaia mission that provided accurate parallaxes but that saturates on such a bright star or is rendered less certain by transient star spots (Chiavassa et al., 2022). Key properties such as radius and luminosity were thus significantly uncertain, \(R\) to within 20% and \(L\) to only 40%. Estimates of mass that determine the evolution depend sensitively on \(L\) and \(R\) and thus also remained uncertain. The effective temperature that can be determined independent of distance has its own intrinsic uncertainties. Dolan et al. (2016) estimated \(T_{eff}=3500\pm 350\) K. Within these uncertainties, models of contemporary Betelgeuse could be brought into agreement with observations of \(L\), \(R\), and \(T_{eff}\) all the way from the minimum-luminosity base of the giant branch to the tip of the red supergiant branch (RSB) (Wheeler et al., 2017). Recent work has proposed ways to reduce the uncertainly in distance, but with conflicting solutions converging on either the base (SS2.13) or the tip of the RSB (SS2.12). ### Spatial Resolution A special characteristic of Betelgeuse is that its relatively small distance allows its surface to be spatially resolved with appropriate instrumentation as shown in Figure 2. 
The photosphere of Betelgeuse subtends an angle of \(\sim 40\) milliarcseconds that can be resolved with ground-based interferometry in the optical and infrared (Haubois et al., 2009; Montarges et al., 2016; Lopez Ariste et al., 2022) and submillimeter (O'Gorman et al., 2017; Kervella et al., 2018; Haubois et al., 2019) or from space with the Hubble Space Telescope (HST) (Gilliland & Dupree, 1996; Uitenbroek et al., 1998). Gilliland & Dupree (1996) resolved Betelgeuse spatially by obtaining images with the HST Faint Object Camera in 10 resolution elements across the surface. They found the ultraviolet diameter of Betelgeuse to be \(108\pm 4\) mas, a factor of 2.2 larger than the optical diameter, suggesting an extended chromosphere in analogy to the hot temperature inversion in the Sun. A single bright, unresolved area was 200 K hotter than the mean value. Figure 1: Schematic showing the scale of the red supergiant Betelgeuse and its circumstellar medium compared to that of the Solar System (AU = Astronomical Units). Art by L. Calçada, by permission of the European Southern Observatory. Gilliland & Dupree (1996) suggested this surface inhomogeneity might be due to magnetic activity, atmospheric convection, or global pulsations that produce shock structures that heat the chromosphere. Spatially resolved spectroscopy with the Goddard High Resolution Spectrograph suggested the complicated dynamics of outflowing material in the chromosphere (Lobel & Dupree, 2001). Haubois et al. (2009) undertook H-band interferometry with the Infrared-Optical Telescope Array (IOTA) at the Whipple Observatory to measure the diameter (\(44.28\pm 0.15\) mas), effective temperature (\(3600\pm 66\) K), limb darkening, and bright or dark patches in the photosphere and surroundings. Montarges et al. (2016) did H-band interferometry on the VLT to explore mass loss driven by strong convective motions by mapping the shape of the envelope and following the structure of the wind from the photosphere out through the nearby circumstellar medium and into the interstellar medium. They detected a hot spot on the photosphere comparable in size to the radius of the star. O'Gorman et al. (2017) used submillimeter observations with the _Atacama Large Millimeter Array_ (_ALMA_) to study the free-free emission in the extended atmosphere of Betelgeuse. They found that the mean temperature at 1.3 stellar radii was 2760 K, a value that is less than both the photospheric temperature they gave as \(T_{eff}=3690\) K and the temperature at 2 stellar radii, implying an inversion of the mean temperature in the atmosphere. The emission showed evidence for inhomogeneous localized heating in the atmosphere of Betelgeuse, perhaps related to magnetic activity generated by large-scale convection. We will return to the power of interferometry in §4. ### Convection and Plumes The extended outer envelope of Betelgeuse engenders appreciable superadiabatic temperature gradients that lead to strong convection (Schwarzschild, 1975). Both direct observations (Gilliland & Dupree, 1996; Uitenbroek et al., 1998; Haubois et al., 2009; Dupree & Stefanik, 2013; Montarges et al., 2016; O'Gorman et al., 2017; Kervella et al., 2018; Haubois et al., 2019; Lopez Ariste et al., 2022) and models (Chiavassa et al., 2010; Goldberg et al., 2022) indicate that the convective structure of the envelope of Betelgeuse is characterized by large plumes of upwardly rising hot material and inwardly cascading cooler material. 
The plumes in turn lead to hot and cold patches on the surface that are substantially large compared to the radius of the star (Montarges et al., 2016). This leads to complications in determining basic quantities like the global effective temperature (Levesque & Massey, 2020). ### Atmosphere, Photosphere, Chromosphere Driven by the irregular convective plumes, the outer layers of Betelgeuse reveal a complex atmospheric structure as the optically-thick convective envelope yields to a wavelength-dependent and position-dependent photosphere and chromosphere (Bernat & Lambert, 1976; Lim et al., 1998; Plez & Lambert, 2002; Montarges et al., 2016; O'Gorman et al., 2017; Lopez Ariste et al., 2022). O'Gorman et al. (2017) established a temperature inversion between the photosphere and chromosphere (SS2.2). ### Mass and Mass Loss The ZAMS mass is a fundamental property that determines the evolution of a star. In the case of Betelgeuse, the uncertainty in distance and other factors yields intrinsic uncertainty in the ZAMS mass. Betelgeuse qualifies as a massive star, but estimates of the ZAMS mass vary from 10 to 25 M\({}_{\odot}\). In recent estimates, Dolan et al. (2016) gave 17 - 25 M\({}_{\odot}\) whereas Joyce et al. (2020) found 18 - 21 M\({}_{\odot}\). This mass range destines Betelgeuse to succumb to iron core collapse and a likely catastrophic explosion. Direct collapse to a black hole is a remote possibility (SS3). The subsequent evolution of Betelgeuse is not determined solely by its ZAMS mass, but also depends on abundances, rotation, stellar winds, the presence of a binary companion, and the possibility of a merger. Mass loss on the main sequence is estimated to be less than 0.1 M\({}_{\odot}\), a small effect compared to other uncertainties. Harper et al. (2001) and Le Bertre et al. (2012) determined the current mass loss rate to be \(\sim 1-4\times 10^{-6}\) M\({}_{\odot}\) y\({}^{-1}\). Dolan et al. (2016) adopted \(2\times 10^{-6}\) M\({}_{\odot}\) y\({}^{-1}\). Estimates of the current wind velocity of Betelgeuse range from 3 to 14 km s\({}^{-1}\). Dolan et al. (2016) adopted a range of \(9\pm 6\) km s\({}^{-1}\)(their Table 5). Figure 2: Spatially-resolved H band image of Betelgeuse. From Haubois et al. (2009) by permission of X. Haubois, ESO/Observatoire de Paris, and Astronomy & Astrophysics. The wind accelerates, so a single wind velocity may not be appropriate. As for other RSGs, mass loss from Betelgeuse in its current configuration is episodic (Decin et al., 2012; Massey et al., 2023), a factor often neglected in prescriptions for mass loss rates. This variability is probably linked to the sporadic convective plumes and to the intrinsic pulsational properties. Magnetic fields may play a role (SS2.11). Lopez Ariste et al. (2022) sought to understand convection and the mechanisms that trigger mass loss by using linear spectropolarimetry of the atomic lines to provide velocity and hence depth information in addition to spatial distribution. The result was images of the photosphere of Betelgeuse that provide information about the 3D distribution of brightness in the atmosphere. The data revealed the velocity of vertical convective flows at different heights in the photosphere that showed that non-gravitational forces are present in the photosphere of Betelgeuse that allow plasma to reach velocities close to the escape velocity. These forces may trigger mass loss and sustain large stellar winds. 
Humphreys & Jones (2022) argue that Betelgeuse gives evidence for discrete, directed clumpy outflows as suggested by circumstellar gas knots detected in the submm region that are related to magnetic fields and surface activity. They argue that this clumpy outflow analogous to solar coronal mass ejections is a major contributor to mass loss from RSGs, including Betelgeuse. ### Molecules and Dust Plasma recombining to gas continues to cool as it is ejected from the surface of Betelgeuse. If it gets sufficiently cool, the gas can form molecules through complex non-equilibrium chemistry. The molecules can then serve as nucleation sites where inorganic dust can form. Dust grain surfaces in turn provide an environment to form yet other molecules. Jennings & Sada (1998) discovered water in the atmosphere of Betelgeuse. Tsuji (2000) confirmed the presence of water in data taken 35 years previously with the balloon-borne telescope Stratoscope II (Woolf et al., 1964). He proposed a molecular shell, a MOLsphere, in the atmosphere of Betelgeuse. Perrin et al. (2007) subsequently identified a geometrically thin shell between 1.31 and 1.43 \(R_{star}\) with a typical temperature of 1550 K that contained H\({}_{2}\)O, SiO, and Al\({}_{2}\)O\({}_{3}\). Ohnaka et al. (2009, 2011) spatially resolved the macroturbulent gas motion in the photosphere and MOLsphere of Betelgeuse for the first time. Models presented by Harper et al. (2001) suggested that dust formed at about 33 \(R_{star}\) at a temperature of \(\sim 360\) K. Kervella et al. (2018) argued that convective cells lead specifically to the production of molecular plumes and dusty knots in the north polar region of Betelgeuse. Related notions came to the fore during the Great Dimming of 2019/2020 (SS4). Haubois et al. (2019) did near-IR interferometry to explore the connection between dust formation and mass loss from Betelgeuse. They found a halo of fosterite (Mg\({}_{2}\)SiO\({}_{4}\)) dust beginning about 0.5 \(R_{star}\) above the photosphere, much lower than suggested by the models of Harper et al. (2001). The height of molecule and dust formation may vary inhomogeneously over the surface of Betelgeuse. ### Surface Gravity The gravitational acceleration at the surface of Betelgeuse, the surface gravity, \(g=GM/R^{2}\), provides an independent constraint on the ratio \(R/M\). This quantity is determined from the analysis of line structure in the photosphere and is typically presented as the logarithm in base 10 of \(g\) measured in the cgs system. Lambert et al. (1984) observed forbidden O I lines, vibration-rotation bands of second-overtone CO near 1.6 micron, NH bands between 3 and 4 microns, OH fundamental bands near 3 microns, and CN red lines near 8000 A and 2 microns, and employed sophisticated model atmospheres designed for supergiant stars. For Betelgeuse, Lambert et al. (1984) adopted \(log~{}g=0.0\pm 0.3\). Lobel & Dupree (2000) used near-UV, optical, and near-IR high-dispersion spectra analyzed with non-LTE radiative transfer calculations to obtained \(log~{}g=-0.5\) that is somewhat less, even given the nominal uncertainties. Neither Lambert et al. (1984) nor Lobel & Dupree (2000) considered the plume structure of the envelope and departures from spherical symmetry. Neilson et al. (2011) employed limb-darkening laws and grids of spherical model stellar atmospheres to determined \(R/M=82^{+13}_{-12}\) R\({}_{\odot}\)/M\({}_{\odot}\). From their best-fitted models, Dolan et al. 
(2016) obtained \(R/M=40\) R\({}_{\odot}\)/M\({}_{\odot}\), substantially less than Neilson et al. (2011), and \(log~{}g=-0.05\) for their Eggleton-based code and \(log~{}g=-0.10\) with the stellar evolution code Modules for Experiments in Stellar Astrophysics (mesa; Paxton et al., 2011, 2013, 2015, 2018). The latter estimates for \(log~{}g\) are roughly consistent with Lambert et al. (1984) but appreciably larger than found by Lobel & Dupree (2000). In principle, the effective gravity at the surface of a star is reduced by the centrifugal effects of rotation that is substantial in Betelgeuse (SS2.8). For a 20 M\({}_{\odot}\) model rotating at velocities typical of Betelgeuse, Wheeler et al. (2017) found \(log~{}g=+0.42\) at the luminosity minimum at the base of the RSG branch and \(log~{}g=-0.48\) during carbon burning when the model had slowed due to envelope expansion. The former is somewhat beyond the upper limit set by Lambert et al. (1984) and the latter in close agreement with the determination of Lobel & Dupree (2000). For their models with a 16 M\({}_{\odot}\) primary merging with a 4 M\({}_{\odot}\) secondary, Chatzopoulos et al. (2020) found post-merger surface gravity for models merging at 300 and 250 R\({}_{\odot}\) to be \(4.67-6.65\) cm s\({}^{-2}\), corresponding to \(log~{}g=0.67-0.82\). There are thus significant uncertainties in both observations and models of \(log~{}g\) for Betelgeuse. Constraints on \(log~{}g\) come into play in considering pulsational properties (SS2.12) and the possibility of a recent color change in Betelgeuse (SS2.13). ### Rotational Velocity The rotation of Betelgeuse at the surface and at depth has implications for estimates of the current age, the current mass, the ZAMS mass, the current evolutionary state, and the time to explosion. Betelgeuse appears to have an anomalously large rotational velocity. Long slit spectroscopy across the minimally resolved disk of Betelgeuse obtained with the _Hubble Space Telescope_ (HST) yielded an estimated surface rotational velocity \(v_{\rm rot}\sin(i)\sim 5\) km s\({}^{-1}\) at an inclination of \(i\approx 20^{\rm o}\)(Dupree et al., 1987; Gilliland and Dupree, 1996; Uitenbroek et al., 1998; Kervella et al., 2009). These data imply an equatorial rotational velocity of \(\sim 15\) km s\({}^{-1}\). The uncertainty in this quantity is itself uncertain. More recent observations appear to further support this result even within the uncertainties imposed by large-scale convective motions on the star's surface. Kervella et al. (2018) used _ALMA_ to resolve the surface velocity and determined that Betelgeuse rotates with a projected equatorial velocity of \(v_{\rm eq}\sin(i)=5.47\pm 0.25\) km s\({}^{-1}\) with an estimated rotation period of \(36\pm 8\) yr (see SS2.8). They confirmed that the chromosphere is co-rotating with the star up to a radius of 1.5 times the continuum radius. They found that the position angle of the polar axis of Betelgeuse coincided with a hot spot in the _ALMA_ data, suggesting that focused mass loss was currently taking place in the polar region. They proposed that a particularly strong convection cell was driving a focused molecular plume that could subsequently condenses into dust at a few stellar radii thus contributing to anisotropic mass loss (SS2.3, SS2.5, SS2.6, SS2.8, SS4). 
High rotation during the supergiant phase is not found in stellar evolution calculations of single massive stars (SS3.1) - including those that are rapid rotators at the Zero Age Main Sequence (ZAMS) - nor expected by simple arguments of angular momentum conservation (Wheeler et al., 2017). Single massive stars lose a fraction of their mass and angular momentum through winds already during the main sequence (MS) phase. O stars with initial rotation velocities of \(\sim 200\) km s\({}^{-1}\) evolve through rapid mass and angular momentum losses to become much slower rotating B stars with \(v\sin i\leq 50\) km s\({}^{-1}\)(Maeder and Meynet, 2000; Higgins and Vink, 2019 and references therein). Simple analytic arguments (Chatzopoulos et al., 2020) and stellar evolution calculations (Claret and Gimenez, 1989) suggest that a star rotating at \(\sim 200\) km s\({}^{-1}\) at the ZAMS is likely to decrease to \(\leq 50\) km s\({}^{-1}\) at the Terminal Age Main Sequence (TAMS). Similar estimates and detailed simulations of the evolution of massive stars, including mass and angular momentum losses from the ZAMS to the supergiant stage typically yield an upper limit to the equatorial rotational velocity of \(v_{\rm eq}<1\) km s\({}^{-1}\) on the RSB (Ekstrom et al., 2008, 2012; Brott et al., 2011, 2011). Measurements of giant and supergiant star rotation rates support this argument (Ceillier et al., 2017). Wheeler et al. (2017) and Chatzopoulos et al. (2020) found a velocity of \(\sim 0.1\) km s\({}^{-1}\) high on the RSB. _Kepler_ observations of low-mass giant stars (\(<3\)M\({}_{\odot}\)) showed 17 with rotational speeds up to \(\sim 18\) times that of the Sun (Costa et al., 2015). It is possible that a yet unknown mechanism, perhaps transfer of angular momentum from inner regions by g-mode acoustic waves (SS2.14), could account for this rapid rotation (Fuller et al., 2014; Townsend et al., 2018), but it is not clear that even such mechanisms can account for the rotation of a massive RSG like Betelgeuse. Taken at face value, Betelgeuse is thus rotating too rapidly by a factor \(\sim 15\) and perhaps as much as 150 compared to basic single-star models high on the RSB (Wheeler et al., 2017; Chatzopoulos et al., 2020; Joyce et al., 2020). Models of Betelgeuse on the RSB give a critical Keplerian velocity of \(\sim 65\) km s\({}^{-1}\)(Wheeler et al., 2017); the observed rotational velocity is thus a substantial fraction of the escape velocity. Such a rotation may cause measureable oblateness that could complicate interpretation of the observations (Tatebe et al., 2007; Haubois et al., 2009). There are concerns that the deduced rotational velocity is not correct, perhaps confused by the complex large scale convective flows at the photosphere. Gray (2001) found a macroturbulence Gaussian dispersion \(\sim 15\) km s\({}^{-1}\) with a FWFM of \(\sim\pm 50\) km s\({}^{-1}\) consistent with many convection cells appearing on the stellar disk but with no evidence for giant convection cells. More recently, Lopez Ariste et al. (2018) found characteristic upflow and downflow speeds of 22 and 10 km s\({}^{-1}\), respectively. Jadlovsky et al. (2023) argued that the projected rotational velocity \(v_{\rm rot}\sin(i)\) is not trustworthy, as both edges of Betelgeuse seem to be moving towards Earth at a similar velocity. An accurate measurement of the equatorial rotational velocity of Betelgeuse is important in order to constrain models. 
Single-star rotating models give \(v_{rot}\sim 15\) km s\({}^{-1}\) only in a brief phase near the base of the RSB that would last for a few thousand years at most. It is conceivable that Betelgeuse might currently reside in this portion of the Hertzsprung Russell Diagram (HRD) by appropriately pushing \(3\sigma\) error bars on \(R\), \(L\), and \(T_{eff}\)(Wheeler et al., 2017). The historical color changes of Betelgeuse characterized by Neuhauser et al. (2022) may demand that Betelgeuse is currently in this lower portion of the HRD where massive stars can change \(T_{eff}\) on timescales of 1000 y (SS2.13). This conclusion conflicts with the results from the careful study of the pulsation period given by Joyce et al. (2020) that places Betelgeuse higher on the RSB (SS2.12). One possibility to account for the high rotation velocity is that Betelgeuse has undergone a merger as it expanded and evolved up the RSB (SS3.3). Another pathway to form a rapidly-rotating supergiant is presented in de Mink et al. (2013). They propose that Case A Roche lobe overflow mass transfer from a \(\sim 20\) M\({}_{\odot}\) primary is enough to spin up a \(\sim 15\) M\({}_{\odot}\) secondary to high rotational velocity if the transfer occurs right after the TAMS before the ascent up the RSB (their Figure 2). This possibility requires considerable fine tuning of the binary evolution parameters and the timing of the onset of mass transfer. Both the merger model and the Case A transfer model should be examined for testable observational consequences. ### Observed Abundances Photospheric abundances are yet another clue to the evolutionary history and state of Betelgeuse. The measured N/C (nitrogen to carbon) and N/O (nitrogen to oxygen) surface abundance ratios for Betelgeuse are 2.9 and 0.6, respectively, compared to solar values of N/C=0.3 and N/O=0.1 and the ratio \({}^{12}C/^{13}C\) is much lower than solar (Lambert et al., 1984). These ratios vary as massive stars burning hydrogen on the CNO cycle settle into CNO equilibrium, with N being produced at the expense of C and O. CN-equilibrium is achieved before an inhibiting gradient in mean molecular weight is established between the core and the envelope, so the excess N can quickly be transported to the stellar surface thus producing large N/C ratios. Full CNO-equilibrium is achieved only after significant hydrogen burning, so surface O depletion only occurs later. The observation of enhanced nitrogen at the surface of Betelgeuse may be indicative of enhanced mixing, perhaps triggered by rotation (Meynet et al., 2013). The effects of rotational mixing are more pronounced at lower metallicity, higher ZAMS mass, and higher rotational velocity (Brott et al., 2011). Rotational mixing may need to be supplemented by other effects such as binary evolution and magnetic fields to understand the abundance distributions in evolved massive stars (Brott et al., 2011). Luo et al. (2022) have used surface abundances to constrain the nature of Betelgeuse in terms of initial mass, rotation, and overshoot. They find the acceptable range of ZAMS masses is slightly larger for rotating models than non-rotating models, 12 - 25 M\({}_{\odot}\) versus 15 - 24 M\({}_{\odot}\), respectively. They find that the initial rotation on the ZAMS must be restricted to 0.3 of the Keplerian velocity in order to fit the surface abundances of Betelgeuse as an RSG and find that some of their models could be in the phase of carbon burning or beyond. 
The observed abundances in Betelgeuse are consistent with material that has been mixed to the surface in the first dredge-up phase when the convective hydrogen envelope penetrates the helium core (Lambert et al., 1984; Dolan et al., 2016). This constrains Betelgeuse to have passed the base of the RSB and to be ascending the RSB, consistent with the results of Joyce et al. (2020) but perhaps in contradiction with the conclusions of Neuhauser et al. (2022) (SS2.13). ### Kinematics, Nearby CSM, ISM, Bowshocks In addition to perhaps being a rapid rotator, Betelgeuse is also a known runaway star with a measured space velocity of \(\sim 30\) km s\({}^{-1}\) and a kinematic age of \(\sim 7\)-11 Myr (Harper et al., 2008, 2017). As shown in Figure 3, the flight of Betelgeuse through the interstellar medium is also illustrated by _HST_ and _Herschel_ observations of a bow shock forming a swept-up shell of material of \(\sim 0.14\)\(M_{\odot}\) at a radius of \(\sim 6\)-7 arcmin corresponding to a physical distance of \(\sim 0.8\) pc using a distance to Betelgeuse of \(\sim 400\) pc (Noriega-Crespo et al., 1997; Decin et al., 2012) (current estimates of the distance are less by a factor of two or three; SSSS2.1,2.12). The prominent bow shock is in the same direction as the kinematic motion, indicating a peculiar velocity with respect to the local standard of rest of \(v\approx 25\) km s\({}^{-1}\)(Harper et al., 2008) or perhaps as much as 35 km s\({}^{-1}\)(van Loon, 2013). The morphology of this structure is attributed to wind from the star sweep Figure 3: Structure in the large scale CSM surrounding Betelgeuse observed by the _Herschel_ mission. Note the prominent bow shock at 7 arcmin that is in the direction of the spatial velocity of Betelgeuse. From Decin et al. (2012). Adapted by permission of L. Decin and Astronomy & Astrophysics. ing up interstellar medium in the direction of motion (Mohamed et al., 2012; Decin et al., 2012; Mackey et al., 2014). The observations also show a smaller ring of material with a diameter of about 4 arcmin (Le Bertre et al., 2012). One explanation is that this is wind mass that is radiation-impeded by external radiation (Mackey et al., 2014). There is also an odd, very linear feature about 9 arcmin away, beyond the bow shock, that remains unexplained (Noriega-Crespo et al., 1997; Decin et al., 2012). Wheeler et al. (2017) noted that a merger event might have some relation with the interstellar shells of higher density in the vicinity of Betelgeuse. The strangely linear feature at 9 arcmin might be related to the square axisymmetric circumstellar nebula recently discovered around the B9 Ia star HD93795 by Gvaramadze et al. (2020). Such a connection might in turn suggest that Betelgeuse had undergone some previous mass expulsion. Proposals to account for the high space velocity of Betelgeuse include multi-body stellar interactions in its birth cluster and the possibility that a binary companion underwent a supernova explosion (Blaauw, 1961; van Loon, 2013). In a study of the 30 Doradus region of the Large Magellanic Cloud, Sana et al. (2022) conclude there are two different populations of massive runaway Main Sequence O stars: a population of rapidly spinning (\(v_{\rm eq}\sin(i)>200\) km s\({}^{-1}\)) but slowly moving (\(v=25-60\) km s\({}^{-1}\)) runaway stars and a population of slowly rotating (\(v_{\rm eq}\sin(i)<200\) km s\({}^{-1}\)) rapidly moving (\(v>60\) km s\({}^{-1}\)) stars. They found no rapidly spinning, rapidly moving stars in their sample. 
Sana et al. (2022) argue that slowly moving rapidly spinning stars result from binary ejections, while rapidly moving slowly spinning stars result from dynamical ejections, with slowly moving rapidly spinning stars and hence binary evolution dominating the current massive runaway star population in 30 Doradus. Betelgeuse nominally belongs in the slowly moving rapidly spinning runaway category. Backwards extrapolation of the current trajectory of Betelgeuse has led some to suggest that its possible birthplace is the Orion OB1a association (Briceno et al., 2005). Others have argued that a backward extrapolation of its known space velocity does not appear to bring Betelgeuse close to any plausible sub-association of OB1 as its birth place (Bally, 2008). (Bally, 2008) suggests a two step process: (1) a dynamical ejection of a binary within the first few million years after the formation of Betelgeuse's birth cluster, and (2) a subsequent merger of the binary or a supernova explosion of the more massive component, releasing the surviving now single Betelgeuse at some post MS stage of its evolution. Work on the kinematic effects of supernovae in massive star binary systems tends to discourage the conjecture of the previous explosion of a companion to Betelgeuse. Renzo et al. (2019) confirm that of order 20 - 50% of massive star binaries merge rather than undergoing disruption. They also find that by far the largest fraction of binaries disrupted by the collapse and explosion of the primary result in "walkaway" rather than "runaway" stars. The velocity distribution of the ejected companion peaks at about 6 km s\({}^{-1}\). For secondaries more massive than 15 M\({}_{\odot}\), as likely applies to Betelgeuse, only \(\sim 0.5\%\) have velocities of 30 km s\({}^{-1}\) and above, as appropriate to Betelgeuse. These results suggest that, while non-zero, the likelihood that the space motion of Betelgeuse resulted from the previous explosion of a companion is small. The results depend on assumptions about primordial binaries, among other things, but the general result is that it is easier to generate walkaway stars than runaway stars. A runaway binary is likely to be rare, but is not precluded. As discussed above, a reasonable alternative is that the proper motion of Betelgeuse arises from stellar dynamics in its natal cluster (Poveda et al., 1967; Oh & Kroupa, 2016; Schoettler et al., 2019). Early ejection as a single star either by the disruption of a cluster binary or dynamical escape from a cluster are unlikely to yield a rapid rotator in the present supergiant stage. Even if spun up on the ZAMS, its rotation on the RSB would be slow. If a previous binary companion exploded, then it clearly could not have merged with the current Betelgeuse as discussed in SS3.3. The origin of the space motion of Betelgeuse is thus one more fascinating open question about this tantalizing star. Whether Betelgeuse attained its proper motion from the explosion of a companion or from cluster dynamics, if it emerged as a single star then the apparent observed equatorial velocity remains an issue. A possible way to account for both the space motion and the equatorial velocity would be to provide the space motion by cluster dynamics and ejection of a binary, of which the star we currently observe as Betelgeuse was the primary, and a subsequent merger along the RSB. This is, admittedly, an improbable string of events. 
Oh & Kroupa (2016) find that a majority of ejected massive binaries have a period shorter than \(10^{5}\) days. Supergiant branch merger models have a typical presumed orbital period of about 30 years or \(10^{4}\) days (Wheeler et al., 2017). Having a rather massive companion might increase the likelihood that the binary remains intact upon ejection from the natal cluster. Current results allow for that possibility. We note that while Betelgeuse may have moved hundreds of pc during its main sequence lifetime, it is expected to have moved only \(\sim 2\) pc during the 100,000 years or so it has been in core helium burning as a RSG. ### Magnetic Fields The atmosphere of Betelgeuse is observed to harbor magnetic fields of \(\sim 1\) G as as measured by circular polarization (Mathias et al., 2018) and as inferred from the Zeeman effect (Auriere et al., 2010). These fields are thought to originate from local low-scale convective ac tivity and associated non-linear dynamo action in Betelgeuse or perhaps from giant convective cells on its surface (Dorch, 2004). We have noted earlier that magnetic fields may play a role in localized hot spots on the surface of Betelgeuse, in the formation of the chromosphere, and in clumpy mass loss. Thirumalai and Heyl (2012) addressed the effects of magnetic fields on winds, dust, and the structure of the photosphere and chromosphere of Betelgeuse. ### Pulsation Periods We noted in SS2.7 that measurement of the surface gravity provides a constraint on \(R/M\), given an independent measurement of \(R\). Stellar pulsation modes also depend on gravity, giving yet another constraint on \(R/M\). Betelgeuse displays a range of periodic behavior. Studies of Betelgeuse have long revealed a variety of pulsation modes. Of particular value is the record of optical photometry compiled by amateurs and professionals for nearly 100 years and recorded by the American Association of Variable Star Observers (AAVSO). These data reveal at least two different timescales, \(\sim 388\) d and a "long secondary period" (LSP) of \(\sim 2050\) d (5.6 yr) (Kiss et al., 2006; Chatys et al., 2019). The LSP might be related to the rotation, but the rotation period is apparently significantly longer (SS2.8). Joyce et al. (2020) analyzed the most recent \(\sim 40\) years of data from the AAVSO complemented with data incidentally produced by Solar Magnetic Ejection Imager (SMEI) observations. They find periods of \(185\pm 13.5\) d, \(416\pm 24\) d, and \(2365\pm 10\) d, cautioning that these periods could evolve with time. U-band observations are relatively rare. Ogane et al. (2022) presented 23 years of UBVRI data obtained at the private Ogane Hikari Observatory and found periods of \(\sim 405\) d and 2160 d. (Jadlovsky et al., 2023) presented an analysis of spectroscopic and photometric variability in the UV and optical, finding photometric periods of \(417\pm 17\) d and \(2190\pm 270\) d and radial velocity periods from spectroscopy of \(415\pm 11\) d and \(2510\pm 440\) d. The radial velocity determined from ultraviolet spectra show longer periods of variability that may be related to the outflowing wind. Models of RSGs show that pressure-mode or p-mode radial pulsations can be driven by the opacity or \(\kappa\)-mechanism in the hydrogen ionization zone. 
In this mechanism, the opacity varies out of phase with the luminosity, being lower when the star is compressed and hot releasing radiant energy and allowing more compression and higher when the star expands and cools thus blocking radiant energy and driving more expansion. Simulations yield mass-dependent periods of the fundamental of years in both linear and nonlinear models (Li and Gong, 1994; Heger et al., 1997; Yoon and Cantiello, 2010; Paxton et al., 2013; Dolan et al., 2016; Goldberg et al., 2022). Modeling pulsation processes may require 3D, time-dependent convection, or otherwise more sophisticated physical formalisms that are beyond the scope of typical 1D stellar evolution programs, but 1D analyses already provide useful insights. Joyce et al. (2020) used 1D hydrodynamical models and the GYRE pulsation module of mesa to analyze the pulsations and provide new constraints on \(R/M\) for Betelgeuse. They deduced that the 416 day period represents oscillation in the fundamental mode, driven by the opacity mechanism, and that the 186 day period represents the frequency of the first overtone of radial pulsations. Joyce et al. (2020) also used the period information to provide a tighter constraint on the radius of Betelgeuse, \(R=750^{+62}_{-30}\) R\({}_{\odot}\) (\(3\sigma\)), compared to the previous estimate of 887 R\({}_{\odot}\). Surprisingly, this led to a tighter constraint on the distance and parallax than previous methods, \(D=165^{+16}_{-8}\) pc with \(<\)10% uncertainty compared to the previous estimate of 197 pc with 20% uncertainty, and tighter constraint on the ZAMS mass, 18 - 21 M\({}_{\odot}\) and the current mass, 16.5 - 19 M\({}_{\odot}\). Joyce et al. (2020) do not give a surface gravity to compare with atmospheric observations (SS2.7). They give an extensive discussion of model degeneracies that make estimates of L and \(T_{eff}\) uncertain. With the new constraints, Joyce et al. (2020) concluded that Betelgeuse is in core helium burning, with \(\sim\) 100,000 years to go before explosion. ### Recent Change in Color? Another approach to determining the mass, luminosity, radius, distance, effective temperature, age, and current evolutionary state of Betelgeuse is to study the color evolution from the historical record. In a recent rigorous analysis of extensive multi-cultural historical literature including Tycho Brahe's comparison of Betelgeuse to his supernova of 1572 and to Aldebaran, Neuhauser et al. (2022) [and new summary in Astronomy and Geophysics] argue that Betelgeuse has significantly changed color over the last two millennia. Contemporary Betelgeuse is, as can be verified by casual naked eye observation, red, with a formal color of \(B-V=1.78\pm 0.05\) mag. Neuhauser et al. (2022) argue that 2000 years ago Hyginus in Rome reported Betelgeuse to have a color similar to Saturn that is equivalent to \(B-V=1.09\pm 0.16\) mag and that Sima Qian in China independently reported Betelgeuse to be "yellow," a condition that Neuhauser et al. (2022) quantify to be \(B-V=0.95\pm 0.35\) mag. Neuhauser et al. (2022) estimate that these historical estimates of color differ from the contemporary color by 5.1\(\sigma\). In contrast, Antares has always been reported as red for over 3000 yr. Taken at face value, this color change of Betelgeuse represents a strong constraint on evolutionary models. Neuhauser et al. (2022) compare their estimates of historical and contemporary colors of Betelgeuse to the mesa Isochrones and Stellar Tracks (MIST) of Choi et al. (2016). 
They deduce that Betelgeuse is likely to be near the cool end of the Herzsprung Gap and less than 1000 yr past the minimum of the RSB when relatively rapid changes in color are expected. Neuhauser et al. (2022) specifically argue that the color evolution and location in the color-magnitude diagram constrain the ZAMS mass to be \(\sim 14\) M\({}_{\odot}\) with a current age of \(\sim 14\) Myr. This deduction is in distinct contrast with the location in the Hertzsprung-Russell Diagram, the ZAMS mass (18 - 21 M\({}_{\odot}\)), and the evolutionary state deduced by Joyce et al. (2020). In their study of the rotation of Betelgeuse, Wheeler et al. (2017) noted that the radius increases and the surface velocity plummets as models proceed across the Hertzsprung gap and up the RSB. The only position in the Hertzsprung-Russell Diagram for which single star models could plausibly give the observed equatorial rotation of \(\sim 15\) km s\({}^{-1}\) (SS2.8) is when the models first approach the base of the red supergiant branch (RSB), having crossed the Hertzsprung gap but not yet having ascended the RSB. This condition is similar to that deduced by Neuhauser et al. (2022). Wheeler et al. (2017) argued that because that phase is so short (\(\sim 100\) yr), that possibility is highly unlikely. Rather, they suggested, Betelgeuse may have been in a binary system that merged (SS3.3), producing the observed rotation near the upper tip of the RSB, the condition deduced by Joyce et al. (2020). If Neuhauser et al. (2022) are correct in their interpretation of the historical data, their results are a challenge to models, including merger models, that attempt to place contemporary Betelgeuse in the upper reaches of the RSB. At this writing, the conflict between Joyce et al. (2020) and Neuhauser et al. (2022) is unresolved. Wheeler et al. (2017) noted that a solution near the base of the RGB, as advocated by Neuhauser et al. (2022), would yield an excessively large surface gravity, \(log~{}g\approx+0.42\) (SS2.7). This may mitigate against the solution of Neuhauser et al. (2022), but a proper resolution would involve identifying a flaw in either the analysis of Joyce et al. (2020) or that of Neuhauser et al. (2022). Once again, an important factor is the distance. Neuhauser et al. (2022) favor a distance of \(151.5\pm 19\) pc as determined from Hipparchos data (van Leeuwen, 2007) rather than greater distance of \(197\pm 45\) pc determined by Harper et al. (2008), for which they consider the _ALMA_ distance less certain. With the larger distance, Neuhauser et al. (2022) find a ZAMS mass of 17 or 18 M\({}_{\odot}\), closer to the result of Joyce et al. (2020). On the other hand, Joyce et al. (2020) favor a distance of \(\sim 165\) pc, closer to the preferred value of Neuhauser et al. (2022) despite their other disagreements. More accurate determinations of the surface gravity by spectral analysis and modeling would also be useful. Given the uncertainties, it is possible that Neuhauser et al. (2022) and Joyce et al. (2020) could be brought into agreement in terms of ZAMS mass, L, and \(T_{eff}\) but still disagree on the corresponding evolutionary state, near the end of the Hertzsprung Gap, or substantially up the RSB. Another possibility is that other surface activity analogous to the recent Great Dimming (SS4) may have caused color changes. It would be interesting if such a possibility could be ruled out. ### Asteroseismology Section 2.12 dealt with the fundamental global pulsation properties. 
There could, in principle, be other temporal signals coming from the depths of Betelgeuse that give yet more evidence of the inner structure and evolution, perhaps of unorthodox evolution such as a merger. Over the last two decades there has been tremendous progress in using the technique of asteroseismology to explore the depths of stars from the Sun to evolved giants. High precision \(\mu\)-magnitude space-based photometry from the _CoRoT_ and _Kepler_ missions showed complex but interpretable variations due to acoustic signals arising from deep within stars that is analogous to exploring the core of the Earth with seismic signals (Aerts et al., 2010). Study of these signals revealed the inner rotation of the Sun and understanding of the structure, rotation, and inner magnetic field distribution of thousands of stars from the ZAMS to the red giant branch, especially those of low mass that are technically easier to analyze. The question then arises as to whether such asteroseismology techniques can be applied to Betelgeuse and other RSB stars. The potential is great. The inner structure of evolved massive stars is suspected to yield complex convective regions that will generate acoustic signals in the form of pressure modes and gravity waves. These should get especially intense late in the evolution near core collapse when the convective timescales become comparable to the nuclear burning timescales (Arnett & Meakin, 2011; Couch et al., 2015; Chatzopoulos et al., 2016). Convective regions should hammer on the inside of the star with increasing violence and decreasing timescale as the star nears core collapse. In practice, it is difficult to do asteroseismology of RSB stars because typical oscillation periods are long and because the oscillations are affected by complex processes in the atmosphere and wind (SSSS2.3, 2.4, and 2.5) that affect the boundary conditions employed in the analysis but that are not well understood (Aerts, 2015). In addition, Betelgeuse is too bright to study with traditional telescopes on the ground or in space due to instrument saturation. In principle, asteroseismology could be used to determine the evolutionary stage of Betelgeuse since interior acoustic activity should get more intense with time and carry signals specific to certain stages of evolution, especially oxygen and silicon burning in the years or days before core collapse. The added mass and angular momentum and associated plumes and mixing might give evidence of a merger (SS3.3). The key question is whether some of that acoustic power reaches the surface. Could one see small perturbations on the surface of Betelgeuse given the extensive convective envelope? The potential of asteroseismology to glean an understanding of the interior structure of Betelgeuse in particular and RSG in general has been explored theoretically. Following Shiode and Quataert (2014), detailed stellar models can be used to estimate characteristic acoustic frequencies driven by inner convection as \(\omega=v_{conv}/H_{p}\), where \(v_{conv}\) is a convective velocity and \(H_{p}\) is an appropriate scale height associated with a given convective region. The outer extended convective envelope has a characteristic cutoff frequency, \(\omega_{cut}=c_{s}/H_{p}\), where \(c_{s}\) is the sound speed, below which any acoustic signal cannot effectively propagate. 
Typical signals from late in the evolution are potentially observable, but the envelope cutoff, propagation efficiency, wave effervescence, damping, and shock dissipation probably muffle all the inner convective noise (Fuller, 2017; Ro and Matzner, 2017; Nance et al., 2018). The largest envelope pressure waves may arise from wave heating during core neon burning and a third carbon shell burning phase a few years before core collapse because later, more intense waves associated with oxygen and silicon burning do not have time to reach the surface before core collapse (Fuller, 2017). The shock dissipation of the acoustic luminosity generated in the very late stages of burning may eject some mass into the CSM (Fuller, 2017; Ro and Matzner, 2017; Morozova et al., 2020). Most of the work on the issues described here have been done with spherically-symmetric codes, albeit ones that can treat angular momentum and its transport. Some work has been done in 2D (Leung and Fuller, 2020), but a more complete understanding probably requires 3D studies (Tsang et al., 2022). Beside effects on late-time ejection of mass from the extended envelope, effective 3D porosity of the envelope may mitigate some of the wave damping effects and allow some asteroseismological signals to percolate to the surface causing diagnostic brightness variations even at earlier evolutionary phases. ## 3 Evolutionary Models ### Single Star Models The evolution of single massive stars, both non-rotating and rotating, has been discussed extensively in the literature (Brott et al., 2011, 2011; Ekstrom et al., 2012; Branch and Wheeler, 2017; Wheeler et al., 2017; Sukhbold et al., 2018; Chatzopoulos et al., 2020) Models of these stars show that hydrogen is burned on the CNO cycle in a convective core yielding a helium core of about 1/3 the original ZAMS mass. The helium core contracts and heats, and a thin hydrogen-burning shells forms at its surface. The shell sits at a node in the structure such that as the core contracts, the outer envelope expands becoming large in radius and convective. Helium eventually ignites in the center, forming a core of carbon and oxygen. Contraction of this core results first in carbon burning and then the burning of heavier elements as the inner core contracts and heats. Shells form burning helium, carbon and other elements. Convection in these shells is expected to produce intense acoustic waves (SS2.14). Near the end of the star's lifetime, a core of silicon forms. Burning of silicon yields a core of iron. Iron is endothermic in terms of its thermonuclear properties. Within days of the formation of the iron core, it will absorb thermal energy from the star, reduce the pressure, and trigger catastrophic dynamical collapse to form a neutron star, or perhaps a black hole. For the case of a neutron star, most likely for Betelgeuse, most of the kinetic energy of collapse will be lost to neutrinos but of order 1% will be deposited in the inner regions, sufficient to cause a violent explosion of the star, ejecting the outer layers, and leaving behind the neutron star (SS5). ### Common Envelope Evolution It has been well established that a majority of O and B stars are in binary systems (Sana et al., 2012; de Mink et al., 2014; Dunstall et al., 2015; Costa et al., 2015; Renzo et al., 2019; Zapartas et al., 2019), so it is a priori likely that Betelgeuse began as a binary system. The implication is that many RSG - including Betelgeuse - that appear to be single now have undergone mergers. 
An important implication of the potential that Betelgeuse arose in a binary system is that Betelgeuse may have undergone common envelope evolution (CEE) sometime during its history. CEE is expected when the two stars in a binary are sufficiently close they interact as the more massive star evolves, expands, fills its Roche Lobe, and transfers mass to its lower-mass companion. In some circumstances, the companion cannot ingest the transferred material as rapidly as the primary loses it, and the excess mass forms a red giant like envelope surrounding the secondary and the evolving core of the primary. The secondary orbiting within the common envelope (CE) will undergo drag and spiral inward toward the evolved core. While the details are complex, there is then a potential for the secondary to merge with the core of the primary (SS3.3). The result could appear to be a single star, but with an inner structure rather different than would be expected of a single star of the same luminosity, radius, and \(T_{eff}\). CEE can result in several types of anomalous mixing within the core and between the core and the surface of the star. The inspiral phase leads to increased equatorial rotation and thus chemical mixing via rotational mechanisms. Plume mixing and nucleosynthesis occur during the moment of the final tidal disruption of the secondary, and merger with the core of the primary will affect the structure of the material inside and around the core of the primary. Details depend on whether the plume mixing is strong enough to rejuvenate hydrogen burning in the core. On longer timescales, rotational mixing can dredge some \(\alpha\)-enhanced material from the inner regions to the surface (SS2.9). Ivanova and Nandez (2016) (see also Morris and Podsiadlowski, 2007; Taam and Ricker, 2010; Ivanova et al., 2013, 2015; MacLeod et al., 2018; Chatzopoulos et al., 2020; Roepke and De Marco, 2022) describe the basic phases of CEE and the mechanisms for treating it in 3D and 1D. There are three stages to the process, each with associated loss of mass and angular momentum: 1) a precursor phase when the stars begin to interact and co-rotation is lost; 2) a plunge-in phase with a large rate of change of orbital separation and a timescale close to dynamical, at the end of which most of the mass of the CE is beyond the orbit of the companion; and 3) a self-regulated slow inspiral of the companion. There are two basic endpoints to CEE: formation of a compact binary system and merger. For mergers, Ivanova and Podsiadlowski (2003a) differentiate three outcomes: a quiet merger, a moderate merger, and an explosive merger. Only the former leaves behind an RSG and hence is pertinent to Betelgeuse. An important aspect of the problem is the deposition of the mass and orbital angular momentum of the secondary. In 3D simulations most of the initial angular momentum of the secondary is deposited in the outer layers of the primary envelope. Mass and angular momentum are lost by dynamical interaction, outflow driven by recombination, and shrinking of the orbit. The surface layers are "shock heated" and quickly ejected prior to the plunge-in (Zhao and Fuller, 2020). The slow inspiral often begins with an envelope that is significantly reduced in mass and angular momentum. In some cases, recombination outflow can eject nearly all the envelope during the slow inspiral. The exception to these cases of extreme mass loss is when the primary is substantially more massive than the secondary. 
For small secondary masses, the fraction of mass lost in the precursor phase and the plunge-in phase is of order \(q\), the mass ratio of secondary to primary. In their treatment of a red giant of modest mass (1.8 M\({}_{\odot}\)), Ivanova and Nandez (2016) find that companions of mass less than 0.10 M\({}_{\odot}\), corresponding to about 5% of the primary mass, undergo merger. The time to merger is about 1000 d, long compared to the dynamical time of the CE but short compared to the thermal or evolutionary time of the primary. While these results do not necessarily scale with mass, this suggests that for many cases of interest here, a companion of about 1 M\({}_{\odot}\) undergoing CEE with a primary of about 20 M\({}_{\odot}\) is likely to quickly undergo merger while sustaining a substantial envelope, as Betelgeuse is observed to have. The plunge-in phase is expected to induce very asymmetric structures and the slow inspiral to yield appreciable departures from spherical symmetry that can be simulated in 3D but are beyond the capacity of 1D models. In 3D there is a significant density inversion in the vicinity of the companion and rather little material near the center of mass of the binary. On the other hand, the 3D simulations often treat the companion star and the red giant core as point sources. In 1D, the primary core, at least, can be modeled in more detail. A 1D code like mesa conserves energy and angular momentum within expected numerical accuracy. mesa also automatically handles energy released by recombination as the envelope expands and the angular momentum is lost in winds. In some 1D simulations of CEE, the companion is treated in a "thin shell" approximation. Chatzopoulos et al. (2020) argue that for massive primaries with mass ratios \(q~{}<q_{\rm blue}\) (where \(0.25~{}<q_{\rm blue}~{}<~{}0.33\)) and initial period, \(P_{\rm i}\), greater than a few tens of days, mass transfer starts in early Case B mass transfer. This situation arises when hydrogen is exhausted in the primary, so the primary has evolved off the main-sequence but not yet ignited helium, and while the secondary is still on the main-sequence. This mass transfer is rapid and results in the primary envelope engulfing the much-lower mass secondary. The secondary spirals inward producing a merger. In this scenario, the helium core of the primary is surrounded by a H-burning shell. When the secondary reaches the critical tidal disruption distance from the core of the primary, a tidal stream will form that transports fresh H fuel toward the core (Ivanova, 2002; Ivanova et al., 2002; Ivanova and Podsiadlowski, 2003b) as shown in Figure 4. Mixing can thus happen if the mass transfer stream can penetrate the core (Ivanova et al., 2002). The depth of penetration of the stream into the core depends on the direction, entropy, width, and angular momentum of the stream, the rotation, orientation, and mass of the secondary, on the density structure and relative rotation of the core, and on fluid instabilities. Figure 4: Density profile from a 2D simulation of a 16 M\({}_{\odot}\)+1 M\({}_{\odot}\) merger occurring at an initial separation of 12 R\({}_{\odot}\) showing the formation of the tidal stream (teardrop shape to the right of center) within the common envelope (outer green and blue green) as the secondary fills its Roche Lobe and is disrupted by the core of the primary (large red dot in the center). From Chatzopoulos et al. (2020). 
The penetration depth of the stream into the core of the primary determines the extent of its rejuvenation; if fresh fuel reaches the core then core H-burning will be re-ignited and the star may evolve toward the blue supergiant (BSG) phase. If, on the contrary, the stream does not penetrate deeply into the core but rather converges with the H-burning shell, then the star will continue to evolve toward the RSG stage. Chatzopoulos et al. (2020) confirm, by using the arguments presented in Ivanova et al. (2002), that none of the models they explored (secondaries in the range 1-4 \(M_{\odot}\) merging with primaries in the range 15-17 \(M_{\odot}\)) undergo stream-core penetration. These results suggest that the "quiet merger" described above is more relevant to the case of Betelgeuse. In that case, the amount of orbital angular momentum depends mostly on the binary separation when the primary overflows its Roche lobe. The total angular momentum deposited in the envelope of the primary depends on the radius of the primary when it engulfs the secondary during its crossing of the Hertzsprung gap. ### Merger Models Of primary interest for Betelgeuse is how and under what circumstances a merged system could end up rotating at \(\sim 23\%\) of the critical velocity, as observations suggest (SS2.8). Merger models provide a reasonable "natural" explanation for why Betelgeuse has a large, but sub-Keplerian equatorial velocity (Wheeler et al., 2017; Chatzopoulos et al., 2020; Sullivan et al., 2020). These results do not prove, but do allow that Betelgeuse might have merged with a lower mass companion. Betelgeuse might look substantially the same whether it merged with a 1 or 10 M\({}_{\odot}\) companion. Joyce et al. (2020) concluded that Betelgeuse merged prior to the later carbon-burning phases, but see Luo et al. (2022). While the hypothesis that Betelgeuse might have merged with a companion is credible and consistent with the a priori estimate that Betelgeuse has a probability of \(\sim 20\%\) of being born in a binary system (de Mink et al., 2014), it raises a number of interesting issues involving common envelope evolution, the fate of the companion and its angular momentum, and effect on the post-merger structure of the primary. The luminosity of an evolved massive star is typically a function of the mass of the helium core and rather independent of the mass of the envelope. If a companion merged with the core of Betelgeuse, then the current luminosity may be a measure of the core mass (\(\sim 5\) to 6 M\({}_{\odot}\)), but the mass of the envelope would be rather unconstrained and probably smaller than the estimates given based on single-star models that attempt to reproduce the luminosity, radius and effective temperature. If there were a coalescence, there would be some mass ejected. The mass lost from the system during the merger may be substantial. The 3D 16\(M_{\odot}\)+4\(M_{\odot}\) merger model of Chatzopoulos et al. (2020) lost 0.5 M\({}_{\odot}\). This model accounted for rotation, but not radiative effects nor recombination. Sullivan et al. (2020) found up to 5 M\({}_{\odot}\) lost. The mass loss is a combination of the loss of mass accreted from the secondary plus loss of mass from the primary itself. The latter is due to winds prior to the accretion event and then the rotationally-induced mass loss after the accretion. A main sequence companion of about a solar mass would have a mean density of about 1 g cm\({}^{-3}\). 
That density is characteristic of the base of the hydrogen envelope in the RSG models, implying that a companion might not be dissolved until it reaches the edge of the helium core (see discussion of common envelope evolution and plume penetration in SS3.2). If the companion merged with the core, the evolution of the primary might be severely altered by anomalous burning and mixing effects, and surface abundances might be affected. Sullivan et al. (2020) used the mesa code to study the merger problem in a rudimentary way that nevertheless gave some insights to the relevant physical processes. They did not attempt to treat the companion as a corporeal entity, but allowed for its effects by adding the relevant mass and associated angular momentum to the outer envelope of the primary, a computational process identified as "accretion" to distinguish it from the more complex behavior of a true merger. The HRD of all the models of Sullivan et al. (2020) were qualitatively similar. The accretion events resulted in irregular transient loci before settling down to a rather normal evolution up the RSB to the point of collapse of the models. The models suggest that the rotation of Betelgeuse could be consistent with a primary of ZAMS mass somewhat less than 15 M\({}_{\odot}\) accreting between 1 and 10 M\({}_{\odot}\) in the core helium burning and core carbon burning epochs. The observed equatorial velocity might also be attained by accreting a broad range of masses onto a primary of ZAMS mass somewhat more than 20 M\({}_{\odot}\) in the later carbon shell burning epoch. Chatzopoulos et al. (2020) used the mesa code to compute the 1D rotating post-merger evolution of systems with mass ratio \(0.06<q<\)0.25 that suffer an early Case B merger. In this case, unstable mass transfer occurs during during the crossing of the Hertzsprung gap. A "stellar engineering" approach was adopted by incorporating a perturbation term that captures the effects on the specific angular momentum and entropy. This term was used to re-adjust the post-merger structure of the envelope of the primary star during the in-spiral prior to the dynamic disruption of the secondary around the He core of the primary. The magnitude of the perturbation applied is proportional to \(q\) and the structure of the primary (their Equation 13). In their mesa simulations, the mass of the secondary was added to the core plus hydrogen-burning shell. The composition was not adjusted as done by Menon and Heger (2017) in their models of SN 1987A (SS3.4). Post-merger profiles were computed for different primary radii corresponding to the time when the envelope of the primary engulfed the secondary (200-700 \(R_{\odot}\)). The initial primary radii represented initial separations corresponding to binding energies that enabled the binary progenitor system to survive a possible past ejection from its birth cluster, to be consistent with the borderline "runaway" nature of Betelgeuse (SS2.10). The resulting models were used to investigate the rotation rate of the post-merger object. The models explored by Chatzopoulos et al. (2020) were able to reproduce the overall observed properties of Betelgeuse, including its position in the HRD, its surface rotation rate, and its surface abundances, especially the observed overabundance of nitrogen (SS2.9). Their 16\(M_{\odot}\)+4\(M_{\odot}\) merger occurring at \(\sim\) 200-300 \(R_{\odot}\) yielded the best fit. 
These models had a sustained high equatorial rotation for a few hundred thousand years after the merger. Chatzopoulos et al. (2020) also presented a 3D simulation of the merger between a 16\(M_{\odot}\) primary and a 1\(M_{\odot}\) secondary that occurred when the primary reached a radius of \(\sim\)12\(R_{\odot}\), right after the end of its TAMS. The simulation was performed with the 3D _OctoTiger_ Adaptive Mesh Refinement (AMR) hydrodynamics code developed by the LSU Center for Computation and Technology (CCT) (Marcello et al., 2021). Post-processing of the 3D internal structure of the post-merger object confirmed that the envelope of the primary was spun-up by a significant amount during the in-spiral phase. The degree of envelope spin-up is, however, proportional to the primary's radius at the onset of the merger. Three-dimensional simulations of mergers occurring at larger primary radii are needed to compute post-merger structures that evolve to become rapidly-rotating supergiants. The limitation in simulating the CEE evolution of such systems in 3D is purely of computational nature; the in-spiral timescale for a 15\(M_{\odot}\)+1\(M_{\odot}\) merger occurring at \(\sim\) 300\(R_{\odot}\) is \(\sim\)1000 years, requiring a very long simulation time. In addition, the density contrast between the compact secondary and the low-density outer regions of the envelope of the primary as well as the large simulation box that would be required to include the entire system makes it difficult to adequately resolve the full structure of the secondary, its tidal disruption plume, and the dense core of the primary, requiring billions of zones rendering such calculations prohibitively expensive. Despite these computational challenges, there are ongoing efforts involving the use of point masses to represent the secondary and the core of the primary. The merger can be accelerated by the removal of a constant, yet small, amount of angular momentum per orbit. This allows the long-term evolutionary calculation of post-merger angular momentum profiles for the primary. An example of such a simulation involving the merger between a 15\(M_{\odot}\) primary and a 4\(M_{\odot}\) secondary initiated with a separation between the secondary and the core of the primary of 50 \(R_{\odot}\) is shown in Figure 5. This model lost \(\sim\) 0.4 M\({}_{\odot}\) in the "mergeburst" (Soker and Tylenda, 2006) phase right after the merger when the surface equatorial velocity was \(\sim\) 60 km s\({}^{-1}\). The simulation focused on the angular momentum of the remaining bound object and did not quantify the amount of angular momentum lost. The spherical, mass-weighted, angle-averaged profiles for internal energy, density, and temperature and the cylindrical mass-weighted profile for specific angular momentum of the post-merger object resulting from this simulation are shown in Figure 6. Note that the x-axis is q, the normalized mass-coordinate variable. The specific angular momentum, j, decreases with q, but increases in radius, so the bulk of the structure is dynamically stable. A minor decrease/instability develops in the very outer regions that contain very little mass. That behavior is in agreement with Ivanova and Nandez (2016). This simulation was long and expensive. 
It used a sufficiently large box to follow the unbound material after the merger for as long as possible in order to characterize the mergeburst transient and also with sufficient zones to resolve enough of the secondary structure and the primary core such that they are dynamically stable in the grid. This balance did not allow sufficient resolution to resolve the stream-core interaction in detail but such simulations will be done in the future. These proposed simulations will allow a comparison to the 3D models of Ivanova and Nandez (2016) for lower mass systems. For related work on the effect of radiation pressure and recombination energy in the ejection of mass in the CEE of an RSG star see Lau et al. (2022). #### 3.3.1 Insensitivity of Final Equatorial Velocity to Accreted Mass The original motivation of Wheeler et al. (2017) for hypothesizing that Betelgeuse might have merged with a companion was the difficulty of accounting for the nominal currently-observed equatorial rotation velocity, \(\sim\) 15 km s\({}^{-1}\), allowing for inclination. A companion mass of \(\sim\) 1 M\({}_{\odot}\) was estimated from simple arguments based on conservation of angular momentum. Subsequent work showed that, broadly, the final rotational velocities of the models were rather independent of the companion mass accreted. Although the treatment of the post-merger system by Sullivan et al. (2020) and by Chatzopoulos et al. (2020) is rather different, the results for the final equatorial rotational velocity are very similar. This gives confidence that this quantity is somewhat robust against the details of the merger process and depends primarily on a global quantity such as the pre-merger orbital angular momentum. If a merger occurred in Betelgeuse, the product must have settled into a state for which the rotation is sub-Keplerian. This global criterion is independent of the masses of the primary and secondary involved in the merger. The implication is that the loss of mass and angular momentum must adjust to meet this criterion rather independently of the masses involved and the epoch of accretion. This also serves to constrain the final rotation of the envelope to potentially large, but finite values. For these studies to have any relevance to Betelgeuse, it is important that the structure remain that of an RSG after the proposed merger. As mentioned in SS3.2, a "quiet merger" can leave behind an RSG, depending on pre-merger conditions. Ivanova & Podsiadlowski (2003a) suggest that this condition favors secondary masses \(>2\) M\({}_{\odot}\) and a primary close to carbon ignition so that strong gradients inhibit core/envelope mixing. Ivanova & Nandez (2016) note that during a slow spiral-in, the angular velocity becomes constant in most of the CE and the value of the angular velocity is significantly smaller than the local Keplerian velocity in the envelope, so the approximation of spherical symmetry is reasonable. #### 3.3.2 Angular Momentum During the merger and redistribution of density, angular momentum, and composition, some angular momentum is lost to the surroundings in the rotation-enhanced wind, and some is retained to propagate inward toward the primary core. In the 1D "accretion" models explored by Sullivan et al. (2020), the angular momentum that is retained is redistributed by an inward diffusive wave of angular momentum. 
The profiles of the specific angular momentum and angular velocity quickly evolve to stable forms delineated by an inward propagating front, with the specific angular momentum increasing outward beyond the front and the angular velocity being nearly constant. A few years after accretion, the ingoing wave of angular momentum propagated to the boundary between the outer envelope and the H/He shell. The wave of angular momentum was halted at the composition boundary at the edge of the helium core, leaving behind an envelope of constant angular velocity and a monotonically rising angular momentum per unit mass.

Figure 5: The initial (_upper panel_) and final (immediately prior to the merger of the secondary with the core of the primary; _lower panel_) density structure of a \(15M_{\odot}+4M_{\odot}\) system with an initial core/secondary separation of \(50~{}R_{\odot}\) simulated with the 3D AMR _OctoTiger_ code. The core of the primary and the secondary are treated as point masses. The color bar on the right represents density in g cm\({}^{-3}\). Dashed lines represent equipotential surfaces. In the upper panel, arrows with length proportional to magnitude represent the velocity field. Velocities beyond the primary core and secondary are typically \(50\) km s\({}^{-1}\). The numbers 8.1 and 39.8 in the lower left corners represent the “orbit number” that was used in _OctoTiger_ (even after the merger) to represent the time/phase of the simulation. From Chatzopoulos et al. 2023, in preparation.

Figure 6: Energy density, specific angular momentum, density, and temperature profiles of the post-merger object following the merger of a \(15M_{\odot}+4M_{\odot}\) system occurring at \(50~{}R_{\odot}\). The original 3D profiles have been mass-weighted and angle-averaged. From Chatzopoulos et al. 2023, in preparation.

By the epoch of collapse, the angular momentum distribution in the outer envelope had scarcely changed. The wave of angular momentum swept through the H/He shell, but was halted at the outer boundary of the He shell at 7 M\({}_{\odot}\) for both the 20M\({}_{\odot}\)+1M\({}_{\odot}\) and the 20M\({}_{\odot}\)+10M\({}_{\odot}\) models. The composition distribution remained virtually unchanged. All the final models of Sullivan et al. (2020) have inner regions of negative gradient in \(j\) in regions of sharp composition gradients. These must be stabilized against the Rayleigh instability by the associated composition gradients. This condition has not been investigated in detail. Ivanova and Nandez (2016) presented a model of a primary of 1.8 M\({}_{\odot}\) and a secondary of 0.1 M\({}_{\odot}\) (model M10; their figure 7). While the mass scale is smaller than considered by Sullivan et al. (2020) and Chatzopoulos et al. (2020), the mass ratio, \(\sim 0.05\), is about the same as for their 20M\({}_{\odot}\)+1M\({}_{\odot}\) models. The angular velocity as a function of mass for model M10 50 days after the plunge-in is basically flat throughout the model. The value of the angular velocity, \(\sim 3\times 10^{-7}\) rad s\({}^{-1}\), is close to that of Sullivan et al. (2020), perhaps fortuitously, but the peak value of the angular momentum per unit mass for model M10 is about a factor of 30 less than found by Sullivan et al. (2020). The flat angular velocity profile of the 3D simulations seems to arise naturally in the mesa simulations as well. Significant departures in behavior between Ivanova and Nandez (2016) and Sullivan et al. (2020) are found in the innermost and the outermost regions.
Ivanova and Nandez (2016) do not consider the inner core, so they do not explore the distribution of angular momentum in the core. On the other hand, Ivanova and Nandez (2016) and the upper right panel of Figure 6 find a distinct decrease in both the specific angular momentum and the angular velocity in the outer 10 % of the mass of the envelope that the models of Sullivan et al. (2020) do not reveal. This difference probably arises in the loss of mass and angular momentum in the dynamical plunge-in phase that Sullivan et al. (2020) do not treat accurately. #### 3.3.3 Composition In their 1D calculation with a fully-resolved primary core, Sullivan et al. (2020) found the composition distribution of the inner core to be only slightly affected even by the accretion of a companion of large mass. Thus while the inner structure might be somewhat perturbed by accretion of substantial mass, there may be rather little effect on the outside to indicate that the accretion occurred. The implication is that the inner composition structure of Betelgeuse could be rather different depending on the mass accreted with basically no indication reflected in the outer, directly observable structure. Evidence of internal mixing due to a merger (or other effects) can be revealed by anomalous surface abundances (SS2.9). Chatzopoulos et al. (2020) were able to reproduce surface abundances of Betelgeuse, especially the observed overabundance of nitrogen (Lambert et al., 1984). Brott et al. (2011, 2011, 2012) and Ekstrom et al. (2012) find N/C surface enhancements that are similar to those found in the merger simulations of Chatzopoulos et al. (2020) due to the enhanced rotation from the spiraling-in phase. #### 3.3.4 Entropy Ivanova and Nandez (2016) give an extensive discussion of the treatment of entropy in CEE. They argue that the evolution of the entropy of the common envelope material differs between 3D and 1D simulations. In 1D, the entropy is generated because the energy is added as heat. Since the radius at which the recombination energy release overcomes the potential well depends on the entropy of the material, the entropy generation observed in 1D codes will likely predict different outcomes than 3D CE evolution. Ivanova and Nandez (2016) argue that 1D stellar codes should add the energy as mechanical energy rather than "heat" that moves the material to a higher adiabat. Chatzopoulos et al. (2020) find relatively little heating effects in their 3D merger simulation of Betelgeuse. We note, however, that heating during merger can lead to non-linear envelope pulsations and to potentially large mass loss (Clayton et al., 2017). #### 3.3.5 Recombination The role of hydrogen and helium recombination in abetting CE mass loss is discussed by Ivanova et al. (2015), Ivanova and Nandez (2016), and Lau et al. (2022). These reservoirs of energy can help to trigger envelope instability depending on where and when the recombination energy is released. The time-scale of recombination runaway can be up to several hundred days and gets longer as the mass of the companion decreases. In such cases, radiative losses can become important so that 3D simulations that lack radiative transfer are no longer appropriate. For all their limitations, 1D stellar evolution codes like mesa can handle this aspect of the physics. #### 3.3.6 Magnetic Fields As noted in SS2.11, Betelgeuse displays various effects of magnetic fields. The magnetic properties are often omitted in CEE simulations. Sullivan et al. (2020) and Chatzopoulos et al. 
(2020) included magnetic effects as treated by the mesa Spruit/Tayler algorithm in some cases, but did not include magnetic effects of the magnetorotational instability (Wheeler et al., 2015; Moyano et al., 2023). The omission of the latter will undoubtedly alter the quantitative, if not qualitative results. The Spruit/Tayler mechanism gives results that typically weight the radial component, \(B_{r}\), orders of magnitude less than the toroidal component, \(B_{\phi}\). The magnetorotational instability tends to give the radial component about 20 per cent of the toroidal component. Another important caveat is that mesa computes the magnetic field structure based on the instantaneous structure of the model. In reality, the field only decays on a dissipation timescale that might in some circumstances be long compared to the evolutionary timescales. This would lead to fossil magnetic field in a region that made a transition from being unstable to stable to the Spruit/Tayler instability. mesa has no means to treat the existence and decay of such fossil fields. The magnetic structure computed by Sullivan et al. (2020) is thus interesting, but should not be given any quantitative weight. Sullivan et al. (2020) found that accretion has little effect on the production of magnetic fields by the Spruit/Tayler mechanism. Their models show a more substantial field in the outer part of the helium shell, reaching up to the base of the hydrogen envelope. The peak fields are of order 1 G and 1000 G for the radial and toroidal fields, respectively, with considerable variation with radius that is likely to be affected by issues of numerical resolution. Below an inward gap where the fields are very small the fields become large, but variable, in the innermost layers of the oxygen core. The radial fields peak at \(\sim 1000\) G and the toroidal fields at \(\sim 10^{6}\) to \(10^{7}\) G. In the models, the fields peak off center and the toroidal field declines to about 1 G in the center. The accretion appears to have a quantitative, but not qualitative, effect on the field strength and distribution just prior to collapse. Subsequent core collapse by a factor of \(\sim 100\) in radius would amplify the field by compression alone by a factor of \(\sim 10^{4}\). The resulting field of \(\sim 10^{11}\) G would not be dynamically significant, but would give ample seed field for growth of the field in the proto-neutron star by the MRI (Akiyama et al., 2003; Obergaulinger et al., 2009; Mosta et al., 2018) ### Lessons from SN 1987A To account for the circumstellar nebular rings, many studies of the mergers of massive stars have focused on the prospect that the progenitor of SN 1987A may have undergone a merger (Morris and Podsiadlowski, 2007). Merger models can also account for why the progenitor was a blue rather than red supergiant by invoking mixing of helium from the core into the outer envelope (Menon and Heger, 2017). In the case of Betelgeuse, a contrasting conclusion applies. While some discuss the possibility that Betelgeuse will explode as a blue supergiant (van Loon, 2013), Betelgeuse is still a red supergiant. If one accepts the basic _ansatz_ that a merger is required to account for the observed rotational velocity of Betelgeuse, then it follows that a merger did not produce a compact blue envelope and thus, by the arguments of Ivanova et al. (2002a) and Menon and Heger (2017), little to no helium could have been mixed outward from the core, consistent with the simulations of Sullivan et al. 
(2020) and Chatzopoulos et al. (2020). The modeling of a putative Betelgeuse merger by Chatzopoulos et al. (2020) (§3.3) concluded that the plume from the disrupted secondary would not penetrate the helium core and induce substantial helium mixing according to the prescription of Ivanova et al. (2002a). Mixing may be more likely for more massive secondaries, so the results of Sullivan et al. (2020) and Chatzopoulos et al. (2020) may be less reliable for larger mass secondaries. Plume mixing is a complex hydrodynamical problem that deserves more study if we are to understand both Betelgeuse and SN 1987A as products of massive star mergers.

## 4 The Great Dimming

While Betelgeuse is known to display a wide range of fascinating behavior, it surprised the Betelgeuse community and fascinated people worldwide when it went through a phase of anomalously low optical luminosity beginning in October 2019 and lasting through March 2020 that became known as the Great Dimming. The brightness decreased by over a factor of two and was easily noticed by even casual observers of the night sky. Twitter was alight with rampant speculation that Betelgeuse was about to explode, thus requiring some effort by supernova experts to tamp down that particular fever of hype. The Great Dimming was dramatic enough in its own right (Guinan et al., 2020; Levesque and Massey, 2020; Harper et al., 2020; Dupree et al., 2020; Dharmawardena et al., 2020; Harper et al., 2020; Safonov et al., 2020; Montarges et al., 2021; Levesque, 2021; Harper et al., 2021; Alexeeva et al., 2021; Kravchenko et al., 2021; Dupree et al., 2022; Matthews and Dupree, 2022; Taniguchi et al., 2022; Cannon et al., 2023). Both professionals and amateurs had monitored the brightness of Betelgeuse for about a century. Much of that data is stored in the valuable records of the American Association of Variable Star Observers (AAVSO) (Figure 7). These studies had established that Betelgeuse was a variable star with regular pulsations on a variety of time scales, as discussed in §2.12. The amplitude of the V-band decrease corresponding to the \(\sim 400\) day pulsation is typically 0.3-0.5 mag. In the Great Dimming, the decrease was over 1 mag. There is some suggestion in the AAVSO records that Betelgeuse underwent other periods of anomalously large dimming, perhaps in the early 1950s and the late 1980s (Joyce et al., 2020). Dimming with a period of about 30-40 years might be related to the rotation period of Betelgeuse (§2.8), but uncertainties in the early photometry and in the radius and rotational velocity, and hence the period, of Betelgeuse make that difficult to determine. There is also some concern that individual AAVSO observers in the 1950s and 1980s showed a tendency to report anomalously faint data that might have biased the mean AAVSO values and produced a false impression of minima (T. Calderwood, private communication, 2023). There was no missing the Great Dimming of 2019/2020, but the attention to it may have been amplified by an initiative of Andrea Dupree of the Center for Astrophysics, who convened many of the world's experts on Betelgeuse to participate in an intense global, multi-instrument, multi-wavelength coordinated study of Betelgeuse beginning in April 2018, when Betelgeuse was especially well situated for studies with the HST, a project she named the Month of Betelgeuse (MOB). Forty-four astronomers joined the MOB. A month was not, of course, sufficient to address all the mysteries of Betelgeuse, and the project was rapidly renamed Months of Betelgeuse.
Because of the MOB activity, attention to Betelgeuse was still focused as the Great Dimming got underway. Dupree et al. (2020) witnessed a UV enhancement from Sept.-Nov. 2019 with HST that may have been a precursor event to the Great Dimming prior to any substantial decrease in brightness. Ed Guinan of Villanova had been doing careful photometry and other studies of Betelgeuse for over 25 years. Guinan et al. (2019) reported V-band, Wing TiO-band, and NIR photometry on 7 December, 2019. They presented one of the first interpretations of the Great Dimming, noting that "The light variations are complicated and arise from pulsations as well from the waxing and waning of large super-granules on the star's convective surface." They predicted a minimum on 21 February, 2020 \(\pm\) a week, based on the assumption that a 420 day period was in a fortuitous concatenation with longer-term (5 - 6 yr) and shorter term (100 - 180 d) brightness changes and perhaps a super-granule upwelling of a cool plume. This estimate turned out to be remarkably close to the observed light minimum of \(1.614\pm 0.008\) mag during 07-13 February 2020 (Guinan et al., 2020). Levesque and Massey (2020) reported optical spectroscopy on 15 February, 2020. They also examined the TiO bands and reported a \(T_{eff}\) of 3600 K compared to a typical value \(\sim 3660\) K. They argued that the small change in \(T_{eff}\) was not commensurate with the large change in V-band luminosity and that a temporary cool period on the surface of Betelgeuse due to convective turnover was likely not the primary cause of the Great Dimming. Rather, they proposed an increase in large-grain gray dust. Guinan et al. (2020) and Levesque and Massey (2020) thus framed the extremes of the possible explanations of the Great Dimming that continue to be debated. Early discussions of \(T_{eff}\) during the Great Dimming assumed spherical symmetry.

Figure 7: Century-long record of the visual and V band brightness of Betelgeuse compiled by the American Association of Variable Star Observers supplemented by data from the Solar Mass Ejection Imager (SMEI). The final large dip is the Great Dimming of 2020 (§4). From Joyce et al. (2020) by permission of Meridith Joyce, Laszlo Molnár, and the Astrophysical Journal.

While the assumption of spherical symmetry was an obvious starting point, subsequent consideration of spots, super-granules, and circumstellar dust calls that assumption into question. The question of how accurately a global value of \(T_{eff}\) can be determined if \(T_{eff}\) varies over the surface of the star remains open. Montarges et al. (2021) illustrated this conundrum by producing dramatic spatially-resolved interferometric images of Betelgeuse obtained with the SPHERE instrument on the VLT in December 2019 and January 2020 that showed that the southern half of the star had become markedly fainter than in January 2019, indicating that a major change had occurred in, or near, the photosphere (Figure 8). Montarges et al. (2021) attributed this dark patch to "a dusty veil." Harper et al. (2020) reported Wing three-filter (A, B, and C band) TiO and near-IR photometry that showed that portions of the photosphere had a mean \(T_{eff}\) that was significantly lower than that found by Levesque and Massey (2020). They interpreted the image of Montarges et al. (2021) to be a large patch in the photosphere that could be 250 K cooler than its surroundings.
They concluded that no new dust was required and emphasized the interpretation of Guinan et al. (2019) that the Great Dimming resulted from a coincidence of the 430 day and 5.8 year periods of Betelgeuse. They suggested that the cooling of a large portion of the surface was produced dynamically by photospheric motions due to pulsation and large-scale convective motions (Figure 9). Dharmawardena et al. (2020) reported 13 years of submillimeter observations of Betelgeuse including the epoch of the Great Dimming obtained with the James Clerk Maxwell Telescope and the Atacama Pathfinder Experiment. These long wavelength observations were significant because they were not expected to be obscured by dust as the optical observations could be. Dharmawardena et al. (2020) found that Betelgeuse had also dimmed by \(\sim 20\%\) at these longer wavelengths during the optical minimum. They concluded that the dimming was due to changes in the photospheric luminosity as opposed to observation by surrounding dust. See also Matthews and Dupree (2022) for 1.3 cm and 7 mm observations with the VLA on August 2, 2019, just prior to the onset of the optical dimming and Harper et al. (2021) for observations of circumstellar [O I] 63.2 \(\mu\)m and [C II] 157.7 \(\mu\)m emission profiles and [O I] 63.2 \(\mu\)m, [O I] 145.5 \(\mu\)m, and [C II] 157.7 \(\mu\)m fluxes obtained shortly after the Great Dimming with SOFIA. Dupree et al. (2022) presented a synthesis of observations and an interpretation of the Great Dimming that was a variation on the picture first presented by Kervella et al. (2018). Dupree et al. (2022) outlined a scenario in which the UV burst reported by Dupree et al. (2020) represented a surface activity perhaps catalyzed by an upwelling that ejected a blob of matter. That matter cooled as it expanded away from the surface allowing dust to form. The resulting clump of dust crossed the line of sight to Betelgeuse resulting in the dim patch observed by Montarges et al. (2021) and the resulting Great Dimming (Figure 10). Alexeeva et al. (2021) presented high-resolution high S/N ratio near-infrared spectra obtained at Weihai Observatory on four epochs in 2020 during and after the Great Dimming. They argued that a decrease in the overall mean \(T_{eff}\) by at least 170 K on 2020 January 31 could be attributed to the emergence of a large dark spot on the surface of Betelgeuse. Levesque (2021) argued for a synthesis in which the dark patch of Montarges et al. (2021) and the Great Dimming were caused by dust forming over a cold patch in the southern hemisphere. Cannon et al. (2023) presented VLTI Multi AperTure mid-Infrared SpectroScopic Experiment (MATISSE) observations in the N-band (8 - 13 \(\mu\)m) near brightness minimum. They explored a model invoking multiple clumps of dust close to the star and another considering a large cool spot on the stellar surface with no dust. They found that both the dust clump and the cool spot models are compatible with the data, noting that the extinction and emission of a localised dust clump in the line of sight compensate each other making the clump undetectable. They concluded that the lack of infrared brightening during the Great Dimming (Dharmawardena et al., 2020) does not exclude extinction due to a dust clump as one of the possible mechanisms. Spectropolarimetry provides yet another tool to explore the geometry of the surface of Betelgeuse (Lopez Ariste et al., 2018; Haubois et al., 2019; Cotton et al., 2020). Safonov et al. 
(2020) argued that to address the challenges of the Great Dimming it was fundamentally important to employ methods to resolve an inhomogeneous stellar atmosphere. They presented a set of differential speckle polarimetric observations of Betelgeuse obtained at the 2.5 m telescope of the Caucasian Mountain Observatory operated by the Sternberg Astronomical Institute of Moscow State University. Observations on 17 days at wavelengths 465, 550, 625 and 880 nm spanned the Great Dimming event. The envelope was found to be highly inhomogeneous but correlated, with features varying on a timescale of two or three months (Figure 11). An animation captured the dramatic variability registered in the polarization 1. The net polarized brightness of the envelope remained constant as the \(V\) band flux approached its minimum. After the minimum, the polarized flux of the envelope rose by a factor of two as the optical flux was restored. Safonov et al. (2020) concluded that the Great Dimming was caused by the formation of a dust cloud located on the line of sight. Footnote 1: [http://lnfm1.sai.msu.ru/kgo/mfc_Betelgeuse_en.php](http://lnfm1.sai.msu.ru/kgo/mfc_Betelgeuse_en.php) Kravchenko et al. (2021) used high resolution spectroscopy to do a tomographic study of the structure during the Great Dimming. This analysis suggested that two shocks propagated in the upper atmosphere, one generated in February 2018 and one in January 2019, with the second amplifying the effects of the first. Kravchenko et al. (2021) suggest that this shock structure, modified by underlying convection or outward gas motion, altered the molecular opacity in the line of sight. Mittag et al. (2023) have engaged in long-term studies of the chromosphere of Betelgeuse since 2013, including the epoch of the Great Dimming when they determined the absolute and normalized excess flux of the Ca II H&K lines. They found a behavior similar to that during a previous decrease in the visual brightness of Betelgeuse in 1984 and 1985. Unlike the Mg II emission (Dupree et al., 2022), the Ca II emission attributed to the lower chromosphere of Betelgeuse did not change significantly between November 2019 and February 2020, but did vary after the Great Dimming. Mittag et al. (2023) argue that this delay of the chromospheric reaction suggests that the cause of the Great Dimming is located in the photosphere.

Figure 8: Resolved images of Betelgeuse through the Great Dimming. January 2019 was prior to the dimming; December 2019 was somewhat after the onset of the dimming; January 2020 was near the minimum of the dimming; and March 2020 was after the maximum dimming. From Montarges et al. (2021) by permission of M. Montarges and ESO.

Figure 9: Schematic of gigantic cool dim patches on the surface of Betelgeuse that were proposed as contributing to the Great Dimming. By permission of T. Dharmawardena and Max Planck Institute for Astrophysics, Heidelberg. Credit: Judith Neidel, MPIA graphics department.

Figure 10: Schematic of a model of the Great Dimming in which a blob of gas is ejected, cools to form dust, and passes through the line of sight to cause the Great Dimming. Courtesy Ray Villard, STScI and NASA, art by E. Wheatley, STScI.

Himawari-8 is a Japanese geostationary meteorological satellite orbiting 35,786 km above the equator at \(140.74^{\circ}\)E. Several bright stars, including Betelgeuse, occasionally fortuitously appear in the images. Taniguchi et al.
(2022) used the 16-band photometry from 0.45 to 13.5 \(\mu\)m from Himawari-8 to construct a light curve of Betelgeuse spanning 4.5 yr from 2017 to 2021 with a sampling that averaged once per 1.72 d. Taniguchi et al. (2022) used the infrared optical depth in contrast to variations in the visual extinction, A(V), to directly trace the amount of circumstellar dust. The IR optical depth increased during the Great Dimming with a small delay between the peak in IR optical depth and the extinction in the visual. Taniguchi et al. (2022) argue that their data suggest that a clump of gas produced dust that obscured the photosphere of Betelgeuse and contributed to the Great Dimming, and that the enhancements of visual extinction and IR optical depth during the Great Dimming may have occurred very close to the photosphere. They conclude that their results support a scenario in which the Great Dimming was caused by a combination of a decrease in \(T_{eff}\) and an increase in A(V) in roughly equal amounts, consistent with the change in polarization reported by Cotton et al. (2020) and Safonov et al. (2020). The Great Dimming was clearly a complex phenomenon. It seems unlikely that its alignment with the phase of the \(\sim 400\) d pulsation period was a coincidence, but that alone cannot account for the magnitude and spatial inhomogeneity of the obscuration. There seems to be solid evidence that dust with an inhomogeneous spatial distribution played a role. A decrease in \(T_{eff}\) occurred, perhaps in the form of large, cooler patches. Such patchy structure in surface temperature calls into question the meaning of a global \(T_{eff}\) and at least sets upper limits on the precision with which a global \(T_{eff}\) can be determined. In an epilogue, after the Great Dimming, Betelgeuse has shown repeated minima at an interval of \(\sim 200\) d and has steadily increased in mean brightness to reach an historic maximum of V \(\sim 0\) in April 2023 (M. Montarges, private communication, 2023).

## 5 The Explosion to Come

Betelgeuse will eventually explode, most probably still as a red supergiant and thus producing a Type II supernova (Branch and Wheeler, 2017). The details will be affected by the intense convection in the late shell-burning phases less than a year before explosion, which may also produce significant outward acoustic flux and mass loss. The convection seeds turbulence in the collapsing material that is expected to enhance the subsequent explosion (Arnett and Meakin, 2011; Couch et al., 2015; Chatzopoulos et al., 2016). At the expected iron-core collapse, \(10^{53}\) ergs of neutrinos will be produced that emerge from the envelope about an hour after collapse and flood into space. About 600 years later, detection of those neutrinos will give humans on Earth (if there are any 100,000 + 600 years from now) their first hint of the events to come. At that time, a human body would receive \(\sim 100\) trillion neutrinos, vastly less than a lethal dose of radiation. The shock wave generated by the explosion carrying \(\sim 10^{51}\) erg of kinetic energy will take about a day to reach the surface. A blast of UV lasting about an hour will then occur. The resulting UV flux will be less than the flux of the Sun at Earth but perhaps sufficient to cause some disruption of atmospheric chemistry.

Figure 11: Speckle polarimetry of the surface of Betelgeuse on 27 October 2019 at the very beginning of the Great Dimming (left panel) and on 5 February 2020 at the extreme of the dimming (right panel). From Safonov et al. (2020) by permission of B. Safonov, Caucasian Mountain Observatory, and the Sternberg Astronomical Institute of Moscow State University.

In two weeks, the explosion will be producing a billion times the solar luminosity. At the Earth, Betelgeuse will appear as a pinpoint about as bright as a quarter Moon lasting for \(\sim\)3 months. The explosion will then fade, but remain visible at the Earth for many years and to scientific instruments for centuries. The explosion of Betelgeuse is likely to produce a pulsar that will also be visible for a million years or more. The supernova blast wave will propagate out through the surrounding CSM and ISM and interact with any mass lost in the next 100,000 years and eventually with the complex CSM illustrated in Figure 3. The shock wave will propagate at \(\sim\) 5000 km s\({}^{-1}\) and hence collide with the bow shock in about 60 y and with the odd linear structure about 20 y later, assuming a distance to Betelgeuse of 165 pc (Joyce et al., 2020). By the time the supernova shock reaches the Earth more than 100,000 years after the explosion, the Solar magnetosphere should easily deflect it. The Earth, immersed within the supernova remnant, may witness an increase in cosmic ray flux.

## 6 Summary, Conclusions, and Future

As nearby and well studied as it is, Betelgeuse still presents a host of outstanding issues: its irregular surface; the manner in which it ejects matter to form a chromosphere, wind, dust, and molecules; how it came to move through space and spin so rapidly; the nature of its variability and magnetic fields; and the possibility that it underwent a significant change in color within historical times. Statistics suggest that it was likely to have been born in a binary system and undergone complex common envelope evolution. Are the distant circumstellar structures related to the interaction of winds with the interstellar medium or the products of the dramatic turmoil of a merger? A pertinent question is the structure and condition of Betelgeuse as we see it today, gracing Orion. While uncertainties in the distance remain troubling, Betelgeuse is most likely near the tip of the RSB. Since core helium burning lasts far longer than subsequent burning phases, Betelgeuse is most likely in core helium burning. The pulsation period likely constrains the radius and distance and the evolutionary state to core helium burning (Joyce et al., 2020), but there are arguments to the contrary (Neuhauser et al., 2022; Lau et al., 2022). Estimates of the surface gravity of Betelgeuse span a rather large range. The inclination angle is uncertain. A more accurate measurement of \(\log g\), radius, and distance could yield new constraints on the current total mass. Comparison to the luminosity that most directly measures the core mass could reveal hints of the current mass of the outer hydrogen envelope, past mass loss, and the current evolutionary state. Continued exploration of atomic, isotopic, and molecular abundances may yield more information on internal mixing processes. The notion that Betelgeuse may have undergone a merger remains viable. The extra angular momentum may have come from merger with a companion in the red supergiant phase, nearly independent of the mass of the secondary.
Once a transient phase of merging has settled down and substantial mass and angular momentum have been ejected, there is rather little external difference in models in late core helium burning and subsequent phases. This frustrates attempts to determine the internal structure and state of the star. How can we prove Betelgeuse has undergone a merger? There remain great challenges in understanding the associated physical processes if Betelgeuse underwent a merger. These must be explored with extensive, highly-resolved 3D studies of the formation and evolution of tidal plumes as any secondary is disrupted near the core of the primary. How deeply does the plume penetrate? What abundances are mixed to the surface? What is the level of envelope enrichment with helium that may determine whether the star remains red or moves back to the blue? What is the expected structure of the inner core as it evolves, rapidly rotating, beyond core helium burning? The Great Dimming of 2019/2020 did not portend imminent explosion. The origin of the dimming might have been related to a resonance of pulsation periods, expulsion of a dust cloud, and extra large star spots. How can we test the current evolutionary state of Betelgeuse with observations of pulsations and surface irregularities? Are there faint signals from acoustic waves generated internally that carry information about the core structure that would prove or disprove the hypothesis of a merger? What are the clues to imminent collapse that might pertain now, or in the far future if Betelgeuse is in core helium burning now? Models predict extensive mass loss shortly before collapse driven by rapid convection in the inner burning shells and there are hints of such pre-collapse mass loss in Type II supernovae that are thought to arise in red supergiants like Betelgeuse. Mysteries abound! ## 7 Addendum: Post-Publication Notes Andrea Dupree's Month of Betelgeuse project, MOB, that helped to focus attention on the Great Dimming was promptly renamed Months of Betelgeuse as the work of the group continued after April 2018. Tom Calderwood (private communication, 2023) pointed out that possible deep minima in the light curve of Betelgeuse registered in AAVSO visual estimates in the early 1950s and mid to late 1980s came from only one or two amateur observers. Those data and hence the depth of any minima may be suspect. After the Great Dimming, Betelgeuse brightened to near historic highs, oscillating with a dominant timescale closer to 200 days rather than 400 days (Sigismondi et al. 2023; Astronomer Telegram #16001: Monitoring Betelgeuse at its brightest). Saio et al. (2023; arXiv:2306.00287) presented an argument that the 2000 day "long secondary period" of Betelgeuse was the fundamental radial oscillation period, and that Betelgeuse may already be in carbon burning with only years to live. Both these points of view were contested by Molnar et al. (2023; Research Notes of the AAS, Volume 7, Number 6). ## Acknowledgments We are grateful to Natasha Ivanova for discussions of common envelope evolution and to the Aspen Center for Physics for providing the environment to do so. We also thank Ed Guinan, Meridith Joyce, and Andrea Dupree and the Month of Betelgeuse (MOB) team for discussions of Betelgeuse and mergers. We thank Ralph Neuhauser, Brad Schaefer, and Anita Richards for discussions of the historical color evolution of Betelgeuse. We are especially thankful for the ample support of Bill Paxton and the mesa team. 
JCW is grateful to the group of enthusiastic undergraduates at The University of Texas at Austin who catalyzed his interest in Betelgeuse: Sarafina Nance, Jamie Sullivan, Manuel Diaz, Steven Smith, Justin Hickey, Li Zhou, Maria Koutoulaki, and Julia Fowler. The research of JCW was supported in part by the Samuel T. and Fern Yanagisawa Regents Professorship in Astronomy and by NSF AST-1813825. EC is grateful to Dr. Juhan Frank, Dr. Sagiv Shiber, and Dr. Bradley Munson for insightful discussions and for their collaboration. The research of EC was supported in part by the National Science Foundation Grant AST-1907617 and in part by the Department of Energy Early Career Award DE-SC0021228. mesa(Paxton et al., 2011, 2013, 2015, 2018) ; _OctoTiger_ Adaptive Mesh Refinement (AMR) hydrodynamics code developed by the LSU Center for Computation and Technology (CCT) (Marcello et al., 2021).
2305.10448
Sequence-to-Sequence Pre-training with Unified Modality Masking for Visual Document Understanding
This paper presents GenDoc, a general sequence-to-sequence document understanding model pre-trained with unified masking across three modalities: text, image, and layout. The proposed model utilizes an encoder-decoder architecture, which allows for increased adaptability to a wide range of downstream tasks with diverse output formats, in contrast to the encoder-only models commonly employed in document understanding. In addition to the traditional text infilling task used in previous encoder-decoder models, our pre-training extends to include tasks of masked image token prediction and masked layout prediction. We also design modality-specific instruction and adopt both disentangled attention and the mixture-of-modality-experts strategy to effectively capture the information leveraged by each modality. Evaluation of the proposed model through extensive experiments on several downstream tasks in document understanding demonstrates its ability to achieve superior or competitive performance compared to state-of-the-art approaches. Our analysis further suggests that GenDoc is more robust than the encoder-only models in scenarios where the OCR quality is imperfect.
Shuwei Feng, Tianyang Zhan, Zhanming Jie, Trung Quoc Luong, Xiaoran Jin
2023-05-16T15:25:19Z
http://arxiv.org/abs/2305.10448v1
# Sequence-to-Sequence Pre-training with Unified Modality Masking for Visual Document Understanding

###### Abstract

This paper presents GenDoc, a general sequence-to-sequence document understanding model pre-trained with unified masking across three modalities: text, image, and layout. The proposed model utilizes an encoder-decoder architecture, which allows for increased adaptability to a wide range of downstream tasks with diverse output formats, in contrast to the encoder-only models commonly employed in document understanding. In addition to the traditional text infilling task used in previous encoder-decoder models, our pre-training extends to include tasks of masked image token prediction and masked layout prediction. We also design modality-specific instruction and adopt both disentangled attention and the mixture-of-modality-experts strategy to effectively capture the information leveraged by each modality. Evaluation of the proposed model through extensive experiments on several downstream tasks in document understanding demonstrates its ability to achieve superior or competitive performance compared to state-of-the-art approaches. Our analysis further suggests that GenDoc is more robust than the encoder-only models in scenarios where the OCR quality is imperfect.

## 1 Introduction

Document understanding is a research topic that involves analyzing, understanding, and reasoning over business documents (e.g., invoices, contracts, financial statements, etc.). The topic includes a wide range of tasks such as document image classification Harley et al. (2015), layout detection Zhong et al. (2019), information extraction Guillaume Jaume (2019), table detection Gao et al. (2019), scene text recognition Neumann and Matas (2012), etc. Pre-training with transformer-based models Vaswani et al. (2017) for document image understanding has received significant attention recently Xu et al. (2020); Appalaraju et al. (2021); Xu et al. (2021); Powalski et al. (2021); Hwang et al. (2021); Huang et al. (2022); Wang et al. (2022), as the pre-trained models are able to achieve remarkable improvements on the above downstream tasks. Most existing research efforts focus on pre-training multi-modal transformer encoders Tan and Bansal (2019); Chen et al. (2020); Su et al. (2020); Peng et al. (2022) to encode the textual, layout, and visual information (e.g., LayoutLM Xu et al. (2020, 2021); Huang et al. (2022)). They proposed various transformer designs (e.g., spatial attention) and pre-training tasks (e.g., image-text contrastive learning) to enable interaction among the modalities. While these pre-trained encoders achieve great performance on some downstream tasks, encoder-decoder models are more appropriate for generation tasks such as question answering Mathew et al. (2021) and more flexible to adapt to different kinds of downstream tasks. Furthermore, encoder-decoder models tend to suffer less from imperfect optical character recognition (OCR), which is essential for encoder-only models to achieve good performance on certain tasks (§3.2). Powalski et al. (2021) proposed a spatial-aware T5 model Raffel et al. (2020) called the Text-Image-Layout Transformer (TILT) to incorporate the layout as spatial bias in the attention mechanism. However, their pre-training objective only involves the masked language modeling loss, which potentially causes the model to rely solely on the textual modality. Instead, we add relative disentangled attention Peng et al.
(2022) in the encoder for better position understanding and incorporate the mixture-of-modality-experts Wang et al. (2021) in the decoder to capture the modality-specific information. We then propose a unified modality masking schema and a unified pre-training loss function for each modality (SS2.3). Specifically, we adopt the standard text infilling task Lewis et al. (2020) for textual modality. Inspired by the masked image modeling in encoder models Bao et al. (2021) and image generation Yu et al. (2022); Ramesh et al. (2022), we use the Vector Quantised-Variational AutoEncoder (VQ-VAE) Van Den Oord et al. (2017) to obtain the image tokens and perform masked image token prediction. Finally, we also design a masked coordinate prediction task to enhance the spatial awareness of our encoder-decoder model. In order to handle different modalities and provide guidance for downstream tasks, we design specific instructions during both pre-training and fine-tuning (SS2.1). Our main contribution can be summarized as follows: * We propose GenDoc, a general sequence-to-sequence (Seq2Seq) multi-modal transformer for document understanding. Such a Seq2Seq model allows us to unify the output format of various pre-training and downstream tasks and exhibits robust performance in the presence of imperfect OCR. * We implement the pre-training of all three modalities (text, visual, and layout) by applying text infilling, masked image token prediction, and masked coordinate prediction, respectively. We also develop effective designs for modality fusion within the encoder and modality separation within the decoder. * We conduct experiments on four document understanding tasks and achieve either state-of-the-art or comparable performance. Furthermore, we perform ablation experiments to demonstrate the significance of each pre-training task and modeling architecture. ## 2 GenDoc GenDoc is a general document understanding model that unifies the training of text, vision and layout modality within a single framework. Figure 1 shows the model architecture and the pre-training process for different modalities. Our model is a transformer model Vaswani et al. (2017) consisting of an encoder with additional layout incorporated and a decoder that features a mixture of modality experts Wang et al. (2021) to capture the modality-specific information. In addition, we also include an image backbone to encode the visual images. ### Input Representation As illustrated in Figure 1, the input to our model comprises of four primary components: the task-specific instruction, the optional OCR text, the document image and the layout information of text and image patches. The instruction and OCR text are tokenized into subwords and then encoded as embeddings. The visual document image is encoded as visual embeddings using an image backbone and the layout information is represented by standalone spatial embeddings. The final input sequence consists of token embeddings and visual embeddings, as well as position embeddings representing 1D and 2D positions integrated. InstructionIn order to differentiate between the various pre-training tasks, unique instructions are crafted based on their modalities and appended ahead of other inputs. For example, the instruction "_What is the complete text for <mask> tokens?_" is used in the text infilling task, while "_What are the values for masked image tokens?_" is used in the masked image token prediction task. 
During fine-tuning, instructions that are semantically meaningful for each downstream task are designed. These instructions can effectively guide the generation for the corresponding modality and aid the model in adapting to similar tasks for practical application.

**OCR Text** The textual content in a document is obtained through the use of an off-the-shelf OCR parser. Prior research efforts Huang et al. (2022); Powalski et al. (2021) have mainly adopted the use of commercial OCR services such as Microsoft READ API1 or Amazon OCR2. However, due to budget constraints, we employed our own internal OCR service for pre-training. Footnote 1: [https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/call-read-api](https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/how-to/call-read-api) Footnote 2: [https://aws.amazon.com/textract/](https://aws.amazon.com/textract/) For both the instruction and OCR text, we use the subword tokenizer from BART Lewis et al. (2020) to tokenize them. As depicted in Figure 1, the instruction and OCR text are concatenated together as textual input to the encoder.

**Document Image and Visual Tokens** The original document image is first resized to a pre-defined size \(H\times W\times 3\), where \(H\) is the height and \(W\) is the width. The image is then passed through the image backbone, a ResNet, to extract features as image patches of size \(h\times w\). These image patches are subsequently flattened as a sequence of vectors for the input of the transformer encoder. On the other hand, the document image is quantized into discrete visual tokens during pre-training to build the target for the Masked Image Token Prediction task introduced in the next section (§2.3). A VQ-VAE (Van Den Oord et al., 2017) tokenizer is trained on the complete pre-training data and used to tokenize the images. The tokenized image can be represented as a sequence of tokens \([\mathbf{z_{0}},\mathbf{z_{1}},...\mathbf{z_{h\times w}}]\) \((\mathbf{z_{i}}\in\{0,1,\cdots,|\mathcal{V}|\})\), where \(|\mathcal{V}|=8192\) in our experiments.

**Layout** The layout information is represented by the 2D coordinates of each word and image patch. A padding layout is used for the instruction and other special tokens. Since the text extracted from the OCR parser might not appear in a natural language order (typically ordered in a left-to-right and then bottom-to-top manner), a learnable 1D embedding is utilized for positional embeddings. Specifically, separate positional embeddings are utilized for the visual and textual modality (i.e., \(\mathbf{E}_{text\_1d}\) and \(\mathbf{E}_{visual\_1d}\)). Furthermore, 2D layout representations are also necessary for documents. Following the method proposed in Xu et al. (2020), 2D layout embeddings are calculated by: \[\mathbf{E}_{2d}=\mathbf{E}_{x_{1}}+\mathbf{E}_{y_{1}}+\mathbf{E}_{x_{2}}+\mathbf{E}_{y_{2}}+\mathbf{E}_{width}+\mathbf{E}_{height}\] where \((x_{1},y_{1})\) and \((x_{2},y_{2})\) represent the top-left and bottom-right coordinates, respectively, and \(width\) and \(height\) represent the lengths of the two dimensions. The complete tuple \((x_{1},y_{1},x_{2},y_{2},width,height)\) uniquely represents a bounding box or a region in a document image. These layout embeddings are used to represent the 2D coordinates of words and image patches.
Formally, our final input embedding representations can be obtained by: \[\mathbf{E}=[\mathbf{E}_{t}+\mathbf{E}_{text\_1d};\mathbf{E}_{v}+\mathbf{E}_{visual\_1d}]+\mathbf{E}_{2d} \tag{1}\]

### Model Design

**Modality Fusion in Encoder** In order to reinforce the effectiveness of layout information, we incorporate the disentangled attention into our transformer encoder following previous research efforts (He et al., 2021; Peng et al., 2022). Specifically, we utilize attentions from content-to-content, content-to-layout in the x-dimension, and content-to-layout in the y-dimension: \[\begin{split} A^{cc}_{ij}&=\mathbf{Q}^{c}_{i}\mathbf{K}^{c\intercal}_{j}\\ A^{cx}_{ij}&=\mathbf{Q}^{c}_{i}\mathbf{K}^{x\intercal}_{\delta_{x}(i,j)}\\ A^{cy}_{ij}&=\mathbf{Q}^{c}_{i}\mathbf{K}^{y\intercal}_{\delta_{y}(i,j)}\\ A_{ij}&=A^{cc}_{ij}+A^{cx}_{ij}+A^{cy}_{ij}\end{split}\tag{2}\] where \(A^{cc}_{ij}\) is the attention score from content query \(\mathbf{Q}^{c}\) to content key \(\mathbf{K}^{c}\) between position \(i\) and \(j\); \(A^{cx}_{ij}\) and \(A^{cy}_{ij}\) stand for the disentangled attention scores from content to layout; \(\delta_{*}(i,j)\) is the relative distance function between \(i\) and \(j\), the explicit definition of which can be found in the work of He et al. (2021). Different from previous research efforts, we do not use the attention from content to the 1D position, as the layout order information can be more representative than the sequential order for the textual information in the document.

**Modality Separation in Decoder** Some preliminary experiments suggest that our universal decoder could underperform on certain downstream tasks, as the pre-training requires the decoder to capture all the information in different modalities (i.e., text, image, and layout). Motivated by such an observation, we adopt the mixture-of-modality-experts (MoE) (Wang et al., 2021) strategy in our decoder. We use independent feed-forward networks (FFNs) for different pre-training or downstream tasks. The activation of specific experts within the decoder is contingent upon the nature of the pre-training or fine-tuning task at hand. Specifically, the textual expert is used for natural language tasks, the visual expert for image token prediction tasks, and the layout expert for coordinate prediction tasks.

### Pre-training Tasks

In this work, we implement three pre-training tasks utilizing a unified modality masking strategy across three distinct modalities: Text Infilling (TI), Masked Image Token Prediction (ITP), and Masked Coordinate Prediction (CP). Given that an encoder-decoder architecture is employed, all tasks are formulated within a unified generation framework, with the utilization of a shared vocabulary.

**Text Infilling** Text infilling (Raffel et al., 2020; Lewis et al., 2020) is a typical denoising task for pre-training the encoder-decoder architecture. In this work, spans within the text are randomly sampled following the Poisson distribution (\(\lambda=3\)). The masked text is replaced with _<mask>_ tokens, and the corresponding layout is replaced with a padding layout. Approximately 30% of tokens are masked in each document, and we use a special token _<sep>_ to join the masked spans as the reconstruction targets. The objective during pre-training is to recover these _<mask>_ spans, similar to the approach employed in T5 (Raffel et al., 2020).
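To make the text-infilling corruption concrete, the following is a minimal sketch of how Poisson-distributed span masking and target construction could be implemented; the function name, token strings, and boundary handling are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

MASK, SEP = "<mask>", "<sep>"

def text_infilling(tokens, mask_ratio=0.3, lam=3, seed=0):
    """Corrupt a token sequence by masking random spans (~mask_ratio of tokens),
    with span lengths drawn from a Poisson(lam) distribution, and build the
    <sep>-joined reconstruction target described above."""
    rng = np.random.default_rng(seed)
    tokens = list(tokens)
    n = len(tokens)
    budget = int(mask_ratio * n)
    covered, spans = set(), []
    while len(covered) < budget:
        length = max(1, int(rng.poisson(lam)))
        start = int(rng.integers(0, n))
        span = [i for i in range(start, min(start + length, n)) if i not in covered]
        if span:
            covered.update(span)
            spans.append(span)

    corrupted = []
    for i, tok in enumerate(tokens):
        if i in covered:
            # collapse each contiguous run of masked positions into a single <mask>
            if i == 0 or (i - 1) not in covered:
                corrupted.append(MASK)
        else:
            corrupted.append(tok)

    target = []
    for span in sorted(spans, key=lambda s: s[0]):
        target.extend(tokens[i] for i in span)
        target.append(SEP)
    return corrupted, target

corrupted, target = text_infilling("total amount due 42 . 50 usd payable on receipt".split())
print(corrupted)
print(target)
```

In the full model, the layout coordinates of the masked positions are likewise replaced with a padding layout, as described above.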
Masked Image Token PredictionSimilar to text infilling, we build an image denoising method to model image patches inspired by previous studies (Bao et al., 2021; He et al., 2022). Unlike the block-wise masking mechanism used in previous document understanding studies (Huang et al., 2022), a random masking strategy was adopted, which has been shown to be more robust in conventional visual tasks (Xie et al., 2022). Specifically, 50% of the image patches produced by the image backbone are replaced with a learnable mask embedding. The pre-training objective is to recover the quantized image token sequence \(\mathbf{y}=[\mathbf{z_{0}},\mathbf{z_{1}},...\mathbf{z_{h\times w}}]\) of the document image, where each token \(\mathbf{z_{i}}\) is represented by a unique visual token label in the shared vocabulary. Masked Coordinate PredictionIn order to assist the model in comprehending the diverse and complex layouts present within documents, we devise a task named masked coordinate prediction. To implement this task, a proportion of spans are selected via Poisson distribution (\(\lambda=3\)) and their layout coordinates are masked. The span length of the masked segments constitutes 20% of the total number of tokens in the document. The objective of this task is to predict the coordinates of the obscured spans, thus providing the model with a means of understanding the layout structure of the document. The resulting sequence comprises a series of coordinates, denoted by \([x_{1}\), \(y_{1}\), \(x_{2}\), \(y_{2}]\), separated by a special token _<sep>_. Unified LossThe training objective for all tasks are unified to minimize the cross-entropy loss: \[\mathcal{L}_{ce}=-\sum_{i=1}^{T}\sum_{j=1}^{N_{i}}\omega_{i}\log P(\hat{y}| \mathbf{x^{\prime}_{i}},y_{1:j-1}) \tag{3}\] where \(T\) is the number of tasks; \(N_{i}\) is the maximum sequence length for task \(i\); \(\omega_{i}\) is the task weight; \(\mathbf{x^{\prime}_{i}}\) is the masked input for task \(i\); \(\hat{y}\) and \(y\) are targets and decoder inputs, respectively. ## 3 Experiments ### Settings and Results In order to ascertain the efficacy of our proposed model, a thorough evaluation was conducted on four distinct downstream tasks, namely visual question answering, document layout analysis, form understanding, and document classification. Experiments were carried out utilizing publicly available benchmarks, including DocVQA Mathew et al. (2021), Publaynet Zhong et al. (2019), CORD Park et al. (2019), and RVL-CDIP Harley et al. (2015). The subsequent sections provide descriptions of the pre-training settings, methods employed for modeling the downstream tasks, and the results obtained. Information regarding the datasets, hyper-parameters and more experimental details can be found in the Appendix SSA. Pre-trainingTo ensure consistency with prior studies, our pre-training is performed using the IIT-CDIP dataset Lewis et al. (2006), which comprises of 11 million scanned documents. Textual and layout information are extracted from these documents using an off-the-shelf OCR engine. The batch size for each task is set independently, as the understanding of documents relies more heavily on textual information. The base model uses a batch size of \(640\) for text infilling, \(384\) for masked image token prediction, and \(112\) for masked coordinate prediction. The large model employs half the batch size of the base model to save computations. 
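As a concrete illustration of the unified objective in Equation 3, the sketch below shows how the three masking tasks can share a single sequence-to-sequence cross-entropy loss, differing only in their corrupted inputs, targets, and weights. The model interface, the expert-routing argument, and the weight values are assumptions made for illustration, not the authors' code; in practice the balance between tasks is also controlled through the per-task batch sizes just described.

```python
import torch
import torch.nn.functional as F

# Illustrative task weights; the omega_i of Equation 3 are not specified here.
TASK_WEIGHTS = {"text_infilling": 1.0, "image_token_prediction": 1.0, "coordinate_prediction": 1.0}

def unified_loss(model, batches):
    """Weighted sum of sequence cross-entropy losses over the three pre-training tasks.

    `batches` maps a task name to (encoder_inputs, decoder_inputs, targets), where the
    targets are ids in the shared vocabulary (subwords, VQ-VAE visual tokens, or
    discretized coordinates) and padded target positions are set to -100.
    """
    total = 0.0
    for task, (enc_in, dec_in, targets) in batches.items():
        # assumed interface: the decoder routes to the task-specific FFN expert
        logits = model(enc_in, dec_in, expert=task)          # (batch, seq_len, |V|)
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),             # (batch * seq_len, |V|)
            targets.reshape(-1),
            ignore_index=-100,                               # skip padded positions
        )
        total = total + TASK_WEIGHTS[task] * loss
    return total
```

Keeping every modality in the same shared vocabulary is what allows a single decoder, with its modality-specific experts, to be trained on all three tasks under this one objective.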
The input images are resized into \(448\times 448\) (\(H\times W\)), and the tokenized textual sequences are truncated if they exceed \(512\) tokens. The image backbone is initialized by a standard ResNet-101 model and the transformer encoder-decoder model is initialized by a pre-trained Bart model Lewis et al. (2020). The parameters of the visual and layout experts are initialized by the textual expert, while the remaining parameters are randomly initialized. Both models are pre-trained for \(150\)k steps with 32 Nvidia-A100 GPUs, and the following experiments are conducted with the pre-trained models. Document Visual Question AnsweringDocument visual question answering is a task that necessitates the model to comprehend document text and images in order to answer specific questions. Instead of modeling the DocVQA task as an extractive question answering, as is commonly done in previous work Xu et al. (2021); Huang et al. (2022); Li et al. (2021), we model this task as an abstractive question answering, which benefits from our sequence-to-sequence modeling. This approach eliminates the need for non-trivial pre-processing to match the span with the annotated gold answers and enhances the OCR error tolerance of our model. In the present study, we fine-tune the model on the DocVQA training set, evaluate on the validation set, and obtain the test results by submitting it to the official website3. It is observed during the experimentation process that task performance is highly correlated with the quality of OCR. To ensure a fair comparison, we obtain the OCR results through Microsoft Read API without any additional pre-processing. Consistent with the pre-training procedure, we include an instruction of "_What is the answer to the question?_" preceding the question inputs. The final textual input is formed by concatenating the instruction, question, and document text. The model is fine-tuned for 10 epochs with a batch size of 48. Footnote 3: [https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=1](https://rrc.cvc.uab.es/?ch=17&com=evaluation&task=1) We report the ANLS (Average Normalized Levenshtein Similarity) score calculated by the official rated system in Table 1. Without any post-processing, our result outperforms the previous state-of-the-art by \(1.1\) on ANLS for the large model and \(0.1\) for the base model. These results demonstrate the effectiveness of the GenDoc model in solving document question-answering tasks. Document Layout AnalysisDocument layout analysis is a task that involves object detection in document images. PubLayNet is a large dataset that provides abundant annotations of bounding boxes and five document layout categories (i.e. text, list, table, figure, and title). We train the model on its training set and report the results on the validation set, as per common practice Huang et al. (2022). Previous approaches use pre-trained models as the feature backbone and add additional modules such as Faster R-CNN Ren et al. (2015), Cascade R-CNN Cai and Vasconcelos (2018), and FPN Lin et al. (2017) for bounding box regression. In contrast, our model predicts bounding boxes using a language-modeling approach, similar to the coordinate prediction pre-training task. Specifically, the bounding box and its category are formatted as a sequence of discrete tokens, such as \([x_{1},y_{1},x_{2},y_{2},class]\), and the objective is to maximize the likelihood of the sequence given a document image. To maintain consistency with previous works Huang et al. (2022); Li et al. 
(2022), we only use images as input for the model. We use a batch size of \(64\) during fine-tuning and train the model for 50 epochs. We report the mean average precision (mAP) in Table 1. The result shows our unified model only has a narrow performance gap compared to the strong baseline DiT, which utilizes an extra detection network, Cascade R-CNN. Furthermore, our model down-samples the image to a shorter sequence, as opposed to the patch size of 16 utilized by DiT and LayoutLMv3. This leads to a reduction in sequence length by \(75\%\), resulting in a significant reduction in computation and training costs. Form UnderstandingForm understanding is a crucial task in the field of document and form information extraction. Commonly used benchmark datasets for this task include FUNSD Guillaume Jaume (2019) and CORD. In this work, the Consolidated Receipt Dataset (CORD) is selected as the primary dataset for analysis due to its superior robustness in terms of sample size. Specifically, the CORD comprises 1.1K receeits and 30 semantic labels (e.g., menu name, price, total price, etc.), whereas the Form Understanding (FUNSD) dataset only includes 200 forms. In line with prior research, we approach this task as a sequence labeling problem using BIO tags. Specifically, the GenDoc encoder is fed the instruction, text, layout, and image in the same manner as the Text Infilling task. The decoder input consists of the instruction, text, and position embedding sequences, which will be processed by the textual expert. With officially provided images and OCR annotations, the model is fine-tuned for 50 epochs with a batch size of 4. In alignment with previous studies, such as Appalaraju et al. (2021); Powalski et al. (2021); Huang et al. (2022), the entity level F1 score for the test set is presented in Table 1 for comparative analysis. The results demonstrate that our proposed model exhibits a slight improvement in performance compared to the state-of-the-art method. Document ClassificationDocument classification involves predicting the appropriate category for a given document. To evaluate the performance of our model, we utilize the RVL-CDIP dataset. Our model's input is consistent with the pre-training process, which involves providing a task-specific instruction: "_What is the category of the document?_" along with the text, image, and layout information obtained from an off-the-shelf OCR. In line with the approach adopted in Lewis et al. (2020) where a seq2seq model is utilized for classification, we feed the document text into the decoder and perform classification on the hidden state of the <eos> token. The model is fine-tuned for 10 epochs with a batch size of 96. We report the accuracy of the classification on the test data in table 1. The result shows our model has a desirable performance in capturing the coarse information from text and images of a document. ### Ablation Study In order to maintain a balance between computational efficiency and experimental validity, all ablation experiments are executed on a single node utilizing 8 A100 GPUs. To ensure comparability across all pre-training tasks, the ratio of batch size is kept constant, as previously discussed in \begin{table} \begin{tabular}{l l c c c} \hline \hline & **DocVQA** & **Publaynet** & **CORD** & **RVL-CDIP** \\ & **ANLS\(\uparrow\)** & **MAP\(\uparrow\)** & **F1\(\uparrow\)** & **Accuracy\(\uparrow\)** \\ \hline \multirow{8}{*}{Base} & LayoutLM Xu et al. (2020) & - & - & - & 94.42 \\ & LayoutLMv2 Xu et al. 
(2021) & 78.08 & - & 94.95 & 95.25 \\ & LILT Wang et al. (2022) & - & - & 96.07 & 95.68 \\ Model & LayoutLMv3 Huang et al. (2022) & 78.76 & **95.1** & 96.56 & 95.44 \\ & DocFormer Appalaraju et al. (2021) & - & - & 96.33 & **96.17** \\ & DIT Li et al. (2022) & - & 94.5 & - & - \\ & **GenDoc (Ours)** & **78.83** & 94.4 & **96.59** & 93.80 \\ \hline \hline \multirow{8}{*}{Large} & LayoutLM Xu et al. (2020) & - & - & - & 91.90 \\ & LayoutLMv2 Xu et al. (2021) & 83.48 & - & 96.01 & 95.64 \\ \cline{1-1} & StructuralLM Li et al. (2021) & 83.94\(\dagger\) & - & - & 96.08 \\ \cline{1-1} & LayoutLMv3 Huang et al. (2022) & 83.37 & - & **97.46** & **96.56** \\ \cline{1-1} & DocFormer Appalaraju et al. (2021) & - & - & 96.99 & 95.50 \\ \cline{1-1} & TILT Powalski et al. (2021) & 87.05\(\dagger\) & - & 96.33 & 95.52 \\ \cline{1-1} & **GenDoc (Ours)** & **84.49** & - & 96.97 & 94.50 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison with existing approaches on four document understanding tasks.\(\dagger\) indicates they combine the training and validation set for fine-tuning. the methodology. Additionally, each model is pre-trained for 100k steps under identical conditions to maintain a consistent experimental setup. Pre-trainingThis section aims to investigate the individual contribution of each pre-training task to downstream tasks. As previously discussed, three pre-training tasks are introduced for each modality (text, image, and layout) respectively and the results of unified training are reported. In order to gain a deeper understanding of the effect of each pre-training task, additional experiments are designed by removing each pre-training task individually and evaluating the model's performance on downstream tasks. The results of these experiments are presented in Table 2, where two downstream tasks are selected as representative examples of text and image-layout modality tasks. As observed from the results presented in Table 2, the exclusion of text pre-training (Text Infilling, TI) results in a significant decline in performance for the DocVQA task (-\(13.28\) ANLS). Conversely, a slight improvement in performance is observed for the PubLayNet task (+\(0.52\) mAP), which is consistent with the findings reported in Wang et al. (2022). Additionally, the removal of layout pre-training task (CP) results in only a moderate decrease in performance for the DocVQA task, indicating that DocVQA is a text-centric task. The results for the object detection task, PubLaynet, reveal that the masked Coordinate Prediction task (CP) is crucial, which is consistent with the design consideration. This is due to the fact that the CP task not only trains the layout modality but also functions as an unsupervised object detection task, thus the removal of this task results in a substantial decrease in performance (-\(2.16\) mAP). It should be noted that, upon the removal of the image token prediction task (ITP) alone, the pre-training becomes unstable and convergence becomes difficult to achieve. This phenomenon was consistently observed across multiple replication of this experiment. Through analysis of the training process, we conclude that this instability is likely due to the absence of a link between text and layout modality, which further highlights the importance of the masked Image Token Prediction (ITP) task. ModelIn our study, we conducted experiments to investigate the effects of specialized designs implemented in the encoder and decoder. 
Specifically, we incorporate disentangled attention into the encoder to enhance the representation of 2D positional information, as the order of text in a document is not inherently sequential. Through the removal of the relative disentangled attention component, we observed a significant decline in performance for both tasks (-\(2.98\) ANLS, -\(1.25\) mAP), thereby demonstrating the effectiveness of this design. Additionally, we examined the impact of incorporating MoE, which is intended to mitigate confusion caused by the different modalities in our design. Different from previous work, the results are unexpectedly not significant, with only a slight increase in performance observed upon the inclusion of MoE (+\(0.28\) ANLS, +\(0.71\) mAP). In our analysis, we think this phenomenon is a result of the level of training our downstream tasks received. Unlike the few-shot paradigm utilized in previous studies Wang et al. (2022), we use a fully-supervised fine-tuning approach with the full dataset, which likely contributes to the observed \begin{table} \begin{tabular}{c c c c c c c} \hline \hline **Task** & GenDoc & _w/o_ TI & _w/o_ CP & _w/o_ ITP\({}^{\dagger}\) & _w/o_ MoE & _w/o_ Rel\({}^{*}\) \\ \hline DocVQA: ANLS & 78.15 & 64.87 & 77.98 & - & 77.87 & 75.17 \\ PubLaynet: mAP & 92.31 & 92.83 & 90.15 & - & 91.60 & 91.06 \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of GenDoc: Using the pre-trained model, we fine-tuned the model on the DocVQA and the PubLayNet dataset for 10 epochs, respectively. \(\dagger\) Deleting image token prediction task alone would cause Nan loss problem during pre-training. \(*\) Rel means the disentangled relative attention scheme. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Model \& OCR** & LayoutLMv3 & LayoutLMv3 & GenDoc & GenDoc \\ & Original OCR & MS OCR & Original OCR & MS OCR \\ \hline DocVQA: ANLS & 68.5 & 73.2 & 73.5 & 77.5 \\ \hline \hline \end{tabular} \end{table} Table 3: Experimental results on the effectiveness of OCR. All experiments are conducted using the train dataset of DocVQA and report results on the validation dataset. Original OCR refers to the officially provided OCR results from DocVQA, and MS OCR refers to the Microsoft Read API service. performance proximity. OcrIn this section, we evaluate the impact of OCR effects on performance. In various tasks related to document understanding, textual input plays a crucial role in determining performance. However, many previous studies have utilized internal OCR techniques and have not made the parsed data available. Therefore, we conduct experiments to assess the significance of OCR quality. Specifically, we test the pre-trained model using different OCR results on the DocVQA task, which heavily relies on textual inputs. The results of these experiments are presented in Table 3. We replicated LayoutLMv3 using their open-sourced pre-trained checkpoint and code, and trained LayoutLMv3 model for 40 epochs. Additionally, we trained our GenDoc model using the pre-trained checkpoint for 10 epochs for each test. The results of our experimentation indicate that OCR quality has a substantial impact on performance. Furthermore, when utilizing the same quality of textual inputs, our model exhibits superior performance in comparison to previous state-of-the-art work. This further supports the assertion that our model is less susceptible to the negative effects of imperfect OCR. 
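As a rough illustration of the modality-expert design ablated above (one feed-forward expert per modality, selected by the current task), the following PyTorch-style sketch shows hard routing of decoder hidden states to a text, visual, or layout expert. The class, dimensions, and routing interface are our own assumptions for illustration and are not taken from the paper; shared self- and cross-attention layers are omitted.

```python
import torch
import torch.nn as nn

class ModalityExpertFFN(nn.Module):
    """Decoder feed-forward block with one expert per modality (text / visual / layout)."""

    def __init__(self, d_model: int = 768, d_ff: int = 3072):
        super().__init__()
        self.experts = nn.ModuleDict({
            task: nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for task in ("text", "visual", "layout")
        })
        self.norm = nn.LayerNorm(d_model)

    def forward(self, hidden: torch.Tensor, task: str) -> torch.Tensor:
        # Route the whole sequence to the expert matching the current task,
        # e.g. "text" for text infilling / QA, "visual" for image token
        # prediction, "layout" for coordinate prediction.
        return self.norm(hidden + self.experts[task](hidden))

# Usage sketch: out = ModalityExpertFFN()(torch.randn(2, 512, 768), task="layout")
```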
## 4 Related Work Our work is mostly related to recent work on multi-modal transformers for visual document understanding. Xu et al. (2020) proposed the LayoutLM model to first combine the textual and layout information in a pre-trained transformer model. Inspired by BERT Devlin et al. (2019), they designed the 2-D position embedding to denote the position of a token within a document. They achieved much better performance compared with pure text-based approaches (e.g., BERT and Roberta Liu et al. (2019)) on the downstream tasks (i.e., sequence labeling and document classification). Following research efforts can be broadly categorized by the architecture and the pre-training objective design. Li et al. (2021) proposed to use the cell-level positions rather than word-level positions (i.e., LayoutLM) to better understand the layout in documents. Various spatial-aware attention mechanisms Xu et al. (2021); Powalski et al. (2021); Appalaraju et al. (2021); Wang et al. (2022) are proposed to capture layout information. Pre-training strategy focuses on designing a pre-training objective to enable the model rely on cross-modality interaction. Specifically, pre-training losses such as text-image alignment and text-image matching are essential to strengthen the tie between the visual and textual information Xu et al. (2021); Cho et al. (2021); Huang et al. (2022). In addition, the choice of image backbone also makes a difference as it would affect the ability to understand the document image and the pre-training strategy for the visual modality. While most literature adopts different variants of ResNet He et al. (2016) to for down sampling and obtain a flatten sequence to the transformer, LayoutLMv3 Huang et al. (2022) achieves state-of-the-art performance by using image patches. Such a design also allows them to perform masked image token prediction. However, the input sequence length could be too long to include all the information. Our approach mitigate the above limitation by combining the usage of ResNet and sequence-to-sequence architecture. ## 5 Conclusions and Future Work The GenDoc model is an integrated sequence-to-sequence architecture for document understanding. To achieve this, We design text infilling, masked image token prediction, and masked coordinate prediction pre-training objectives for the textual, visual, and layout modality, respectively. The sequence-to-sequence model design allows for fine-tuning and inference on all downstream tasks utilizing the same structure. A comprehensive examination of the proposed approach was conducted through a series of experiments on four standard document understanding tasks. The results of these experiments demonstrate the effectiveness of the proposed approach. Additionally, it was found that the proposed approach yields more robust performance in the presence of imperfect OCR when compared to encoder-only models. Our future work includes extending our model to the scenario where OCR is not required, increasing both model size and data size to enable zero-shot prediction capabilities, and integrating the vision-language models across various domains such as visual document Mathew et al. (2021), Infographics Mathew et al. (2022), webpage image Tanaka et al. (2021); Chen et al. (2021), and visual scene understanding Lin et al. (2014).
2305.18014
Radiotherapy Dosimetry: A Review on Open-Source Optimizer
Radiotherapy dosimetry plays a crucial role in optimizing treatment plans for cancer patients. In this study, we investigate the performance of a dozen standard state-of-the-art open-source optimizers for radiotherapy dosimetry. Our evaluation includes the use of TGG119 benchmark cases as well as one real case obtained from the Institute du Cancer de Montpellier (ICM). Among the tested optimizers, Newton CG demonstrates the fastest convergence in terms of the number of iterations. However, when considering the computation time per iteration, LBFGS emerges as the most efficient optimizer. These findings shed light on the performance of open-source optimizers for radiotherapy dosimetry, aiding practitioners in selecting suitable optimization tools for efficient treatment planning.
Paul Dubois
2023-05-29T11:12:21Z
http://arxiv.org/abs/2305.18014v1
# Radiotherapy Dosimetry: A Review on Open-Source Optimizer ###### Abstract Radiotherapy dosimetry plays a crucial role in optimizing treatment plans for cancer patients. In this study, we investigate the performance of a dozen standard state-of-the-art open-source optimizers for radiotherapy dosimetry. Our evaluation includes the use of TGG119 benchmark cases as well as one real case obtained from the Institute du Cancer de Montpellier (ICM). Among the tested optimizers, Newton CG demonstrates the fastest convergence in terms of the number of iterations. However, when considering the computation time per iteration, LBFGS emerges as the most efficient optimizer. These findings shed light on the performance of open-source optimizers for radiotherapy dosimetry, aiding practitioners in selecting suitable optimization tools for efficient treatment planning. ## 1 Introduction Radiotherapy, a widely utilized intervention for cancer treatment, employs ionizing radiation to eliminate malignant cells. Intensity-modulated radiation therapy (IMRT) has emerged as a notable technique within radiotherapy, aiming to deliver high radiation doses to tumors while minimizing exposure to healthy surrounding tissues [4]. Traditional IMRT strategies typically employ a set number of beams, often 5, 7, or 9, originating from various angles around the patient, commonly distributed evenly [3]. Each beam's intensity is modulated to optimize the delivery of radiation doses to the tumor while reducing exposure to healthy tissues. This approach surpasses the effectiveness of the 3D-conformal radiotherapy (3D-CRT) technique [10][16][21]. To facilitate precise and efficient radiation delivery, a computer-controlled device called the multi-leaf collimator (MLC) is utilized to shape the radiation beam according to the contours of the tumor. The effectiveness of a radiotherapy treatment plan relies on the optimization procedure, which involves a series of steps aimed at ensuring the optimal delivery of radiation in accordance with the prescribed guidelines of medical practitioners. Typically, computer software is employed to facilitate the optimization process, taking into account various factors such as patient anatomy, the size and location of the tumor and organs, and the radiation objectives defined by medical professionals. While there has been comparison between commercial software[19][13], the aim of this paper is to investigate the best optimizer for this task among the open source optimizers. Pre-dose-optimizationThe initial stage of the optimization process entails the creation of a virtual representation of the patient's anatomical structure using advanced medical imaging modalities, such as computed tomography (CT) or magnetic resonance imaging (MRI) scans. This model is subsequently utilized to accurately determine the size and location of the tumor, as well as to delineate the surrounding healthy tissues that necessitate protection from radiation exposure. Following this, the radiation dose required for effective treatment is established, typically based on dose-volume objectives defined by physicians (e.g., ensuring that 95% of the planning target volume receives a minimum dose of 75 Gy). Determining the appropriate dose takes into consideration factors such as tumor characteristics, location, size, as well as the patient's medical history and overall health status. These essential steps in the optimization process are carried out by medical professionals with expertise in radiotherapy treatment planning. 
Radiotherapy dosesThe subsequent step entails the computation of the radiation dose distribution within the patient's volumetric anatomy. This is achieved by simulating a particular configuration of the multi-leaf collimator (MLC) on the patient's body, utilizing the available medical imaging data. The resulting computed dose represents a mapping from the three-dimensional volume of the patient's anatomy to a scalar value measured in Grays (Gy), which denotes the absorbed radiation energy. In practical implementation, a discrete representation of the dose distribution is utilized, wherein the dose is calculated for each individual voxel comprising the patient's anatomical structure. Dose-Volume HistogramsMedical professionals have meticulously identified and delineated the pertinent anatomical structures within the patient's anatomy. This allows the computation of dose-volume histograms (DVHs) for each structure, predicated on a specified dose distribution. The dose-volume objectives are subsequently represented as specific points on the DVH curve, which correspond to the desired minimum or maximum dose constraints. These objectives delineate the desired thresholds that should be upheld, with points on the DVH curve either located above (for minimum dose constraints) or below (for maximum dose constraints) the prescribed thresholds. Dose EvaluationPhysicians employ multiple criteria to assess the quality of a radiation dose administered during treatment. Initially, they scrutinize the three-dimensional distribution of the dose across the patient's anatomy, focusing on the spatial allocation among different anatomical structures, as well as identifying the presence, number, and locations of regions with excessive radiation (referred to as "hot spots"). Subsequently, physicians conduct a thorough analysis of the dose-volume histograms (DVHs) to evaluate the degree of compliance with predefined DVH objectives. This crucial evaluation step aims to safeguard the adjacent healthy tissues from unnecessary radiation exposure. By optimizing the treatment plan and meticulously assessing the quality of the dose distribution, physicians strive to ensure the attainment of the most favorable outcome for the patient. ## 2 Methods To ensure the precision and effectiveness of radiation therapy, a robust dose optimization process is essential.
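For readers unfamiliar with DVHs, the following minimal NumPy sketch computes a cumulative dose-volume histogram from a per-voxel dose array and a binary structure mask. The function and argument names are ours, and the binning is an arbitrary illustrative choice rather than anything prescribed in this review.

```python
import numpy as np

def cumulative_dvh(dose, mask, max_dose=None, n_bins=200):
    """Cumulative dose-volume histogram for one structure.

    dose: array of per-voxel doses in Gy (any shape).
    mask: boolean array of the same shape selecting the structure's voxels.
    Returns (dose_axis, volume_fraction), where volume_fraction[k] is the
    fraction of the structure receiving at least dose_axis[k] Gy.
    """
    d = np.asarray(dose)[np.asarray(mask, dtype=bool)]
    if max_dose is None:
        max_dose = float(d.max())
    dose_axis = np.linspace(0.0, max_dose, n_bins)
    volume_fraction = np.array([(d >= t).mean() for t in dose_axis])
    return dose_axis, volume_fraction

# Example check of a dose-volume goal such as "95% of the PTV should receive
# at least 70 Gy": the volume_fraction value at 70 Gy should be >= 0.95.
```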
Gradient-based methods, Newtonian algorithms, or quasi-Newtonian algorithms are commonly employed for this purpose. We aim to benchmark state-of-the-art open-source optimization algorithms for the specific task of radiotherapy dosimetry. ### Data In this research endeavor, our focus was to evaluate various open-source optimizers. We used the widely recognized TG-119 [14] cases as a benchmark for evaluating radiation therapy plan optimization. The TG-119 dataset provides specific dose goals, which we incorporated into our proposed cost function. We also used one real case of prostate cancer treatment from ICM. For this case, doctors had provided specific dose goals, which we again incorporated into our proposed cost function. The TGG 119 multiple PTVs is a theoretical case, unlikely to happen in real life. However, the three other cases represent a comprehensive set of what dosimetrists could encounter on a daily basis. The simulation of the beams was done using the TheraPanacea dose engine, which uses collapsed cone convolution techniques and is consistent with other simulators available on the market. ### Objective function The cost function is formulated as a weighted sum of multiple objectives, with each objective corresponding to a specific dose goal. The formulation is as follows: \[f(\mathbf{d})=\sum_{o\in\mathcal{O}}w_{o}f_{o}(\mathbf{d})\] where: * \(\mathbf{d}\) represents the dose distribution at the voxel level, and \(\mathbf{d}[s]\) denotes the dose on voxels within the structure \(s\) * \(\mathcal{O}\) denotes the set of objectives corresponding to dose volume goals * \(w_{o}\) signifies the weight assigned to the objective \(o\in\mathcal{O}\) * \(o_{s}\), \(o_{d}\), and \(o_{v}\) refer to the structure, dose, and volume goals of the objective \(o\in\mathcal{O}\) The objective function \(f_{o}(\mathbf{d})\) is computed based on the specific type of dose volume constraint: If \(o\) represents a maximum dose volume constraint1, \(f_{o}(\mathbf{d})\) is calculated as \(\sum_{d\in\mathbf{d}[o_{s}]}(d-o_{d})_{+}^{2}\); if \(o\) represents a minimum dose volume constraint2, then \(f_{o}(\mathbf{d})\) is calculated as \(\sum_{d\in\mathbf{d}[o_{s}]}(o_{d}-d)_{+}^{2}\). The formulation involves a squared over/under-dose penalty function. Footnote 1: e.g.: top 20% of the volume should receive at most 30 Gy Footnote 2: e.g.: 95% of the volume should receive at least 70 Gy In addition to the above, we introduced a regularization term that penalizes variations in pixel values between neighboring regions, also employing a squared penalty. The optimization process involves finding the optimal pixel values (\(\mathbf{b}\)) by solving \(\mathbf{d}=\mathbf{L}\mathbf{b}\), where \(\mathbf{L}\) is a precomputed dose-influence matrix mapping bixels to voxels. Notably, since negative energy rays are physically infeasible, we ensured that each pixel value is non-negative (\(b\geq 0\quad\forall b\in\mathbf{b}\)). To achieve this, we computed \(\mathbf{d}=\mathbf{L}|\mathbf{b}|\), where \(|\mathbf{b}|\) denotes the element-wise absolute value of \(\mathbf{b}\). By construction, the objective function is convex. Consequently, minimizing the objective function with a given set of weights should invariably converge to the same radiotherapy plan. To generate different treatment doses for the same patient case, dosimetrists can play with the weights of each sub-objective function. This is outside the scope of this small review article, so we decided to just set the weights of all constraints equal to one.
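A minimal NumPy sketch of this weighted quadratic objective is given below. It follows the per-structure sums exactly as written above (the volume part \(o_{v}\) of each goal is not used, mirroring the stated formulas), maps bixels to voxels via \(\mathbf{d}=\mathbf{L}|\mathbf{b}|\), and adds a simple squared-difference smoothness penalty between adjacent bixels; the dictionary-based interface and the exact neighbourhood used for the regularizer are our own assumptions, not the author's code.

```python
import numpy as np

def dose_objective(b, L, objectives, reg_weight=0.0):
    """Weighted quadratic over/under-dose objective, with d = L|b|.

    b: bixel intensities (flattened fluence map).
    L: precomputed dose-influence matrix (n_voxels x n_bixels).
    objectives: list of dicts with keys
        'voxels' (indices of the structure), 'dose' (threshold in Gy),
        'kind' ('max' or 'min'), and 'weight'.
    """
    d = L @ np.abs(b)                       # non-negativity enforced via |b|
    total = 0.0
    for o in objectives:
        ds = d[o["voxels"]]
        if o["kind"] == "max":
            viol = np.clip(ds - o["dose"], 0.0, None)   # overdose (d - o_d)_+
        else:
            viol = np.clip(o["dose"] - ds, 0.0, None)   # underdose (o_d - d)_+
        total += o["weight"] * np.sum(viol ** 2)
    # Illustrative smoothness penalty between neighbouring bixel values.
    total += reg_weight * np.sum(np.diff(np.abs(b)) ** 2)
    return total
```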
### Open-source Optimizers We tried to have a comprehensive test of available open-source optimizers, here is a short description of the ones tested: (Stochastic) Gradient DescentIs an optimization algorithm that iteratively updates the model parameters in the direction of the negative gradient of the objective function. In our case, it is not stochastic, since it calculates the gradient using the current solution3[8]. Footnote 3: Our objective function has all its inputs as parameters, so there is no notion of stochasticity. Conjugate GradientIs an iterative optimization algorithm commonly used to solve systems of linear equations or quadratic optimization problems. It iteratively computes conjugate directions and updates the solution along these directions, aiming to minimize the objective function [6]. Conjugate Gradient is often applied in scenarios where the Hessian matrix is unavailable or expensive to compute. NewtonNewton's method is an iterative optimization algorithm that uses the second-order derivative (Hessian matrix) to find the minimum of a function. It updates the current estimate by taking into account both the first-order derivative (gradient) and the second-order derivative [15]. Slsqp(Sequential Least Squares Programming) is a sequential quadratic programming algorithm used for constrained optimization. It iteratively solves a sequence of quadratic programming subproblems to find the optimal solution subject to constraints [2]. RMSprop(Root Mean Square Propagation) is an optimization algorithm that addresses the problem of diminishing learning rates in traditional gradient descent methods. It divides the learning rate by the root mean square of the past gradients, which helps to stabilize and speed up convergence [7]. BFGS-based Pure BFGS(Broyden-Fletcher-Goldfarb-Shanno) is a quasi-Newton method that approximates the Hessian matrix using updates based on gradient information. It performs a line search to determine the step size that minimizes the objective function along the search direction [5]. L-BFGS(Limited-memory BFGS) is a variation of BFGS that uses a limited-memory approach to approximate the Hessian matrix. It stores a limited number of past gradient and parameter values to compute an approximate inverse Hessian matrix efficiently [11]. Adam-based Pure Adam(Adaptive Moment Estimation) is an optimization algorithm that combines ideas from both adaptive learning rates and momentum methods. It computes adaptive learning rates for each parameter based on estimates of the first and second moments of the gradients [9].
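Several of the optimizers above are exposed through SciPy's `scipy.optimize.minimize` interface, so a comparison of the kind reported in this review could be sketched as below. The method list, iteration cap, and timing logic are illustrative assumptions; gradient descent, RMSprop, and Adam (typically taken from a deep-learning library such as PyTorch) are omitted from this sketch.

```python
import time
from scipy.optimize import minimize

def benchmark_optimizers(objective, grad, b0,
                         methods=("CG", "Newton-CG", "BFGS", "L-BFGS-B", "SLSQP")):
    """Time a few SciPy optimizers on the same dose objective (illustrative only).

    objective, grad: callables returning the cost and its gradient w.r.t. the bixels.
    b0: initial bixel vector.
    """
    results = {}
    for method in methods:
        t0 = time.time()
        res = minimize(objective, b0, jac=grad, method=method,
                       options={"maxiter": 200})
        results[method] = {"final_cost": res.fun,
                           "iterations": res.nit,
                           "seconds": time.time() - t0}
    return results
```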
## 3 Results Figure 1: TGG 119: Multiple PTVs. Figure 2: TGG 119: Head and Neck. Figure 3: TGG 119: Prostate. Figure 4: ICM: (Typical) Prostate. TGG 119 fake head & neck (fig. 2) and TGG 119 fake prostate (fig. 3) have similar sizes. Notably, there is an observable trend indicating that as the problem size increases, LBFGS outperforms both RMSprop and Adam optimization algorithms. LBFGS vs BFGSIt would be expected that BFGS performs better than LBFGS in terms of iterations, but not in terms of time (since LBFGS is a fast approximation of the BFGS technique). However, we observe that LBFGS outperforms BFGS even on the iterations-wise graph. This suggests that the limited-memory approximations made are biased towards suitable directions in this type of problem. ## 4 Discussion If it were possible to make Newton's method faster, then we would advise using Newton's optimization algorithm. However, to the best of our knowledge, computing the Hessian remains slow, and not only in our implementation. Hence, we advise using the LBFGS algorithm for the problem of dose optimization in radiotherapy; it appears to be the fastest to converge, and it converged steadily on the four cases tested.
2306.01015
How to Estimate Model Transferability of Pre-Trained Speech Models?
In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) for fine-tuning target tasks. We leverage upon two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates using the extracted representations. Our framework efficiently computes transferability scores without actual fine-tuning of candidate models or layers by making a temporal independent hypothesis. We evaluate some popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and low $p$-value between our estimation framework and fine-tuning ground truth. Our proposed transferability framework requires less computational time and resources, making it a resource-saving and time-efficient approach for tuning speech foundation models.
Zih-Ching Chen, Chao-Han Huck Yang, Bo Li, Yu Zhang, Nanxin Chen, Shuo-Yiin Chang, Rohit Prabhavalkar, Hung-yi Lee, Tara N. Sainath
2023-06-01T04:52:26Z
http://arxiv.org/abs/2306.01015v3
# How to Estimate Model Transferability of Pre-Trained Speech Models? ###### Abstract In this work, we introduce a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs) for fine-tuning target tasks. We leverage upon two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates using the extracted representations. Our framework efficiently computes transferability scores without actual fine-tuning of candidate models or layers by making a temporal independent hypothesis. We evaluate some popular supervised speech models (e.g., Conformer RNN-Transducer) and self-supervised speech models (e.g., HuBERT) in cross-layer and cross-model settings using public data. Experimental results show a high Spearman's rank correlation and low \(p\)-value between our estimation framework and fine-tuning ground truth. Our proposed transferability framework requires less computational time and resources, making it a resource-saving and time-efficient approach for tuning speech foundation models. Zih-Ching Chen\({}^{1}\), Chao-Han Huck Yang\({}^{2^{*}}\),\({}^{3}\), Bo Li\({}^{2}\), Yu Zhang\({}^{2}\) Nanxin Chen\({}^{2}\), Shou-Yiin Chang\({}^{2}\), Rohit Prabhavalkar\({}^{2}\), Hung-yi Lee\({}^{1}\), Tara N. Sainath\({}^{2}\)\({}^{1}\)National Taiwan University, Taiwan \({}^{2}\)Google, USA \({}^{3}\)Georgia Tech, USA {r09942176, hungyillee}@ntu.edu.tw; [email protected]; {boboli,ngyuzh}@google.com **Index Terms**: Pre-trained speech models, transfer learning, model transferability, and foundation speech models ## 1 Introduction In recent years, large-scale pre-trained neural networks, also known as Foundation Models [1, 2, 3, 4], have demonstrated numerous benefits in various fields. One of these benefits includes the application of supervised learning [5] or self-supervised learning (SSL) models [6, 7, 8] trained on few-shot adaptation [9] or continuous learning [10] from pre-training. FMs have shown strong generalization and zero-shot learning abilities [11, 12] by learning representations, making them a popular choice for speech processing tasks [2, 3, 13]. These representations can then be utilized by downstream models for specific task-related feature extraction [14]. However, fine-tuning separate FMs for multiple downstream tasks can be computationally expensive and resource-intensive due to the size of the FMs. Even in partial model tuning settings, determining where and how to insert parameter-efficient learning modules such as residual adapters [15, 16] and neural reprogramming [17] relies heavily on hand-crafted expertise. In this work, we aim to address the aforementioned challenges of determining the best FM or candidate layer for tuning by _introducing a model transfer assessment framework for speech processing_ applications. As illustrated in Figure 1, our process consists of four steps: step 1 and step 2 involve collecting target speech data and pre-trained speech models (PSMs), step 3 involves using frozen PSMs to extract features, and step 4 involves using score-based assessment methods to determine the best model for the targeted task or the best layer for partial tuning or insertion of parameter-efficient modules. More specifically, our model assessment framework is based on two theoretical approaches: (i) optimal transport with latent space measurement [17] and (ii) maximum evidence [18] with Bayesian probabilistic graphical models (PGMs). 
Since both of these theoretical approaches have not been extensively explored in speech processing, we make a very first attempt to approximately model continuous speech recognition and isolated word recognition as classification problems by establishing a temporal independent hypothesis [19] (TH). In the following paragraphs, we first introduce recent work on model transferability estimation in non-speech processing tasks and then explain how the TIH connects to the two theoretical backbones, even in a simplified and non-autoregressive decoding setting. **Related Pre-Trained Model Transferability Works:** One of the earliest works on pre-trained model transferability estimation in computer vision tasks was the use of negative conditional entropy (NCE) [20]. NCE uses a pre-trained vision model and evaluates conditional entropy between target pseudo labels, which are the assigned label of the source model and real target labels. A subsequent work to NCE is the log expected empirical predictor (LEEP) [21], which modifies NCE by using soft predictions from the source model. LEEP calculates the log-likelihood between the target labels and the predictions to estimate model transferability, which allows the input data to come from arbitrarily different distributions. However, when evaluating the quality of model transferability estimation, both NCE and LEEP have been reported with correlation coefficient \(\leq 0.6\) with high \(p\)-value, as noted in [18]. On the other hand, a recent model transferability assessment solution for vision and language processing tasks is LogME [18]. LogME predicts accuracy on the target task by estimating the marginalized likelihood of labeled target exam Figure 1: A step-by-step illustration of the proposed framework for providing scores for assessing the best pre-trained model and the best layer for transfer learning with speech data. ples, under the premise that a linear classifier is added on top of the pre-trained model. LogME can be applied to both supervised and unsupervised learning tasks for cross-task adaptation, as it considers zero-shot features encoded by pre-trained models. In recent works [22, 23, 24], LogME has been shown to be an appropriate first step for encoder selection in _natural language processing_[22]. Preliminary attempts in [23] also showed high correlation results when using a modified version of LogME for ensemble selection of large language models. Meanwhile, to the best of the authors' knowledge, there are no studies of using the aforementioned log-likelihood methods to analyze speech processing-based tasks from large-scale pre-trained FM(s) to adaptation. While classification scores have been used as evaluation metrics to assess deep learning models [21, 18], they may not fully capture the performance of speech models due to the unique characteristics of speech signals. The sequential nature of speech signals requires modeling time-step dependencies, which cannot be directly assessed by classification metrics. Next, we review some related speech processing methods and empirical studies to highlight the differences between existing methods and the proposed model assessment framework in speech processing. **Pre-trained Speech Model (PSM) Selection Works:** In the realm of neural network-based PSM for various speech application tasks, previous research has explored the possibility of selectively reusing established PSMs by only fine-tuning certain trainable components, such as the acoustic encoder or language decoder [14]. 
For instance, prior research [25] suggests that tuning only selected encoders of the RNN-Transducer [26] closest to the _input layer_ of the PSM results in improved performance compared to full fine-tuning. In contrast, _Pasad et al._[27] discovered that re-initializing the transformer layers closest to the _prediction head_ outperforms initializing all layers from a pre-trained SSL-based PSM. Given that different PSMs feature distinct encoder-decoder architectures that incorporate both acoustic and language information, the selection of an appropriate fine-tuning configuration can be task-dependent and model-specific, requiring heuristic search. In other words, training a model using every possible encoder or decoder has been shown to require extremely large sample complexity and a significant amount of time [28, 22, 29], making it difficult to reproduce for researchers with limited computational resources. A preliminary attempt to estimate pre-trained model transferability was proposed in [17]. This attempt is based on optimal transport and measures the distance between a target domain and its source domain in terms of population risk, as a measure of the difficulty of model transfer. However, their work was focused on _selecting_ PSMs for _cross-modal_ adaptation to classify sensor data, and its effectiveness in speech processing tasks has not been studied. In this work, we also advance the use of optimal transport [17, 30] for estimating PSM transferability and provide some initial explorations for speech processing, based on a proposed temporal independent condition discussed in Section 2.1. The recent advent of FMs and their breakthroughs in speech processing tasks [31, 7, 32, 8, 2] has created a growing need for new transferability estimation techniques evaluating the strengths of FMs. However, speech tasks have yet to be extensively investigated with transferability estimation techniques. In this work, we aim to answer the question of how well we can estimate the transferability of pre-trained speech models to specific speech recognition tasks. We establish baselines for layer-wise and model-wise evaluations. The proposed framework could serve as a first attempt to evaluate pre-trained speech models, with the potential to enhance the development of model tuning in future studies. **Our contributions include:** * We connect model transferability estimation to speech tasks by leveraging a simplified hypothesis of temporal independence to relax the posterior nature of speech recognition. * We advance two different perspectives on model transferability for speech processing: (i) optimal transport and (ii) evidence maximization, to provide interpretable scores for assessing PSMs. * By conducting initial attempts in cross-layer and cross-model transfer learning setups, we show that our framework achieves a high rank correlation and low \(p\)-value. ## 2 Transferability for Speech Models This study proposes two perspectives on evaluating model transferability in speech processing. We explore the use of optimal transport and evidence maximization as methods for generating interpretable scores to assess pre-trained speech models (PSMs). To enable the evaluation of model transferability, we introduce a simplified hypothesis of temporal independence in Section 2.1, which relaxes the posterior nature of speech processing and enables evaluating model transferability in sequential speech data.
In Section 2.2, we incorporate TIH into the optimal transport sliced Wasserstein distance, which is utilized for measuring the distance between source and target data distributions. In Section 2.3, we apply TIH in the likelihood aspect, LogME, a technique that models the relationship between extracted features of a speech signal and the output labels. ### Modeling Temporal Independent Hypothesis (TIH) To estimate the transferability for speech processing tasks, we estimate the correlation between the label sequence and the features extracted from the input sequence. Since in speech processing tasks the length of the input sequence and the label are not aligned, a temporal independent hypothesis was introduced in the connectionist temporal classification (CTC) mechanism [19] for automatic speech recognition (ASR) modeling. It is based on a simplified condition that ignores posterior information during the loss computation. This simplified condition assumes that a speech model can learn sequential information through its neural network encoders, such as attention or recurrent networks. Let \(f_{i}\in\mathbb{R}^{D}\) be the \(i\)-th feature extracted with \(D\) dimensions by a pre-trained model \(\phi\) for the input \(x\), and let \(\mathbf{y}_{i}\in\mathbb{Z}^{+}\) be the scalar label. The collection of all features is represented by a matrix \(F\in\mathbb{R}^{D\times n}\), and the collection of all labels is represented by the vector \(\mathbf{y}\in\mathbb{Z}^{n}\): \[p(\mathbf{y}|F)=\sum_{A\in A_{F}}\prod_{t=1}^{T}p_{t}(a_{t}|F), \tag{1}\] where \(A\) is defined as the set of all valid alignments between the input \(\mathbf{x}\) and the output \(\mathbf{y}\), and \(P(\mathbf{a}\mid\mathbf{x})\) is the probability of a specific alignment \(\mathbf{a}\) given the input \(\mathbf{x}\). The probability density \(p(\mathbf{y}|F)\) measures the compatibility between the feature matrix \(F\) and the label sequence \(\mathbf{y}\). It is calculated by summing over all possible alignments \(A_{F}\) between the input and output sequences, using Equation 1. This equation multiplies the probability of the alignment at each time step \(t\), given the feature matrix \(F\) and the label sequence \(\mathbf{y}\). **Forced Alignment:** In the case when the input sequence and the output label are not aligned, we estimate the transferability of pre-trained models using the CTC forced alignment algorithm by aligning the model's output with the reference transcriptions of the target task. Our forced alignment process includes backtracking, which enables the correction of misaligned frames by considering previous frames and making necessary adjustments to the current alignment. ### Optimal Transport by Sliced Wasserstein Distance In speech processing tasks, the output label and input sequence may not align, so directly calculating the median distance of the data distribution is infeasible. Here, we introduce optimal transport with TIH. Building upon Eq. (1), we conduct latent space measurement by sampling a random batch of source (\(\mathbf{x}_{t}^{\text{src}}\)) and target input (\(\mathbf{x}_{t}^{\text{trg}}\)) for time step \(t\). We model character-level prediction using the TIH and the latent distributions \(\mathbf{\mu}_{t}^{\text{src}}\) and \(\mathbf{\mu}_{t}^{\text{trg}}\), which represent the zero-shot latent representations.
However, this TIH-based SWD score **cannot** be used directly for evaluating SSL tasks, as the unsupervised pre-training data and the target tasks come from the same domain (e.g., Librispeech [34]). In the next section, we investigate a more general framework for estimating the transferability of PSMs. ### Transferability Estimation by Likelihood We illustrate a connection between the aforementioned TIH and likelihood-based estimation in LogME [18] by computing the probability of the target labels conditioned on the source features. LogME measures the suitability of the encoded features for predicting the labels via a probability density, which is estimated by mapping \(F\) to \(\mathbf{y}\) using a linear transformation parameterized by \(w\). The goal is to find the optimal weight \(w^{*}\): \[p(\mathbf{y}|F)\to p(\mathbf{y}|F,w^{*})\rightarrow\int p(w)p(\mathbf{y}|F,w)dw, \tag{2}\] where Eq. (2) denotes a proxy for feature suitability. Next, the probability densities of \(w\) and \(p(\mathbf{y}|F,w)\) are modeled using positive hyper-parameters \(\alpha\in\mathbb{R}^{+}\) and \(\beta\in\mathbb{R}^{+}\). The prior distribution of the weights is modeled as an isotropic multivariate Gaussian, \(w\sim\mathcal{N}(0,\alpha^{-1}I)\), and each observation's distribution is modeled as a one-dimensional normal distribution, \(p(\mathbf{y}_{i}|f_{i},w,\beta)=\mathcal{N}(\mathbf{y}_{i}|w^{T}f_{i},\beta^{-1})\), which is computed by: \[p(\mathbf{y}|F,\alpha,\beta)=\int p(w|\alpha)p(\mathbf{y}|F,w,\beta)dw \tag{3}\] The logarithm of the maximum evidence, \(\mathcal{L}(\alpha^{*},\beta^{*})=\log p(\mathbf{y}|F,\alpha^{*},\beta^{*})\), is used to evaluate the compatibility between features and labels, and \(\mathcal{L}(\alpha^{*},\beta^{*})/n\) is used as our LogME evaluation metric. Based on Equations (1) and (3), we propose a new formulation of LogME for speech processing tasks. The model is parameterized by hyper-parameters \(\alpha\) and \(\beta\) and is given by the following equation: \[p(\mathbf{y}|F,\alpha,\beta)\rightarrow\sum_{A\in A_{F}}\prod_{t=1}^{T}\int p_{t} (w_{t}|\alpha)p(a_{t}|F,w_{t},\beta)dw_{t}. \tag{4}\] We compute the log-likelihoods of the labels in the aligned sequence and sum them to obtain the LogME score for the speech model. It is important to note that this revised LogME measurement can be used to evaluate both supervised speech models trained with labeled data and SSL models, without the need for source data.
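To make the evidence-maximization score concrete, the following sketch computes the per-sample log-evidence of Eq. (3) for a single scalar target using the usual fixed-point updates for \(\alpha\) and \(\beta\); it is a simplified illustration under stated assumptions (frame-level targets already produced by forced alignment, features stored row-wise with \(n\geq D\)), not the exact implementation used in our experiments:

```python
import numpy as np

def logme_score(F, y, n_iter=100, tol=1e-6):
    """Per-sample log-evidence log p(y | F, alpha*, beta*) / n for one scalar
    target, assuming w ~ N(0, alpha^-1 I) and Gaussian noise with precision beta.

    F: (n, D) frame-level features (n >= D assumed); y: (n,) targets, e.g.,
    frame labels obtained from the CTC forced alignment.
    """
    n, D = F.shape
    U, s, _ = np.linalg.svd(F, full_matrices=False)  # F = U diag(s) V^T
    sigma = s ** 2                                   # eigenvalues of F^T F
    u = U.T @ y                                      # y projected onto col(F)
    r2 = max(float(y @ y - u @ u), 0.0)              # energy outside col(F)

    alpha, beta = 1.0, 1.0
    for _ in range(n_iter):
        gamma = float(np.sum(beta * sigma / (alpha + beta * sigma)))
        m2 = float(np.sum(beta**2 * sigma * u**2 / (alpha + beta * sigma)**2))
        res2 = float(np.sum(alpha**2 * u**2 / (alpha + beta * sigma)**2) + r2)
        alpha_new = gamma / (m2 + 1e-12)
        beta_new = (n - gamma) / (res2 + 1e-12)
        done = (abs(alpha_new - alpha) / alpha < tol and
                abs(beta_new - beta) / beta < tol)
        alpha, beta = alpha_new, beta_new
        if done:
            break

    # Recompute the evidence terms with the converged hyper-parameters.
    m2 = float(np.sum(beta**2 * sigma * u**2 / (alpha + beta * sigma)**2))
    res2 = float(np.sum(alpha**2 * u**2 / (alpha + beta * sigma)**2) + r2)
    evidence = (0.5 * D * np.log(alpha) + 0.5 * n * np.log(beta)
                - 0.5 * n * np.log(2 * np.pi)
                - 0.5 * beta * res2 - 0.5 * alpha * m2
                - 0.5 * float(np.sum(np.log(alpha + beta * sigma))))
    return evidence / n
```

For multi-class labels, one option is to binarize the aligned frame labels in a one-vs-rest fashion and average the resulting scores, yielding a single number per layer or model.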
## 3 Experiments In our speech task experiments, we conduct layer-wise exploration to estimate the transferability of pre-trained models. ### Experimental setup **Continuous ASR**: Our experimental setup follows the same training procedure as reported in [35], using the official test data. We conduct ASR adaptation experiments using a pre-trained English-only Conformer model [36]. We evaluate the average word error rate (WER) on the Multilingual Librispeech (MLS) benchmark, which includes seven languages [35, 1]. The supervised Conformer-ASR model was trained with a batch size of 1024 on 64 TPUs, following the same setup as in [35]. **Phoneme and Isolated Word Recognition**: We evaluated the performance of our models on the LibriSpeech dataset using the train-clean-100, dev-clean, and test-clean subsets. The evaluation metric was phone error rate (PER), reported in Table 2. For isolated word recognition, we used the Speech Commands dataset v1.0 [37], which includes ten keyword classes, silence, and an unknown class. The evaluation metric for this task was accuracy (ACC), reported in Tables 3 and 4. In the SSL setup, the raw waveform was used as input, and a linear head was added to the upstream SSL model for the phoneme recognition and speech command tasks. The SSL models were trained on a single RTX3090 GPU using the experimental settings from the SUPERB benchmark [14]. **A tSNE-based Baseline Method:** tSNE-based clustering [38] has been commonly used for model interpretability. As an extra baseline for transferability estimation, we calculate the median distance between the tSNE cluster centers of the unpaired source (\(\mathbf{x}_{t}^{\text{src}}\)) and target (\(\mathbf{x}_{t}^{\text{trg}}\)) input distributions. This tSNE score is analogous to the SWD score, in that a large distance indicates greater difficulty in aligning the two representations. Figure 2: Illustration of the two approaches for estimating transferability in speech processing tasks: optimal transport (Section 2.2) and maximum evidence (Section 2.3). The transferability metric in 2.2 is SWD [17], while in 2.3, we use LogME (\(\log p(\mathbf{y}|F)/n\)), where \(p(\mathbf{y}|F)=\int p(w)p(\mathbf{y}|F,w)dw\), to assess transferability. ### Study 1: Layer-Wise Exploration Transferability estimation can aid in determining the optimal layer for fine-tuning pre-trained models. In Study 1, we performed a layer-wise analysis of fine-tuning a pre-trained English-only Conformer for MLS cross-lingual adaptation [39], as detailed in Table 1. Our results suggest that fine-tuning the top layers achieves better performance. Using LogME, we evaluated the performance of fine-tuning each layer, obtaining a correlation of 0.87 with a \(p\)-value of \(6\times 10^{-6}\). Additionally, SWD accurately estimated the ranking, with a correlation of 0.81 and a \(p\)-value of \(7\times 10^{-5}\). However, direct computation of distances with tSNE was found to be less precise. Our experimental results demonstrate that layer-wise analysis using the LogME score is a more accurate method for estimating transferability in speech processing tasks, which provides insights into selecting the optimal layer for fine-tuning in cross-lingual ASR transfer. For SSL models, tSNE and SWD cannot be used to estimate transferability, as they rely on source data, which is not available in this task. To address this issue, we performed a layer-wise analysis of a HuBERT base model [8] for phoneme recognition. Our results, presented in Table 2, indicate that the LogME score is a precise method for evaluating transferability in speech processing tasks. Our findings also suggest that fine-tuning the top layers does not necessarily result in improved performance, which differs from the results of the RNN-T model presented in Table 1. Therefore, estimating transferability for individual layers is essential, as it can save time and computational costs and eliminate the need for expert knowledge in selecting the optimal layers for fine-tuning.
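To make the layer-wise evaluation protocol concrete, the short sketch below ranks layers by a transferability score and measures its agreement with the ranking induced by the fine-tuned WER using Spearman's rank correlation, as reported in Table 1; the per-layer numbers here are hypothetical placeholders, not our measurements:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_agreement(scores, wers, higher_is_better=True):
    """Spearman correlation between a transferability-score ranking and the
    ranking given by fine-tuned WER (lower WER = better layer)."""
    scores = np.asarray(scores, dtype=float)
    wers = np.asarray(wers, dtype=float)
    goodness = scores if higher_is_better else -scores
    # Positive rho means the score orders layers the same way as (low) WER.
    rho, p_value = spearmanr(goodness, -wers)
    return rho, p_value

# Hypothetical per-layer values (one entry per encoder layer).
logme_per_layer = [-1.92, -1.85, -1.70, -1.52, -1.41]   # higher is better
wer_per_layer = [62.6, 53.5, 37.0, 22.6, 18.3]
rho, p = rank_agreement(logme_per_layer, wer_per_layer)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
```

For distance-like scores such as SWD or the tSNE baseline, `higher_is_better=False` should be passed so that smaller distances count as better transferability.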
### Study 2: Model-Wise Exploration #### 3.3.1 Pre-trained speech models tuned on classification For model-wise exploration, we conducted experiments to explore the transferability of pre-trained models tuned on classification tasks. To this end, we evaluated six different pre-trained models and computed their transferability scores using various evaluation metrics. As presented in Table 3, all of the evaluation metrics produced accurate estimates of the transferability ranking. Our experimental results thus demonstrate that the LogME score can effectively capture the transferability of a pre-trained model in speech classification tasks. #### 3.3.2 Different pre-training data on the same neural architecture We evaluated the transferability metrics on the AST model [40] under three pre-training configurations - random initialization (frozen backbone with linear probing), ImageNet [41], and AudioSet [42] - to examine the effectiveness of transferability estimation across different pre-training data. Only the last MLP layer was fine-tuned during the experiments. The results in Table 4 indicate that the LogME score effectively estimates the transferability of pre-trained models in speech classification tasks. The model pre-trained on AudioSet has the best transferability, whereas the model with random initialization showed the worst transferability. These findings underscore the significance of selecting appropriate pre-training data to enhance model transferability. ## 4 Conclusion In this study, we explored the transferability of pre-trained upstream models for speech recognition tasks. Our results showed that the transferability of PSMs to speech processing tasks can be estimated reliably. Moreover, our model-wise analysis showed that different pre-trained upstream models have varying adaptability to different downstream tasks. We believe our findings can provide insights into improving the efficiency and effectiveness of transfer learning in speech recognition. \begin{table} \begin{tabular}{c|c|c c c c} \hline \hline RNN-T Layer & WER (\(\downarrow\)) & Rank\({}_{\text{FT}}\) & Rank\({}_{\text{tSNE}}\) & Rank\({}_{\text{LogME}}\) & Rank\({}_{\text{SWD}}\) \\ \hline Conf-01 & 62.63 & 17 & 17 & 17 & 17 \\ Conf-02 & 53.49 & 15 & 16 & 10 & 11 \\ Conf-03 & 53.61 & 16 & 15 & 15 & 10 \\ Conf-04 & 47.75 & 14 & 13 & 13 & 14 \\ Conf-05 & 37.02 & 11 & 14 & 16 & 13 \\ Conf-06 & 48.71 & 13 & 12 & 12 & 16 \\ Conf-07 & 42.13 & 12 & 5 & 14 & 15 \\ Conf-08 & 32.32 & 10 & 3 & 6 & 8 \\ Conf-09 & 21.74 & 7 & 1 & 7 & 9 \\ Conf-10 & 22.56 & 8 & 10 & 9 & 4 \\ Conf-11 & 19.86 & 4 & 4 & 5 & 6 \\ Conf-12 & 21.71 & 6 & 8 & 11 & 12 \\ Conf-13 & 25.56 & 9 & 7 & 8 & 7 \\ Conf-14 & 19.23 & 3 & 9 & 4 & 5 \\ Conf-15 & 20.09 & 5 & 11 & 3 & 2 \\ Conf-16 & 18.87 & 2 & 6 & 2 & 3 \\ Conf-17 & 18.27 & 1 & 2 & 1 & 1 \\ \hline Spearman's rank correlation coefficient (\(\uparrow\)) & & & 0.69 & 0.87 & 0.81 \\ \(p\)-value (\(\downarrow\)) & & & \(1\times 10^{-3}\) & \(6\times 10^{-6}\) & \(7\times 10^{-5}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Fine-tuning a pre-trained English-only Conformer for Multilingual Librispeech (MLS) [39] cross-lingual adaptation; WER is reported as the average over seven MLS languages, following the setup in [35].
Table 2: HuBERT-based SSL PSM layer-wise fine-tuning results. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline Models & Para. & Acc. (\(\uparrow\)) & Rank\({}_{\text{FT}}\) & Rank\({}_{\text{tSNE}}\) & Rank\({}_{\text{LogME}}\) & Rank\({}_{\text{SWD}}\) \\ \hline \({}^{\dagger}\)HuBERT [8] & 95M & 95.94 & 2 & 2 & 1 & 2 \\ & 95M & 92.27 & 5 & 5 & 5 & 5 \\ \({}^{\dagger}\)DecoAR2.0 [32] & 90M & 92.63 & 4 & 4 & 4 & 4 \\ \({}^{\dagger}\)Vggish [43] & 72M & 96.78 & 1 & 1 & 2 & 1 \\ Yamnet [41] & 4M & 94.32 & 3 & 3 & 3 & 3 \\ fsCNN [45] & 20M & 91.34 & 6 & 6 & 6 & 6 \\ \hline Time\({}^{\dagger}\) & & & \(\sim 0.61\) day & 24.09 s & 28.01 s & 10.86 s \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation on GSC-v1 [37] on different PSMs.
2310.13491
HTS Dynamo Flux Pump: The Impact of a Ferromagnetic Stator Substrate
HTS dynamo magnetic flux pumps are promising devices for the contactless charging of superconducting magnets and coils. In this work, we investigate the influence of a ferromagnetic substrate of a coated conductor used as the pump stator. We use the thin shell model of a coated conductor with a ferromagnetic substrate and show that such a conductor increases the pump-generated voltage if the superconducting layer is between the rotor and the substrate. A Chebyshev spectral method is employed for the numerical solution. Using simulation results for problems with a given transport current, we also derive a simple analytical description for feeding a current to a coil.
Vladimir Sokolovsky, Leonid Prigozhin
2023-10-20T13:33:16Z
http://arxiv.org/abs/2310.13491v1
# HTS Dynamo Flux Pump: The Impact of a Ferromagnetic Stator Substrate ###### Abstract HTS dynamo magnetic flux pumps are promising devices for the contactless charging of superconducting magnets and coils. In this work we investigate the influence of a ferromagnetic substrate of a coated conductor used as the pump stator. We use the thin shell model of a coated conductor with a ferromagnetic substrate and show that such a conductor increases the pump-generated voltage if the superconducting layer is between the rotor and the substrate. A Chebyshev spectral method is employed for the numerical solution. Using simulation results for problems with a given transport current, we also derive a simple analytical description for feeding a current to a coil. HTS dynamo pump, coated conductor, ferromagnetic substrate, thin shell model, Chebyshev spectral method ## I Introduction Electromagnetic induction and the nonlinear resistivity of type-II superconductors enable HTS magnetic flux pumps to inject a high DC current into a closed-loop superconducting coil of a magnet and also to continuously compensate the decay of this current [1]. Wireless magnet excitation eliminates excessive cryogenic losses, and this is the reason for much recent interest in HTS pumps; see the reviews [2-5]. Dynamo-type pumps, first proposed in [6], have a simple structure: one or several permanent magnets are mounted on a rotating disk and, passing close to a superconducting tape (the stator), induce there a traveling magnetic field wave that generates an output voltage with a nonzero average. HTS dynamos have been intensively investigated experimentally; numerical simulations (see the comprehensive recent review [7]) helped to understand the physical mechanism of voltage generation and the impact of various geometrical factors and of the field-dependent current-voltage relation for the superconductor on the dynamo pump efficiency. Although losses in coated conductors with magnetic substrates were studied experimentally and numerically in a number of works (see [8-10] and the references therein), our interest is the pump-generated DC voltage. To the best of our knowledge, this issue has not been investigated yet. Here we investigate the influence of a magnetic substrate of the coated conductor employed as a dynamo pump stator on the pump characteristics. Incorporating the recently developed thin shell model of a coated conductor with a ferromagnetic substrate [9] into the pump model ([11], see also [12]), we obtain a system of one-dimensional integro-differential equations. We solve this system using the fast and accurate spectral numerical method of [9] and show first that ferromagnetic substrates can increase the generated open-circuit DC voltage. Then, assuming a simplified lumped model of a coil, we simulate its charging during several thousands of rotor revolutions and demonstrate that such substrates accelerate this process. Finally, we show that this time-consuming simulation can be replaced by a simple analytical description of the charging process based on solutions to a few given transport current problems. Our aim is a qualitative characterization of the magnetic substrate impact and, in our model, we employ the simplest constitutive relations: a power current-voltage relation with a constant critical current density for the superconductor and a constant magnetic susceptibility for the substrate. We also neglect currents in all layers of the coated conductor except the superconducting one.
Numerical simulations were performed in Matlab R2020b on a PC with an Intel(r) Core(tm) i7-9700 CPU at 3.00 GHz. ## II The Model The HTS dynamo model [1, 11, 12] is developed for a rotating long permanent magnet passing close to a stationary long thin coated conductor strip (fig. 1). Fig. 1: A scheme of an HTS dynamo: the geometry of the problem. In this work we assume the same dynamo configuration and parameters as in [11; 12] (see table 1) with one exception: now the coated conductor has a ferromagnetic substrate with magnetic susceptibility \(\mathbf{\chi}\) and thickness \(\mathbf{\delta}\); its width \(2a\) is equal to that of the HTS layer. We consider two possible orientations of the stator strip: the substrate is further from the rotor than the superconducting layer, and vice versa. Following [9], we consider only two layers of the coated conductor: the thin HTS layer and its substrate. Electrical current is allowed only in the HTS layer, for which we use the infinitely thin approximation and assume that the sheet current density is directed along the strip and is the same in all strip cross-sections. The power current-voltage relation characterizing the HTS layer is \[\mathbf{e}=\mathbf{e}_{0}\left|\frac{j}{j_{c}}\right|^{n-1}\frac{j}{j_{c}}.\] Here \(j(t,x)\) is the sheet current density parallel to the \(z\)-axis, \(\mathbf{e}(t,x)\) is the electric field, \(\mathbf{e}_{0}=10^{-4}\) V/m, and \(j_{c}=I_{c}/2a\) is the field-independent sheet critical current density. If the substrate material is ferromagnetic, e.g., a Ni-W alloy with a magnetic susceptibility \(\mathbf{\chi}\) of the order of tens or more, its magnetization influences the current distribution in the superconducting layer and should be taken into account. The thickness of a substrate layer, 30-100 \(\mu\)m, is small compared to its width, 4-12 mm. Hence, we can use the thin shell magnetization theory developed by Krasnov [13-15] in terms of a "surface magnetization", \(\mathbf{\sigma}(t,x)\), which is attributed to the mid-surface of the substrate and can be regarded as a scalar in the case of a long strip in a perpendicular field. As in [9], we use scaled dimensionless variables \[\tilde{j}=\frac{j}{j_{c}},\quad\tilde{\sigma}=\frac{\sigma}{aj_{c}},\quad\tilde{\mathbf{h}}=\frac{\mathbf{h}}{j_{c}},\quad\tilde{e}=\frac{e}{e_{0}},\] \[(\tilde{x},\tilde{y})=\frac{(x,y)}{a},\quad\tilde{t}=\frac{t}{t_{0}},\quad\tilde{I}=\frac{I}{I_{c}},\quad\tilde{V}=\frac{V}{ae_{0}},\] where \(t_{0}=a\,\mu_{0}\,j_{c}/e_{0}\), \(\mu_{0}\) is the magnetic permeability of vacuum, \(I\) is the transport current, and \(V\) is the voltage. Omitting the sign "\(\sim\)" to simplify our notation, we present the coated conductor model in dimensionless form (see [9]), \[\begin{cases}\mathbf{\kappa}^{-1}\mathbf{\sigma}(t,x)+\partial_{x}\Bigg{(}\frac{1}{2\mathbf{\pi}}\int\limits_{-1}^{1}\frac{\mathbf{\sigma}(t,x^{\prime})}{x-x^{\prime}}\,\mathrm{d}x^{\prime}\Bigg{)}+\frac{s}{2}\,j(t,x)=h_{x}^{\mathrm{e}},\\ \partial_{t}\Bigg{(}h_{y}^{\mathrm{e}}+\frac{1}{2\mathbf{\pi}}\int\limits_{-1}^{1}\frac{j(t,x^{\prime})}{x-x^{\prime}}\,\mathrm{d}x^{\prime}+\frac{s}{2}\frac{\partial\mathbf{\sigma}(t,x)}{\partial x}\Bigg{)}=\partial_{x}e(t,x),\\ \int\limits_{-1}^{1}j(t,x)\,\mathrm{d}x=2I(t),\qquad e=\left|j\right|^{n-1}j,\end{cases}\tag{1}\] The one-dimensional integro-differential model of an HTS dynamo, (1)-(2), is simplified in many aspects.
The main simplification is the long permanent magnet assumption, which makes it possible to consider only a cross-section of the stator but requires the notion of an effective (active) length. This assumption, often employed in dynamo models with non-magnetic substrates (e.g., [1, 11, 12]), enables very fast numerical simulations but provides no realistic description of the closed current loops in the HTS layer, as do more accurate full-dimensional models [18-20]. Comparison of numerical solutions obtained using the two types of models (see [19], fig. 7) showed, however, that the simplified model still produces a good approximation to the open-circuit voltage curve if the strip width does not exceed the permanent magnet length. The thin shell magnetization model we employed for the substrate also makes the numerical solution much easier. This simplification is, however, well justified [13-15]. For the numerical solution of equations (1) we employed the method presented in [9], based on the method of lines for integration in time and a Chebyshev spectral approximation for discretization in space. The values of the unknowns \(\boldsymbol{j}\) and \(\boldsymbol{\sigma}\) are found at the \(N+1\) nodes of the Chebyshev mesh \(x_{k}=-\cos(\boldsymbol{\pi}k/N)\), \(k=0,...,N\). This is an extension of the spectral method [12], developed there for HTS dynamos with a non-magnetic stator substrate and shown to be more efficient than all numerical methods considered in [11]. ## III Open Circuit Voltage To model the open-circuit conditions, we set \(I(t)=0\) and compare voltages computed for different ferromagnetic substrates characterized by their values of the parameter \(\boldsymbol{\kappa}\). To avoid transient effects, we present our simulation results for the second rotor rotation cycle. The computations were performed using the Chebyshev mesh with \(N=200\), and the computation time was about 20 seconds per cycle. For \(N=100\) this time was less than 5 seconds and the difference in computed DC voltages was less than 1%. First, let the HTS layer be on the rotor side of the strip, so in the model (1) we set \(s=1\). For small \(\boldsymbol{\kappa}\) the influence of the magnetic substrate is negligible: the voltage curve and the DC voltage are similar to those for a non-magnetic substrate. As this parameter increases, the voltage curve changes and the two negative peaks become deeper (fig. 2). The DC voltage changes from -10.1 to -16 \(\mu\)V (fig. 3, top) and saturates: the results for \(\boldsymbol{\kappa}=10\) and \(\boldsymbol{\kappa}=100\) are close. We conclude that, if the HTS layer is between the rotor and the substrate, the magnetic substrate can significantly increase the DC voltage (its absolute value; the sign is unimportant). In the case of the opposite orientation (the magnetic substrate is between the rotor and the HTS layer, \(s=-1\)) the substrate shields the magnetic field and the DC voltage decreases; it becomes negligible for large \(\boldsymbol{\kappa}\) (fig. 3, bottom). A similar influence of a ferromagnetic slice inserted before or behind the coated conductor dynamo stator has been observed experimentally and simulated numerically in [21]. Figure 3: Influence of the magnetic substrate on the generated DC voltage \(<V>\). Top: the HTS layer is between the rotor and the substrate. Bottom: the substrate is between the rotor and the HTS layer. Figure 2: The voltage \(V_{r}\) during the second rotation cycle: the influence of the ferromagnetic substrate.
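For readers unfamiliar with such schemes, the snippet below builds the Chebyshev mesh \(x_{k}=-\cos(\pi k/N)\) mentioned above and a standard first-order spectral differentiation matrix of the kind typically used for the spatial discretization; this is a generic sketch of the collocation ingredients, not the authors' Matlab code:

```python
import numpy as np

def chebyshev_nodes_and_diff(N):
    """Chebyshev-Gauss-Lobatto nodes x_k = -cos(pi*k/N), k = 0..N, on [-1, 1],
    together with the standard first-order spectral differentiation matrix."""
    k = np.arange(N + 1)
    x = -np.cos(np.pi * k / N)                   # nodes, ordered from -1 to 1
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0                           # endpoint weights
    c *= (-1.0) ** k
    X = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (X + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                  # "negative sum" trick: rows sum to 0
    return x, D

# Example: differentiate f(x) = x^3 on the N = 200 mesh used in the paper.
x, D = chebyshev_nodes_and_diff(200)
err = np.max(np.abs(D @ x**3 - 3 * x**2))        # error at the level of rounding
```

The singular integrals in (1) can be treated with analogous precomputed spectral operators; see [9, 12] for the method actually employed.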
## IV Charging a superconducting coil We now assume the superconducting layer is between the rotor and the substrate (the DC voltage is increased due to the ferromagnetic substrate) and consider first the problem with a given transport current. Let the current be changed linearly from zero to a prescribed value \(I\) during the first cycle and then remain constant. The DC voltage was now computed for the third cycle. Our numerical simulations showed that for each \(\kappa\) the voltage \(<\!V\!>\) depends almost linearly on the transport current \(I\) (fig. 4). Denoting by \(V_{0}(\kappa)\) the open-circuit DC voltage, we approximate this dependence as \[<\!V\!>=\!V_{0}(\kappa)-R_{\rm eff}(\kappa)I\,, \tag{3}\] where \(R_{\rm eff}\) is a constant playing the role of an effective stator resistance. The voltage becomes zero at almost the same transport current, \(I_{0}(\kappa)\), for all \(\kappa\). We find \(R_{\rm eff}(\kappa)=V_{0}(\kappa)/I_{0}(\kappa)\) (table II). To simulate charging a superconducting magnet by an HTS dynamo we consider, as in [22] for the case of a nonmagnetic substrate, the dynamo in a closed circuit with a simplified lumped load having resistance \(R\) and inductance \(L\). We use the same values of these parameters as in [22]: \(R=0.88\,\mu\Omega\), \(L=0.24\,{\rm mH}\). The circuit equation \[V_{0}(\kappa)-R_{\rm eff}(\kappa)I=RI+L\,dI/dt\] yields \[I=I_{\rm sat}\left[1-\exp(-t\,/\,\tau)\right], \tag{4}\] where the saturation current and the characteristic charging time are, respectively, \[I_{\rm sat}=V_{0}\,/\,(R_{\rm eff}+R),\quad\tau=L/\,(R_{\rm eff}+R)\,.\] These parameters depend on \(\kappa\); see table II. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \(\kappa\) & 0.01 & 0.3 & 1 & 10 \\ \hline \(V_{0}\) (\(\mu\)V) & -10.3 & -12.5 & -14.2 & -15.7 \\ \hline \(I_{0}\) (A) & -31.1 & -34.0 & -34.0 & -31.1 \\ \hline \(R_{\rm eff}\) (\(\mu\Omega\)) & 0.330 & 0.369 & 0.418 & 0.503 \\ \hline \(I_{\rm sat}\) (A) & 8.5 & 10 & 10.9 & 11.4 \\ \hline \(\tau\left(s\right)\) & 198 & 192 & 185 & 174 \\ \hline \end{tabular} \end{table} Table II: Dependence of the main pump characteristics on the ferromagnetic substrate. Figure 4: Pump-generated DC voltage as a function of the transport current for different ferromagnetic substrates. Figure 5: Numerical simulation of load charging by dynamos with different stator substrates. Solid lines – solutions to the model (1),(5). Black stars indicate the corresponding analytical solutions (4). The inset illustrates the current ripples accompanying the process but not visible in the current curves for 3000 cycles. Solution (4) completely ignores the oscillations of the pump voltage during each cycle. On the other hand, without using the linear approximation (3), we can supplement our model (1) by the differential equation \[RI+L\,dI/dt=V_{r}(t)\,, \tag{5}\] in which the right-hand side takes into account only the part of the pump voltage ripples related to the resistance of the stator but, in any case, has a cycle-averaged value equal to that of the total voltage. It is easy to incorporate the evolutionary equation (5) into our numerical scheme, which uses the method of lines for integration in time. The employed numerical method is efficient, and we were able to model several thousands of cycles and compare the analytical solution (4) with the numerical solution of the system (1),(5); see fig. 5. The two solutions practically coincide, which suggests that the ripples can be ignored when modeling the charging current. The presence of ripples can, however, increase the AC loss.
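To make the lumped description concrete, the short sketch below reproduces the effective parameters and the charging curve of Eqs. (3)-(4) from the values in Table II, taking the \(\kappa=10\) case as an example (a simple illustration, not the simulation code):

```python
import numpy as np

# Lumped-load parameters from the text and Table II (kappa = 10 column).
R = 0.88e-6     # load resistance, Ohm
L = 0.24e-3     # load inductance, H
V0 = -15.7e-6   # open-circuit DC voltage, V
I0 = -31.1      # transport current at which <V> vanishes, A

R_eff = V0 / I0                 # effective stator resistance, from Eq. (3)
I_sat = V0 / (R_eff + R)        # saturation current
tau = L / (R_eff + R)           # characteristic charging time, s

t = np.linspace(0.0, 5 * tau, 500)
I_t = I_sat * (1.0 - np.exp(-t / tau))   # analytical charging curve, Eq. (4)

# Magnitudes agree with Table II up to rounding (about 0.50 uOhm, 11 A, 174 s);
# the sign of I_sat follows the sign convention used for the voltage.
print(f"R_eff = {R_eff*1e6:.3f} uOhm, |I_sat| = {abs(I_sat):.1f} A, tau = {tau:.0f} s")
```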
## V Conclusion In this work, the benchmark HTS dynamo pump problem [11] was extended to the case of a pump stator made of a coated conductor with a ferromagnetic substrate. Such a substrate changes the superconducting current density distribution. To our knowledge, the impact of a magnetic substrate on the dynamo pump performance has not been studied yet. In our work [9], using the thin shell magnetization theory [13-15], we presented a new model of a coated conductor with a ferromagnetic substrate and an efficient Chebyshev spectral numerical method. The model makes use of the high width-to-thickness ratio of the substrate and superconducting layers and is much simpler than the previously proposed two-dimensional models (for which the high aspect ratio presents a difficulty). In this work we applied this approach to modeling HTS dynamo pumps and showed that a magnetic stator substrate can significantly increase the pump-generated voltage and accelerate the contactless charging of a coil if the superconducting layer is oriented towards the rotating permanent magnet. For the opposite stator orientation, the magnetic field is shielded by the substrate and the pump-generated voltage decreases. For a given transport current, determining the pump-generated voltage using our model and numerical scheme takes less than a minute on a PC. The results of such simulations can also be used to determine the effective lumped model parameters of the pump and to replace the simulation of charging a coil over thousands of rotor rotations with a simple analytical formula.
2306.01773
Voluminous yet Vacuous? Semantic Capital in an Age of Large Language Models
Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text. However, despite their potential for content creation, they carry the risk of eroding our Semantic Capital (SC) - the collective knowledge within our digital ecosystem - thereby posing diverse social epistemic challenges. This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise. The study's contribution is two-fold: first, it is acknowledged that, notwithstanding the challenges of tracking and controlling LLM impacts, it is necessary to reconsider our interaction with these AI technologies and the narratives that form public perception of them. It is argued that before achieving this goal, it is essential to confront a potential deontological tipping point in an increasingly AI-driven infosphere. This goes beyond just adhering to AI ethical norms or regulations and requires understanding the spectrum of social epistemic risks LLMs might bring to our collective SC. Secondly, building on Luciano Floridi's taxonomy for SC risks, those are mapped within the functionality and constraints of LLMs. By this outlook, we aim to protect and enrich our SC while fostering a collaborative environment between humans and AI that augments human intelligence rather than replacing it.
Luca Nannini
2023-05-29T09:26:28Z
http://arxiv.org/abs/2306.01773v1
# Voluminous yet Vacuous? ###### Abstract Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text. However, despite their potential for content creation, they carry the risk of eroding our Semantic Capital (SC) - the collective knowledge within our digital ecosystem - thereby posing diverse social epistemic challenges. This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise. The study's contribution is two-fold: first, it is acknowledged that, notwithstanding the challenges of tracking and controlling LLM impacts, it is necessary to reconsider our interaction with these AI technologies and the narratives that form public perception of them. It is argued that before achieving this goal, it is essential to confront a potential deontological tipping point in an increasingly AI-driven infosphere. This goes beyond just adhering to AI ethical norms or regulations and requires understanding the spectrum of social epistemic risks LLMs might bring to our collective SC. Secondly, building on Luciano Floridi's taxonomy for SC risks, those are mapped within the functionality and constraints of LLMs. By this outlook, we aim to protect and enrich our SC while fostering a collaborative environment between humans and AI that augments human intelligence rather than replacing it. ## 1 Introduction The fable of Funes the Memorious, conceived by Jorge Luis Borges, serves as a powerful metaphor for the era we live in. Funes, the character blessed--or rather, cursed--with perfect memory, found himself submerged in an ocean of unfiltered details. He was a prisoner of his own capacity, drowning in his universe of relentless particulars. The individual who once boasted the greatest memory lost his ability to discern the important from the trivial, transforming his mind into a "_garbage heap_" of excessive detail. As Borges wrote, "_To think is to forget differences, generalize, make abstractions. In the teeming world of Funes, there were only details, almost immediate in their presence_" [1]. In an echo of Funes' plight, our society now finds itself amidst a surge of information generation and consumption with uncharted challenges to our epistemic filters. This era, marked by the infinite details of our rapidly expanding infosphere, mirrors Funes' predicament. In this context, the concept of **Semantic Capital** (SC), coined by Luciano Floridi, gains paramount importance. SC encapsulates the collective information resources--knowledge, skills, or competencies--that individuals or entities possess. These resources can be harnessed to create value within our interconnected global information ecosystem [2]. This construct actively shapes the infosphere, catalyzing communication, fostering innovation, and driving informed decision-making [3]. As the realms of human cognition and artificial intelligence (AI) increasingly coalesce, their intersection is redefining the landscapes of collaboration, decision-making, and knowledge creation. In these emerging dynamics, the role of SC escalates in importance and complexity. The collaborative environments entailing human and AI integration go beyond mere task execution; they embody intricate interactions that should be grounded in mutually beneficial augmentation, not replacement or mimicry [4, 5].
Nonetheless, this surge in information, fueled in part by AI, has the potential to generate a cascade of cognitive and sociotechnical risks, e.g., cognitive overload, misinformation, social polarization, and erosion of public trust. As we argue in this paper, we are possibly approaching a deontological 'tipping point'--an inflection where our moral obligation to promote open dissemination of AI information might conflict with our duty to prevent harm. Indeed, the relentless acceleration and proliferation of AI information might soon manifest their most detrimental and repressive effects [6, 7, 8, 9]. This paper endeavors to delve into the role of SC within the sphere of human-AI interaction. We begin by defining its value in fostering societal knowledge and trust within the context of the information ethics challenges that we currently face. We pay particular attention to generative AI systems, particularly Large Language Models (LLMs). By contextualizing the debate over their capabilities and limitations, we move to address the broader range of ethical and deontological implications of LLMs. Central to this endeavor is a necessary reframing of AI narratives, with a mindful consideration of who benefits from these narratives and how they shape public perception. In an era where our infosphere is populated with increasingly accessible AI-generated content, a critical reassessment of our relationship with open-source practices is paramount. By reflecting on the value of open-source models and regulations, we endorse governance practices to ensure they align with our ethical obligations and societal values. But before achieving that, we exhort the reader to consider a deontological tipping point in our AI-driven infosphere. This approach entails moving beyond calls to adhere to AI ethical guidelines or regulations and conceiving of the range of social epistemic risks that LLMs pose to our collective SC. Following Floridi's taxonomy for SC risks, our main contribution lies in mapping them within the capabilities and limitations of LLMs. By doing so, we aim to bring a novel outlook to the LLM discussion while encouraging innovative strategies to reinforce our epistemic defenses. Our discourse seeks to guide us toward an equitable and sustainable infosphere, where innovation flourishes without compromising societal values and individual well-being. ## 2 Appraising Semantic Capital Our exploration of the crucial role of SC in human-AI collaboration begins with Luciano Floridi's philosophy of information. Floridi's seminal work, born of the metamorphosis of the information age, places information at the core of our world understanding [10]. The _infosphere_, Floridi proposes, is an immersive information environment housing all informational entities--humans, artificial agents, and other organisms [10]. In this sphere, constant streams and exchanges of information form a complex interaction network, shaping our perception of reality and directing our actions. Within this infosphere nests SC--value derived from meaningful information. It transcends mere data accumulation, presenting as well-formed, meaningful data that bolsters one's power to create meaning--to _semanticise_. An individual's, group's, or society's SC stock, demonstrated in various forms like knowledge repositories, skills, shared societal norms, cultural narratives, etc., is employed and invested in information creation, understanding, and dissemination.
This process fuels essential life aspects like communication, decision-making, learning, problem-solving, and others. SC's value is intrinsically linked to its ability to enrich our understanding, navigation, and shaping of our realities. As such, managing and curating SC is vital in our increasingly information-dense society. The risks associated with SC -- (a) loss, (b) unproductive use, (c) underuse, (d) misuse, or (e) depreciation due to truth erosion -- are defined by Floridi as "_the potential of loss of part or all of the value of some content that can no longer enhance someone's power to semanticise something_" [10; 3]. The digital technology era has brought forth new SC dimensions. Data abundance and computational power have created pathways for enhancing and expanding our SC. AI and other digital technologies facilitate SC management and curation, aiding its effective and efficient usage and enrichment. If our world understanding is based on relationships between informational entities, and not just their intrinsic properties [11], then these technologies give rise to new SC forms that significantly impact our semanticising processes and, ultimately, shape our identities and realities1. Why, then, is it essential to highlight these concepts? Their role in shaping human-AI collaboration is central. SC provides a crucial lens through which we understand, navigate, and shape the evolving landscape of human-AI interaction in the generative AI era. Footnote 1: SC can be differentiated from related concepts like ’intellectual capital’ and ’cultural capital’. While SC focuses on knowledge, skills, and resources used for communication and comprehension [3], intellectual capital pertains to an organization’s sum of knowledge and skills that provide a competitive edge [12; 13]. Cultural capital, however, refers to cultural resources like education and norms that influence individual behavior and societal opportunities [14]. ## 3 Development of LLMs and the Debate Over Language "Understanding" In the compelling narrative of natural language processing (NLP), we've borne witness to a series of remarkable advancements over the past decade, with Large Language Models (LLMs) and other AI generative systems claiming center stage [15]. Commencing with the invention of Long Short-Term Memory (LSTM) networks in 1997 [16], the journey has led us to the present-day marvels of AI, such as GPT-4 [17]. These developments have profound implications for SC, raising pressing questions about the deontology of knowledge and information resources within the infosphere. A brief historical overview of NLP highlights the rapid progress and increasing complexity of these models. From LSTM to Word2Vec [18], from Sequence-to-Sequence models [19] to the transformative attention mechanism [20], and ultimately to the groundbreaking Transformer architecture [21] and the subsequent birth of BERT [22], each evolution has refined the capacity to process and generate text, thereby influencing the constitution and use of SC. The 'philosophical' foundations of these NLP applications, especially for Word2Vec, relied on concepts of Distributional Semantics [23; 24], paired with the untapped benefits and dangers of "_Big Data_", to mirror a presumptively realistic image of textual knowledge gathered from online repositories and communities [25; 26; 27].
Against such shallow reflection, scholars raised concerns about the biases of this knowledge available online or within any other databases containing spurious, partial, or unguarded data entries [28; 29; 30; 31]. This raised challenges for these models, such as, above all, the need to avoid displaying semantically incomplete or nonfactual information2. The advent of LLMs such as OpenAI's GPTs, and their deployment in various applications, represents the contemporary zenith of this technological trajectory [33]. Nevertheless, the rapid proliferation of these models has sparked a lively debate among researchers and scholars concerning their true capabilities and implications. Footnote 2: the so-called “hallucinations” [32] in Natural Language Generation (NLG). A crucial question raised in this debate is whether LLMs genuinely understand the information they process, or whether they are mere "_stochastic parrots_," as posited by the AI researchers Emily Bender, Timnit Gebru, Angelina McMillan-Major, and (under an alias) Margaret Mitchell [15]. The paper offered a continuation of a critical inquiry into their natural language understanding previously expressed in 2020 by Bender [34]. They argued that these models, despite their seemingly human-like text generation abilities, merely mimic patterns without comprehending the underlying meaning, potentially leading to a dilution of SC by reshuffling human knowledge in a convincing manner. Foremost, their concerns were grounded in the biases embedded in the training data, the substantial environmental footprint of training such language models, and the concentration of power in a few tech giants controlling them. Echoing these concerns, Melanie Mitchell highlighted in December 2022 the limitations of LLMs in truly "_understanding_" the world and their reliance on superficial patterns in the data [35]. Yet, it needs to be recognized that LLMs are powerful tools that generate human-like narratives: their underlying architecture and scalability allow them to manipulate, and draw inferences over, representations of the external world. But such abilities are generally hard for their designers to forecast, as well as to handle and interpret. The so-called "_emergent abilities_," which become more evident as the scale of the models increases, refer to the unforeseen and unplanned behaviors that LLMs display, which often defy easy understanding or control by the developers themselves [36]. Such abilities can result in outputs that are surprisingly insightful or disturbingly off the mark, underscoring the unpredictability and potential risks of deploying LLMs in real-world contexts [37; 38; 39; 40]. This challenge intensifies when we consider the increasing number of studies being released on their application in practical scenarios to assist various human tasks [41; 42]. This means that, despite their capacity to handle a certain degree of semantic information [38] and produce coherent text, LLMs cannot be universally trusted as epistemic agents capable of handling the pragmatic constraints of human communication. The reason for this lies partly in their architectures per se, but also in a potential _Eliza effect_ [35], i.e., in how users linguistically frame their prompts based on their intentions and competencies [43]. This entails that the presumed factuality of these models' outputs must then be weighed against their stochastic nature, which is heavily influenced by their design [44; 39] and also by how users interact with them through the prompts they provide [45].
Despite growing efforts to provide additional heuristic bases for downplaying unpredictable behavior, such as _chain-of-thought_, _constitutional AI_, or _red-teaming_ [36; 46; 47], a crucial question remains concerning the reliability and factuality of LLMs: can we equate the performance of LLMs with human understanding and knowledge? Recognizing the differences, the academic community is reevaluating how to benchmark these models' performance. This calls for more critical assessment measures that better reflect the nature and capabilities of LLMs, especially in terms of interpretability and predictability [48; 49; 44; 31]. ## 4 Fear Sells Well? On Ethical and Deontological Implications of LLMs The debate over LLMs' abilities underscores the complex implications of generative AI systems for SC. Indeed, it comes as no surprise that LLMs, by their design and capabilities, can profoundly influence the infosphere landscape: they operate as powerful amplifiers and conduits of information, capable of synthesizing and generating vast amounts of text that are, in many instances, indistinguishable from human-written content. Regarding their potential benefits, LLMs can democratize access to information by breaking down barriers to user understanding, e.g., by paraphrasing, summarizing, or translating text into different languages. By making information more accessible and interpretable, these models can enhance the inclusivity and utility of SC. Secondly, LLMs can also contribute to the expansion of SC by facilitating the creation of new content. Authors can use these tools to overcome writer's block, generate creative ideas, or automate routine writing tasks. In academic and professional settings, LLMs can help to compose emails, draft reports, write code, or even create poetry and prose, thereby enriching the diversity and volume of SC. Such positive scenarios must be counterbalanced with a sober recognition of the potential costs and risks that these models pose to SC. Among others, the risks associated with LLMs extend beyond semantic information handling, touching upon socio-economic, political, and ethical domains and encompassing bias propagation, labor market disruptions, power centralization, misinformation campaigns, cyber threats, intellectual property issues, and unforeseen harmful uses [50]. At the core, LLMs can potentially spawn a proliferation of information-like content, increasingly blurring the line with factual information. This proliferation risks diluting the quality of SC, contributing to an infosphere that is voluminous yet vacuous3. Alongside these concerns, the susceptibility of LLMs to the propagation of false information, as explored by Bian et al., adds another layer of complexity to the debate [51]. Their study claimed that false information tends to spread and contaminate related memories in LLMs via a semantic diffusion process. Also, these models might be subject to authority bias, often accepting false information presented in a more trustworthy style, such as that of news or research papers. Along these lines, if LLMs are easily perturbed by the prompts and information sources provided, they might be deployed at scale to stifle or crowd out minority or dissenting public voices4. Footnote 3: Not only in relation to textual abilities: public opinion has so far been surprised by the dissemination of hyperrealist portraits of public personas made with generative AI tools, e.g., Pope Francis [22] wearing fashion coats or Donald Trump getting arrested [23].
Afterward, part of the public was taken aback to see how a professional photographer, Boris Eldagsen, could even win an international award with an AI-produced image [24], or, even worse, how famous painters such as Edward Hopper were displayed in Google’s search engine alongside AI-generated imitations of their works [25]. Footnote 4: On this note, a debate should be held on how appropriate it is to deploy generative AI to represent social distress and identities, such as public demonstrations [22, 23], or on companies resorting to generative AI tools that claim to promote “_diversity_” through advertising featuring fake fashion models [22], while in reality performing a subtle operation of ethics-washing - failing to hire and remunerate underrepresented individuals while still leveraging their image at no cost. With these considerations in mind, we should now approach the paper by Bender et al. [15] as a starting point of a wider debate, encompassing not only the capabilities of LLMs but also the governance implications and the social communities impacted, ultimately pertaining to the value of our shared SC [52]. Put in simple terms, their discourse should not be considered merely a matter of academic disquisition over the semantic handling of human language, but rather a pointed attempt to scrutinize how these generative tools are associated with a narrative about AI that serves those who possess the means and resources to develop such models and to capture economic value and competitive advantage - not only directly from the models, but also around them [12]. Through this lens, two key interpretive perspectives can be discerned in this debate. The first, immediate perspective mesmerizes the public by proclaiming these LLMs as showing "_sparks of Artificial General Intelligence_ (AGI)" [53], implying that these models display initial prototypes of human-like cognitive intelligence5. Such a view captivates the public imagination and fuels, at best, a techno-optimistic narrative and, at worst, technological determinism, leading public opinion to feel that humanity is doomed by the advent of some unavoidable and imminent superior AI [52]. The second perspective, however, is far more sobering and less sensationalist, unpacking a far more structural and intricate argument concerning the ecology of AI development, commercialization, and the possession of SC in the form of know-how for gathering and maintaining increasingly sophisticated data and AI models [7]. When the conversation revolves solely around the risks inherent in the models, it inadvertently diminishes the role of their developers. As Bender et al. emphasized, their research served as a warning bell, cautioning against a development trajectory of AI solutions promising extraordinary capabilities without due scrutiny [15, 58]. Footnote 5: Yet, the research from Bubeck et al. was released [as of May 2023] by a team of Microsoft and OpenAI researchers without peer review, relying above all on controversial definitions of human intelligence as a benchmark [22]. In relation to the AGI narrative, Giada Pistilli, principal ethicist at HuggingFace and a contributor to the LLM BLOOM [54], stated in May 2022, in a Twitter thread, that she would no longer engage in public discussions framed around AGI [22]. This is because the framing of that public debate had proved detrimental to addressing the real harms of LLMs, a point she analyzed in depth in a research study published the same month [55].
This position resonates with an increasing number of scholars who are cautious about adopting, or even engaging with, these terms in public discourse; similarly - as in the current paper - concerns over unnecessary anthropomorphism [56] of LLMs are now being raised about deploying terms pertaining to human cognition, such as “_hallucination_”, to describe nonfactual information provided by LLMs [22]. From this perspective, some work is finally moving towards explicit design choices that prevent anthropomorphism in conversational systems [57]. The core issue resides in the polarization of a debate where, on the one hand, one faction predominantly comprises stakeholders - such as proprietors of AI solutions - who might derive benefits from capturing public attention around these models. Their strategic maneuvers, despite genuine fears over the downsides of their products, might also be geared towards maintaining the undivided attention of the global audience, intending to foster an environment conducive to the promotion and consumption of their AI-based creations. Concurrently, another group emerges posing stark opposition by unearthing the contentious aspects of such models. This group, though widely heterogeneous, contends that these AI solutions are not inherently superior or advantageous and, instead, might cause more harm than good due to their pronounced socio-technical ramifications and the plausible monopoly [9] they can create in the AI innovation landscape6. Indeed, the year 2023 witnessed an unprecedented surge in the release of LLM applications to the public by large corporations. These developments were characterized by increasingly shortened time-to-market durations, intensifying the potential risks and implications of these systems7. This speed, while demonstrating their technological capabilities, also exposed gaps in their ethical governance. Despite their "_demo_" status, instances of these LLMs causing harm or harassment to users highlighted the need for careful deployment strategies and comprehensive product testing and feedback, as well as for structural inquiry into the influence exerted over the AI development agenda by proprietary solutions. Footnote 6: Interestingly, the fervor and dynamism of this debate have garnered widespread attention. With the current momentum, an increasing number of scholars and civil rights associations are echoing the apprehensions about the potential harms LLMs can inflict, taking actions such as open letters calling for the regulation of LLMs. Of this group, a segment of the public is lending credence to the “_longtermism_” outlook—holding onto the belief that AI might be a blessing for all humanity in the future, but only if it is perceived as an existential threat today [Forbes2022],[Bloomberg2023]. This viewpoint, however, does not advocate for immediate and tangible action against present structural issues, such as the exploitation of underrepresented communities involved in annotating and moderating LLMs. In response to these systemic issues, the affected communities have begun showcasing innovative grassroots initiatives. Karen Hao’s investigation into AI colonialism [MIT, TechRev2022] and the protests staged by African AI workers seeking to unionize in Nairobi illuminate these ongoing efforts [Time2023]. Meanwhile, it is noteworthy that AI pioneers, like Geoffrey Hinton, have been vocal about the necessity for increased regulation but have not explicitly extended support to these communities or to other concerned academics, such as Bender, Gebru, and Mitchell [TheGuardian2022].
Similarly, owners of AI technologies, like OpenAI's CEO Samuel Altman, have sought regulatory measures before the US Senate [TheNYT2023], while other industry leaders, such as Microsoft Chief Economist Michael Schwarz [ArsTechnica2023] and former Google CEO Eric Schmidt [Fortune2023], have either invited caution over the perceived risks of generative AI until incidents of "_meaningful harm_" occur or advocated for self-regulation in the industry while criticizing governments for their alleged lack of expertise to regulate technology effectively. The narrative spun by these AI proprietors oscillates between demanding no regulation and advocating for a different kind of regulation. Such a seemingly contradictory stance might be interpreted as a strategic maneuver to hold investor attention captive while cleverly deflecting competitive threats in the AI arena [Insider2023]. Footnote 7: The rush to launch these applications often eclipsed necessary precautions, resulting in technology releases without sufficient safeguards. This haste raises concerns about corporate decision-making and leaves the public exposed to unanticipated AI-related risks, such as LLM chatbots harassing users, recommending self-harm, or nudging minors into socially irresponsible behaviors [Insider2023b, WashingtonPost2023, Time2023]. Such unforeseen detrimental consequences serve as stark reminders of the need to couple AI development with comprehensive evaluation processes that prioritize societal well-being over speed and profit. Navigating this debate, one must remain cognizant of the intricate dynamics at play and question who ultimately benefits from these narratives. This is to ensure that the discourse around AI and its impact on our collective SC remains grounded in empirical realities and is sensitive to the broader socio-economic implications. ## 5 Open Source and Regulation for LLMs Let's momentarily pause and look beyond the current maelstrom of the ongoing debate on LLMs. Taking a step back, we find ourselves back at the birth of the internet era, deeply influenced by the late 20th century's internet narratives. This was a time ripe with the promise of an information revolution [59]. The birth of the open-source paradigm during this time served as a catalyst for this revolution. By providing a universal platform accessible to anyone with an internet connection, it was an embodiment of the democratic ethos of these emerging digital utopias. The open-source movement, anchored in collaboration, transparency, and accessibility, has spurred an incredible acceleration in technological evolution [60]. This movement's transformative impact is especially palpable in the AI field, cultivating a fertile ecosystem ripe for progress and innovation. Emerging against this backdrop, LLMs owe much of their rapid development to open-source AI frameworks like TensorFlow and PyTorch as well as the Transformer architecture [61, 62, 21]. Such open-source tools have made it feasible for researchers, developers, and organizations across the globe to access, modify, and contribute to a shared body of knowledge and codebase. This democratization of AI technologies, however, is a double-edged sword; while it empowers innovation and progress, it simultaneously amplifies challenges related to misuse, ethical implications, and regulatory requirements. The diffusion of generative AI technologies, such as LLMs, via open-source platforms accentuates the dual-use risk. LLMs can be applied for both beneficial and harmful purposes.
Even while cognizant of its risks, once an AI model is made openly available, it becomes harder to track, contain, or retract, given the scale, speed, and accessibility facilitated by open-source platforms. If instead an LLM is proprietary and undisclosed to the public, such as GPT-4 [17], risks might arise from the inability to scrutinize its design phase and data provenance, or to oversee its deployment. From this, it comes as no surprise that regulating generative AI technologies is a formidable challenge. The pace at which AI evolves is often unmatched by the rate at which traditional regulatory frameworks adapt8. Crafting effective regulations requires a delicate balancing act: on one side, for disclosed models, it entails managing the risks of misuse while preserving the democratic ethos of open source, without stifling innovation; on the other, for proprietary models, it entails preserving market advantages while still allowing impartial auditing measures to verify that models comply with regulatory standards and align with societal values. One potential pathway forward involves revisiting our relationship with open-source practices in the context of LLMs. This rethinking requires a comprehensive, integrative approach that respects the principles of open source while recognizing and addressing the risks posed by AI technologies. Strategies could include more accountable deployment practices, having deployers bear a greater responsibility for their creations, and revised legal frameworks that adapt to the specific challenges of LLMs. In terms of soft power, this could be complemented by industry-wide certifications and licensing9 to enhance accountability over the design and development of those AI systems. In terms of hard power, AI governance measures should instead rest on clear legislative guardrails, such as regulatory sandboxes, risk assessments, and auditing practices encompassing the development and deployment of LLMs. Within this scope, the major regulatory effort in the current global landscape is being led by the European Union (EU), though it is not exempt from potential legislative weaknesses that might not always efficiently mitigate LLM risks10. Footnote 8: A lively example of this challenge can be found in the EU Commission's efforts back in April 2023 to introduce amendments targeting generative AI, ahead of the final parliamentary vote on May 11th on the EU AI Act draft [EuorapIPress2023, Euractiv2023]. ## 6 The Deontological Tipping Point: Navigating the Information Surge Yet, calling for ethical virtue and regulation might not be enough to shelter our epistemic filters in this unprecedented storm of AI-generated information. While this surge of information has democratized access to knowledge and fueled progress in myriad fields, it also has the potential to create a state of social epistemic bewilderment. It is against this backdrop that it can be argued that we have reached a _deontological tipping point_--an inflection point where the relentless acceleration and proliferation of information create the epistemic conditions for its detrimental effects to scale up. The deontological tipping point suggested here is the juncture at which our moral obligation to assist the open dissemination of certain AI narratives and solutions may come into conflict with our duty to prevent harm. 
Within the context of AI, and particularly in relation to LLMs, this tipping point is precipitated by the realization that unfettered access to information and open-source practices, while fostering innovation, can also amplify risks, given how scalable and accessible these models are, independently of the liability of major AI proprietors or of individual developers and deployers. This democratization and explosion of information blur the lines between reality and artificial constructs, echoing Baudrillard's notion of "_hyperreality_". The hyperreality conceived by Baudrillard--an environment where simulacra blur the boundaries between the real and the artificial, and virtual identities deontologically supersede their real referents--becomes an eerily accurate premonition of a possible AI-saturated infosphere [65]. As AI-generated content swells, we confront the dual challenge of strengthening our cognitive ecology to preserve our SC, whilst upholding the open-source principles that have traditionally sparked innovation [66]. Despite being awash with information, we are precariously perched on the edge of what James Bridle refers to as a "_New Dark Age_," a paradox where information in our current technological ecosystem obscures knowledge instead of revealing it [8]. We must navigate this deontological tipping point, resisting unchallenged acceptance of an AI-driven information ecosystem. Recognizing and navigating this tipping point aligns with Floridi's information ethics framework, which underscores the moral implications of creating, managing, and utilizing information. As remarked before, Floridi stresses that the quality of our infosphere, or the environment in which information is created, shared, and consumed, profoundly impacts our lives and our moral decisions [2, 11]. To navigate this newly complex infosphere, we must engage with a multi-faceted strategy. First, it necessitates moving beyond merely calling for AI systems to adhere to ethical guidelines, and instead establishing a culture of accountability, transparency, and shared responsibility wherever AI proprietors are able to influence the AI agenda and public opinion [9]. This shift in approach should involve a critical reexamination of why, within our current informational ecology, certain narratives are dominant and universally accepted, and who benefits from this status quo. Such societal introspection might prompt a critical reconsideration of the merits of confining the AI debate and our notion of innovation to a single range of solutions. Furthermore, we argue that, while public online information sources have proven to be fertile ground for the proliferation of AI technologies, today the wealth of SC at stake might be threatened by a range of epistemic risks that we outline using Floridi's taxonomy [3]:
* **Loss of SC**: This occurs when there is an oversimplification of complex semantic ideas or when an LLM relies on biased or erroneous explanatory models based on incomplete or distorted input data, resulting in flawed argumentation [15, 51]. In this case, the value of the semantic content is reduced due to the propagation of inaccurate or misleading information, akin to the spread of propaganda, fake news, or "_alternative facts_" [50]. Protection against this type of risk necessitates rigorous data curation (such as data provenance and lineage) and model validation protocols to ensure LLMs generate accurate and reliable information. 
* **Unproductiveness and Underuse**: When LLMs are used to replicate semantic content without adding value or facilitating a deeper understanding, it can lead to the stagnation of SC. This can happen when users rely too heavily on LLMs for information generation and consumption while neglecting to actively participate in knowledge sharing and debate. At its core, this underuse of SC might also stem from the LLMs' architecture, which can only draw on data available in accessible online repositories, without considering the '_long tail_' of secondary, related contributions, as well as different perspectives, on a given topic. To guard against this risk, it is essential first to inquire into the role of LLMs as epistemic agents, as well as to foster a culture of critical thinking and active engagement in the discourse, preventing the 'mummification' of SC [66].
* **Misuse**: LLMs, if not properly calibrated or if deployed by malicious actors, can generate content that disrespects, misunderstands, or illegitimately appropriates information [36, 51]. This misuse, or information expropriation, can lead to the loss of SC. Mitigating this risk requires careful design and tracking of their deployment, with due respect for cultural nuances and contexts. In terms of data, this might also be achieved by enabling underrepresented communities not just to moderate, but to actively participate in shaping data annotation policies, in order to mitigate potential biases [31]. In terms of models, intellectual property, trademarks, and measures to ensure accountability should be established to track those responsible for the development and deployment of generative solutions, also enforced by hard law, such as the forthcoming EU AI Act or the Liability Directive [67, 63].
* **Depreciation**: The value of SC can depreciate over time, particularly when new LLM-generated information floods the infosphere and obscures or distorts earlier knowledge. Future LLMs, being trained or fine-tuned in such a stagnating environment, might see increasingly diminished returns in their performance. This could happen if they are fed data that are either synthetically produced or, even worse, produced by a shrinking online community of users that lacks incentives to share and engage in knowledge creation and maintenance, given the ready accessibility of information through LLMs. Also connected to underuse, the concept of _Model Dementia_ has recently been coined [68] to signal how future LLM training datasets might lead to diminished returns in terms of content richness, understood as the forgetting of underlying data distributions. Building on this assumption, our collective reliance on language models as repositories of information might entail a shift in our ethical responsibilities, as we transfer the locus of our communal knowledge from the outward sphere of human discourse to the inward representations within these models. This shift also needs to be put in context: two additional factors play a key role and are inversely proportional to each other, namely the availability of information and attention. With the sheer amount of data being produced by LLMs, we might approach new orders of information magnitude. This overabundance of information overshadows and possibly distorts pre-existing knowledge, causing the depreciation of SC. It is becoming progressively more demanding to discern useful information or valuable knowledge in the face of this onslaught, which in turn undermines the value derived from it. 
In this new era defined by the _Attention Economy_ [69], where human attention is a scarce and coveted resource, the pressure for LLMs to be deployed within work or educational tasks, to reach various audiences, and to produce engaging content can inadvertently contribute to this range of risks. As these models strive to produce information that appears coherent and well-developed - including sensationalist AI-generated images or news about public figures, sociopolitical events, and so on - the focus might shift from providing comprehensive and nuanced insights to offering quick, often shallow pieces of information. This shift could potentially "flatten" the richness of discourse, leading to apparently more engaging, yet less insightful information being circulated. At the core of this acceleration, the role of epistemic filters becomes paramount. These are the mechanisms that people use to sort and interpret the information they encounter. They help us decide what counts as evidence for forming a belief or what challenges it enough to lead to belief revision. There are different kinds of filters, among which the two most important ones are filters for omission and discredit [70]. Filters for Omission allow individuals or groups to ignore or reject information that does not align with their current beliefs or values. Filters for Discredit, instead, lead individuals or groups to dismiss or discredit opposing viewpoints or evidence. This can involve casting doubt on the source of the information, its credibility, or its relevance. Discredit filters are particularly active in polarized debates where individuals or groups have strong beliefs that they feel are being threatened. Social media platforms, through their algorithmic selection of content, might inadvertently strengthen these filters. Thus, when we encounter new information, we can actively engage in accepting or rejecting it; yet the social aspects of communication also play a role in this selection. In this regard, the concept of _Epistemic Fitness_ refers to the effectiveness of an individual's or group's ability to process evidence and revise their beliefs accordingly. It involves the ability to gather, evaluate, and use information to form accurate beliefs about the world. Maximizing epistemic fitness consists in enhancing one's epistemic filters to improve the quality of information intake and the efficiency of belief formation and revision. Through this lens, we can now reevaluate how AI narratives [71] about the capabilities of generative AI are spread by communities with particular interests and beliefs: while some might foresee economic revenue from instilling certain narratives, others might adopt a stance geared toward protecting human rights, highlighting, among other things, the nuances and challenges of human language. We have yet to tackle how to deal with future conversations in which LLMs could be deployed to reinforce existing viewpoints, possibly amplifying these filters if online users are led to believe that information spread by LLMs is factual and representative of a larger group of people than it is in reality. The call to action is thus twofold. On one hand, consumers of AI-generated content need to refine their individual epistemic filters to navigate this new information landscape effectively. This might entail questioning why certain narratives are spread and validated, and for what purposes. 
On the other hand, developers and proprietors of LLM solutions have an ethical responsibility to design systems that support, rather than undermine, the collective epistemic fitness of society. Deployers, similarly, should use these tools cognizant of the value of public SC, while also being subject to watermarking, licensing, and any other enforcement mechanism that establishes their accountability. Thus, to conclude, a cornerstone in our collective response to these risks is the amplification of AI literacy initiatives. Creating an informed citizenry that understands AI technologies, including their potential advantages and associated risks, enables individuals to engage in meaningful discussions and decision-making processes concerning their epistemic validity. Central to this endeavor is the proactive integration of ethical considerations. Ethical responsibility should not be a reactionary measure or an isolated response to negative outcomes (e.g., regulating only when meaningful harm occurs). Instead, it needs to be woven into the fabric of the AI design and deployment process. Such proactive ethical responsibility can serve as a safeguard in the development and utilization of AI technologies, disincentivizing ever-shrinking time-to-market agendas. However, this inquiry does not suggest a departure from open-source practices. Rather, it signals the need for a matured, conscientious version of open source, devoid of narratives and utopias of technological emancipation or determinism. One that is sober, cognizant of the social epistemic risks, and dedicated to enhancing public comprehension of AI technologies. ## 7 Conclusion This work attempts to evaluate the complex interplay between LLMs' potential for knowledge democratization and the sociotechnical challenges they present. Amid the accelerating proliferation of LLMs in 2023, the widespread narrative that frames them as precursors to AGI risks overshadowing important socio-economic implications, potentially facilitating an AI monopoly. It is vital, therefore, to question who benefits from these narratives and whether these beneficiaries' interests align with those of society at large. While acknowledging the lively nature of this debate, we have attempted to explore the delicate balance between the democratization of knowledge and the emergence of a deontological tipping point in our infosphere. This tipping point symbolizes a critical juncture where our commitment to open information dissemination may intersect, and potentially conflict, with our obligation to prevent harm. This dynamic has been exacerbated by the cognitive deluge driven by AI technologies, especially LLMs, leading to uncharted social epistemic challenges that stem from their sociotechnical risks. We have highlighted that the unchecked expansion and proliferation of AI-generated content, such as textual information from LLMs, while holding considerable promise, also pose significant risks. Aside from the engaging debate over their capacity to handle semantic information (i.e., "understanding"), we must not fail to commit to a broader inquiry into the ecosystem that fuels attention towards them, remaining cognizant of the array of risks that ultimately affect the value of our SC.
2307.16126
Non-Equilibrium Nature of Fracture Determines the Crack Paths
A high-fidelity neural network-based force field, NN-F$^{3}$, is developed to cover the strain states up to material failure and the non-equilibrium, intermediate nature of fracture. Simulations of fracture in 2D crystals using NN-F$^{3}$ reveal spatial complexities from lattice-scale kinks to sample-scale patterns. We find that the fracture resistance cannot be quantified by the energy densities of relaxed edges as in the literature. Instead, the fracture patterns, critical stress intensity factors at the kinks, and energy densities of edges in the intermediate, unrelaxed states offer reasonable measures for the fracture toughness and its anisotropy.
Pengjie Shi, Shizhe Feng, Zhiping Xu
2023-07-30T04:30:46Z
http://arxiv.org/abs/2307.16126v1
# Non-Equilibrium Nature of Fracture Determines the Crack Paths ###### Abstract A high-fidelity neural network-based force field, NN-F\({}^{3}\), is developed to cover the strain states up to material failure and the non-equilibrium, intermediate nature of fracture. Simulations of fracture in 2D crystals using NN-F\({}^{3}\) reveal spatial complexities from lattice-scale kinks to sample-scale patterns. We find that the fracture resistance cannot be quantified by the energy densities of relaxed edges as in the literature. Instead, the fracture patterns, critical stress intensity factors at the kinks, and energy densities of edges in the intermediate, unrelaxed states offer reasonable measures for the fracture toughness and its anisotropy. Fracture is a catastrophic process in nature and engineering, which leaves facets and kinks along the crack paths. In 2D crystals such as graphene and h-BN, different types of edges can be cleaved by fracture, including zigzag (Z), armchair (A), and mixed zigzag-armchair or chiral (C) edges (Fig. 1a). The relative stabilities of graphene edge structures were explored experimentally through the abundance of edges created by various techniques such as fracture [1], mechanical exfoliation [2], and irradiation [3]. Most observations show almost the same probabilities of zigzag and armchair edges [4; 5; 6], while a few of them report either zigzag or armchair direction is preferred over the other [2; 7; 8; 9]. These facts suggest that the stabilities of zigzag and armchair edges could be quite close, which contradicts theoretical predictions from first-principles calculations. Ground-state calculations based on the density functional theory (DFT) show that electronic and structural relaxation of the armchair edge of graphene significantly reduces its energy density, \(\gamma_{\rm A}\), which becomes lower than that of the zigzag edge, \(\gamma_{\rm Z}\)[10; 11; 12; 13; 14; 15; 16]. The disagreement between theory and experiment remains unsolved for more than a decade. The selection of crack paths during fracture is closely related to the relative stability of edges. Under the framework of fracture mechanics, the crack driving force can be measured by the energy release rate (ERR), \(G\), while the energy cost to activate the fracture is defined as the fracture toughness, \(G_{\rm c}\)[21]. The value of \(G_{\rm c}\) is difficult to determine by theory due to the non-equilibrium nature of fracture and thus commonly measured by experiments [21]. In theoretical studies, the fracture resistance is usually approximated by the surface or edge energy densities as \(G_{\rm c}=2\gamma\) (Fig. 1b) [18; 22; 23; 21]. The directional dependence of \(G_{\rm c}\left(\theta\right)\), which defines the relative stability of different edges, is expected to align with that of \(2\gamma\left(\theta\right)\)[1; 24]. By analyzing the crack path under specific loading conditions, the relative stability or the anisotropy of fracture can be deduced. The anisotropy in \(G_{\rm c}\left(\theta\right)\) and \(\gamma\left(\theta\right)\) of crystals with a honeycomb lattice such as graphene can be quantified by the ratios between their values at post-fracture zigzag (Z) and armchair (A) edges, \(A_{G}=G_{\rm Z}/G_{\rm A}\), and \(A_{\gamma}=\gamma_{\rm Z}/\gamma_{\rm A}\)[14], respectively [4; 5; 6]. 
Recently, direct tensile tests of monolayer graphene and peeling tests of highly-oriented pyrolytic graphite (HOPG), although unable to resolve the atomic-scale edge structures, conclude weak anisotropies (\(A_{G}=1.06\)[1] and \(0.971\)[2]) from the overall orientation of cracks. Energy densities of edges cleaved by fracture cannot be directly measured in experiments, and the use of theoretically calculated \(2\gamma(\theta)\) as the fracture toughness remains questionable. In fact, experimentally measured fracture toughness is usually much higher than the value of \(2\gamma\) even for brittle crystals where plastic dissipation is absent [18; 23; 25; 26] (Fig. 1b). Large-scale molecular dynamics (MD) simulations may help address the issue if provided with force fields of high accuracy and low cost. Empirical force fields (FFs) reported in the literature cannot capture the non-equilibrium nature and high, non-uniform lattice distortion at the crack front [16; 18; 27; 19; 20; 28; 14; 29]. The values of \(A_{\gamma}\) calculated using Stillinger-Weber or Tersoff potentials are the same as the bond-cutting estimation, \(\sqrt{3}/2\), and do not correctly capture the bonding characteristics of materials [28; 29]. By including the chemistry of interatomic bonding, the adaptive intermolecular reactive empirical bond order (AIREBO) potential predicts \(A_{\gamma}<1\)[14; 16; 18], while the reactive force field (ReaxFF) yields opposite results, \(A_{\gamma}>1\)[9; 19; 20] (Fig. 1c). Compared with the experimental measurements [1; 2], DFT calculations predict a relatively strong anisotropy with \(A_{\gamma}>1.1\), where electronic and structural relaxation of the edges are taken into account [10; 11; 12; 13; 14; 15; 16][17]. The DFT predictions are also quantitatively different from the AIREBO and ReaxFF results. Recently, the implementation of neural network-based force fields [30] has shown the capability to resolve the accuracy-cost dilemma and has led to significant progress in several fields [31; 32; 33; 34]. However, the lack of a reasonable description of the non-equilibrium nature and of an exploration of the full space of strain states leaves the atomistic approach to fracture still immature. Here we develop a neural network-based force field for fracture (NN-F\({}^{3}\)) for 2D crystals including graphene and h-BN based on first-principles calculations and an active-learning framework [30]. The tensorial nature of strain states and the undercoordination nature of cleaved edges [14] demand a large training set of DFT data and are addressed by an active-learning workflow [35]. Our training sets for NN-F\({}^{3}\) include structures with strained lattices (the uniaxial strain in the range of \(0-0.25\) along different lattice directions), cleaved edges (zigzag and armchair segments as well as the kinks between them), and cracks (\(203,554\) datasets in total).
Figure 1: (a) A crack-containing graphene specimen under force loading [8]. (b) Fracture toughness \(G_{\rm c}\) of cracks along the zigzag and armchair motifs, \(G_{\rm c}=K_{\rm I}^{2}/E\)[17]. The oscillation in \(G_{\rm c}\) along with the advancing cracks signals the lattice-trapping events. (c) Theoretical predictions of \(A_{\gamma}\)[16; 18; 20; 19; 21; 22], and experimental measurements of \(A_{G}\)[1; 2]. (d) Uniaxial stress-strain relations measured along the zigzag and armchair directions. 
The Deep Potential Smooth Edition (DeepPot-SE) model [36; 37] is used to train NN-F\({}^{3}\), the performance of which is validated by reporting mean absolute errors (MAEs) of the energies per atom, the edge energy densities, and the interatomic forces below \(2\,\mathrm{meV}\), \(2.2\,\mathrm{meV}\mathrm{/\AA}\) and \(43\,\mathrm{meV}\mathrm{/\AA}\), respectively. The relative error (RE) in the stress-strain relations is under \(2\%\) (Fig. 1d). The workflow thus assures an accurate description of the crack-tip processes [17]. _Fracture Anisotropy of Graphene-_ The fracture of graphene is explored by quasi-static uniaxial tension using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [17; 38]. In order to host relatively long cracks, wide samples (\(W\approx 50\,\mathrm{nm}\)) are constructed (Fig. 2). One atom at the left edge is removed to initialize the crack. Periodic boundary conditions (PBCs) are enforced along the tensile direction. The span \(L\) is in the range of \(4-8\) nm (and \(15-20\) nm to see the size dependence) to accommodate different lattice orientations (\(\theta_{\mathrm{Z}}\in[0^{\circ},30^{\circ}]\), measured from the zigzag motif, Fig. 2a, b). MD simulation results show that the cracks are straight at the sample scale (Figs. 3) [17]. Their overall orientations are denoted by \(\theta_{\mathrm{Edge}}\) (measured from the zigzag motif, see Fig. 2a). However, the cracks may deflect at the lattice scale, leaving kinks between the zigzag and armchair segments behind (Fig. 2c). The relations between \(\theta_{\mathrm{Edge}}\) and \(\theta_{\mathrm{Z}}\) are summarized in Fig. 2b, which can be classified into three regimes. For \(\theta_{\mathrm{Z}}\in[0^{\circ},10^{\circ}]\) or \([27^{\circ},30^{\circ}]\), the crack advances along the zigzag (\(\theta_{\mathrm{Edge}}=0^{\circ}\)) or armchair (\(\theta_{\mathrm{Edge}}=30^{\circ}\)) direction, respectively. For \(\theta_{\mathrm{Z}}\in[10^{\circ},27^{\circ}]\), the crack advances between them (\(\theta_{\mathrm{Edge}}\in(0^{\circ},30^{\circ})\)). Cleavage of zigzag edges dominates if the loading direction is uniformly sampled, which is attributed to the fact of \(G_{\mathrm{Z}}<G_{\mathrm{A}}\)[17]. This finding conforms with the observations in the peeling experiments of HOPG where the polycrystalline texture is randomly oriented [2]. However, the energy densities of relaxed edge \(\gamma(\theta)\) obtained from DFT calculations display an opposite trend of \(\gamma_{\mathrm{Z}}>\gamma_{\mathrm{A}}\)[10; 11; 12; 13; 14; 15; 16; 17]. This inconsistency indicates that \(\gamma(\theta)\) fails to correctly characterize the anisotropy in fracture resistance. The crack driving force under uniaxial tensile stress \(\sigma_{y}\) is \(G\left(\theta_{\mathrm{Edge}}\right)\sim\cos^{2}\left(\theta_{\mathrm{Z}}- \theta_{\mathrm{Edge}}\right)\sigma_{y}^{2}\)[17]. Following the criterion of maximum ERR (MERR), the crack will advance in the direction with \(G\left(\theta_{\mathrm{Edge}}\right)\geq G_{\mathrm{c}}\left(\theta_{ \mathrm{Edge}}\right)\). In the honeycomb lattice of graphene, the cleaved edges consist of zigzag and armchair segments, and the value of \(G_{\mathrm{c}}(\theta_{\mathrm{Edge}})\) can be estimated as the average value of \(G_{\mathrm{A}}\) and \(G_{\mathrm{Z}}\) weighted by their lengths [2; 14], that is \[G_{\mathrm{c}}\left(\theta_{\mathrm{Edge}}\right)=2G_{\mathrm{A}}\left[\sin \left(\theta_{\mathrm{Edge}}\right)+A_{G}\sin\left(30^{\circ}-\theta_{\mathrm{ Edge}}\right)\right]. 
\tag{1}\] This result presumes that the formation and interaction energies of lattice kinks are negligible in comparison with the edge energies [14; 39]. The direction of crack propagation, \(\theta_{\mathrm{Edge}}\), can thus be obtained from the lattice orientation, \(\theta_{\mathrm{Z}}\), by finding the minimum of \(\sigma_{y}^{2}\) that satisfies \(G=G_{\mathrm{c}}\), that is \[\theta_{\mathrm{Edge}}=\arg\min\sigma_{y}^{2}=\arg\min\frac{G_{\mathrm{c}}\left(\theta_{\mathrm{Edge}}\right)}{\cos^{2}\left(\theta_{\mathrm{Z}}-\theta_{\mathrm{Edge}}\right)}. \tag{2}\] The predictions using \(A_{G}=0.96\) and \(0.93\) fit the simulation results for \(\theta_{\mathrm{Edge}}\) smaller and larger than the critical value of \(\theta_{\mathrm{c}}=19.11^{\circ}\) (Fig. 2b), respectively. At \(\theta_{\mathrm{Edge}}=19.11^{\circ}\), the numbers of zigzag and armchair segments are the same along the edge (Fig. 2c). The smaller values of \(A_{G}\) at \(\theta_{\mathrm{Edge}}>19.11^{\circ}\) may be attributed to the asymmetry between the A and B sites at the armchair edges (Fig. 2d), which can elevate the fracture toughness and promote deflection. Similar effects of the edge asymmetry on crack deflection and toughening were also observed in h-BN [17; 23] and WS\({}_{2}\)[27].
Figure 2: (a) The simulation setup of fracture tests. (b) The relation between \(\theta_{\mathrm{Z}}\) and \(\theta_{\mathrm{Edge}}\) obtained from the MD simulations for graphene using NN-F\({}^{3}\) and theoretical predictions using Eq. 2. (c) The atomic-level structure of edges with \(\theta_{\mathrm{Edge}}=19.11^{\circ}\). (d) The asymmetry between A- and B-site atoms along armchair edges at the crack tip.
_Origin of Edge Kinks-_ The high-fidelity NN-F\({}^{3}\) allows us to explore the edge structures at the atomic level. Zigzag and armchair segments as well as lattice kinks connecting them can be resolved at length scales where fracture mechanics can be applied for analysis. Large-scale MD simulations excluding the size effects show periodic crack patterns [17], and highlight the advantages of NN-F\({}^{3}\) in offering high accuracy at the first-principles level and a low computational cost that allows direct simulations up to the experimental scale [1; 17]. The criterion of MERR [1; 24; 40] suggests that the direction of a propagating crack defined in the local coordinate system (Fig. 3c and 3d) is \[\alpha=\arg\max\frac{G\left(\alpha\right)}{G_{\mathrm{c}}\left(\alpha\right)}, \tag{3}\] where \(G(\alpha)\) is evaluated by the SIFs in the tensile and shear modes (\(K_{\mathrm{I}}\) and \(K_{\mathrm{II}}\), respectively) as the out-of-plane displacement is ignored [20]. The values of \(K_{\mathrm{I}}\) and \(K_{\mathrm{II}}\) can be determined by fitting the crack-tip displacement field with the Williams power expansion [17; 41]. For loading conditions with crack directions not aligning with the zigzag or armchair motifs, we find that the presence of a mode-II feature could deflect the mode-I crack [1; 21]. The effect can be measured by the ratio \(K_{\mathrm{II}}/K_{\mathrm{I}}\) extracted from MD simulations. Two representative examples are shown in Figs. 3c and 3d, where the cleaved edges are dominated by the zigzag and armchair segments, respectively. The value of \(K_{\mathrm{II}}/K_{\mathrm{I}}\) oscillates as the crack propagates (Figs. 3a and 3b), indicating that the deflection is activated as the ratio approaches the threshold values \(\left(K_{\mathrm{II}}/K_{\mathrm{I}}\right)_{\mathrm{c}}\). 
The threshold depends on the loading conditions and lattice orientations, and is higher for cracks advancing in the zigzag direction than for those along the armchair one. The asymmetry between the A and B sites of the armchair edge further breaks the symmetry (Fig. 2d) and yields two thresholds (Fig. 3b), which confirms the effect of edge asymmetry on \(A_{G}\) (Fig. 2b). To estimate the values of \(\left(K_{\mathrm{II}}/K_{\mathrm{I}}\right)_{\mathrm{c}}\) and their relations with \(A_{G}\), a dimensionless quantity \(\Delta\) is introduced based on Eq. 3 as [17] \[\alpha=\arg\max\left[\frac{G\left(\alpha\right)}{G_{\rm c}\left(\alpha\right)}\frac{2G_{\rm A}E}{K_{\rm I}^{2}}\right]=\arg\max\left[\Delta\left(\alpha,A_{G},\frac{K_{\rm II}}{K_{\rm I}}\right)\right]. \tag{4}\] The direction of cracks determined from \(A_{G}\) and \(K_{\rm II}/K_{\rm I}\) follows the armchair or zigzag motifs (\(\alpha=0^{\circ}\) or \(30^{\circ}\)) due to the discrete nature of lattices. The relations between \(K_{\rm II}/K_{\rm I}\) and \(\Delta\) in the armchair- and zigzag-dominated regimes with \(A_{G}=0.96\) are summarized in Figs. 3e and 3f, where the thresholds \(\left(K_{\rm II}/K_{\rm I}\right)_{\rm c}\) are identified as 0.092 and 0.18, respectively. Alternatively, the values of \(A_{G}\) can be obtained from \(\left(K_{\rm II}/K_{\rm I}\right)_{\rm c}\), which is directly determined by experiments or simulations (Fig. 3g and Table 1). The results show that the value of \(A_{G}\) does not match the anisotropy measured from the energies of relaxed edges in direct NN-F\({}^{3}\) or DFT calculations, \(A_{\gamma}=\gamma_{\rm Z}/\gamma_{\rm A}=1.113\) (Fig. 1c) [17].
Figure 3: The ratio of SIFs (\(K_{\mathrm{II}}/K_{\mathrm{I}}\)) along with the advancing crack at \(\theta_{\mathrm{Z}}=10.89^{\circ}\) (a) and \(\theta_{\mathrm{Z}}=25.87^{\circ}\) (b). (c and d) Cleaved edges with zigzag and armchair segments. The domain defined to calculate the SIFs is annotated [17]. (e and f) Theoretical predictions of the relations between \(\Delta\) and \(K_{\mathrm{II}}/K_{\mathrm{I}}\) from Eq. 4 using \(A_{G}=0.96\). (g) Theoretical predictions of the relation between \(A_{G}\) and \(\left(K_{\mathrm{II}}/K_{\mathrm{I}}\right)_{c}\) using Eq. 4.
_Energy Densities of Unrelaxed Edges-_ The mismatch between \(A_{G}\) and \(A_{\gamma}\) implies that the anisotropy in the edge energy density fails to capture the atomistic kinetics of fracture, which selects the crack path. Since the work of fracture should not depend on posterior edge relaxation processes after the event of cleavage, the energy densities of unrelaxed edges, \(2\Gamma(\theta)\), are calculated using NN-F\({}^{3}\) or DFT calculations and compared to \(2\gamma(\theta)\) for relaxed edges. The results, \(A_{\Gamma}=0.959<1\), suggest a weak anisotropy in the fracture toughness, agreeing well with the experimental evidence [1; 2] and the simulation results (Fig. 4a). The values of \(2\Gamma_{\rm Z}\) and \(2\Gamma_{\rm A}\) also conform qualitatively well with \(G_{\rm Z}\) and \(G_{\rm A}\), respectively, by ignoring the lattice-trapping effects (Fig. 1b). We investigate several measures of fracture energies [17], and conclude that the energy density of unrelaxed edges, \(A_{\Gamma}\), can be a good indicator of fracture resistance. The consistency between \(A_{G}\) and \(A_{\Gamma}\) indicates that \(2\Gamma\) characterizes the non-equilibrium nature of the fracture. 
Specifically, for \(\theta_{\rm Edge}<19.11^{\circ}\), the value \(A_{G}=0.959\) fitted from the \(\theta_{\rm Edge}-\theta_{\rm Z}\) relation matches well with \(A_{\Gamma}\) (Fig. 2b). For \(\theta_{\rm Edge}>19.11^{\circ}\), the fitting result of \(A_{G}=0.93\) is slightly smaller than \(A_{\Gamma}=0.959\), which is attributed to the nature of asymmetric fracture, where cracks advancing along the armchair motif prefer to deflect into the zigzag directions (Fig. 3b). The relations between \(\theta_{\rm Edge}\) and \(\theta_{\rm Z}\) summarized in Fig. 5 show that predictions assuming \(A_{G}=A_{\gamma}\) do not prefer zigzag edges, while the results using \(A_{G}=A_{\Gamma}\) confirm the experimental results [2]. We also find that the energy densities of unrelaxed edges are very close to the measured fracture toughness (\(G_{\rm c}\left(\theta\right)\approx 2\Gamma\left(\theta\right)\)) (Figs. 1b, 2b and 3), although the strain states at the crack tip are different from those in lattice decohesion [43]. _Conclusion-_ Using a high-fidelity neural network-based force field developed in this work, we find that the kinetics of fracture is largely determined by the intermediate, unrelaxed states of the crack tip. The energy density of relaxed edges widely used in the literature fails to offer a reasonable measure of fracture toughness. Instead, the \(\theta_{\rm Edge}-\theta_{\rm Z}\) relation, \(\left(K_{\rm II}/K_{\rm I}\right)_{\rm c}\), and \(\Gamma\left(\theta\right)\) offer reasonable measures of the fracture anisotropy, namely \(A_{G}=0.96\), \(A_{G}=0.935-0.966\), and \(A_{\Gamma}=0.959\) (for graphene), respectively. The first two measures can be obtained from experiments or simulations with atomic-level resolution, while the third can be considered a material parameter and determined by first-principles calculations. This work highlights the multiscale and non-equilibrium nature of fracture, and the theory and methodology developed for graphene are extended to other 2D crystals such as h-BN and MoS\({}_{2}\) (Figs. 4b-4e) [17]. This study was supported by the National Natural Science Foundation of China through grants 11825203, 11832010, 11921002, and 52090032. The computation was performed on the Explorer 100 cluster system of the Tsinghua National Laboratory for Information Science and Technology.
Figure 4: (a) Energy densities of relaxed (\(\gamma\)) and unrelaxed (\(\Gamma\)) graphene edges. TEM images of edges in 2D graphene (b), h-BN (c), and MoS\({}_{2}\) (d) crystals adapted from [8; 23; 42]. (e) Values of \(A_{\gamma}\) and \(A_{\Gamma}\) of graphene, h-BN and MoS\({}_{2}\) obtained from DFT calculations.
Figure 5: \(\theta_{\rm Edge}-\theta_{\rm Z}\) relations of graphene obtained by assuming \(A_{G}=A_{\gamma}\) (dash line) and \(A_{G}=A_{\Gamma}\) (solid line) [17]. The data points are measurements from the peeling experiments of HOPG [2].
\begin{table} \begin{tabular}{c c c|c} \(\theta_{\rm Z}\) & \# kinks & \(\left(K_{\rm II}/K_{\rm I}\right)_{\rm c}\) & \(A_{G}\) \\ \hline \(25.87^{\circ}\) & 8 & \(\left[0.080,0.100\right]\) & \(\left[0.940,0.966\right]\) \\ \(10.89^{\circ}\) & 13 & \(\left[0.205,0.210\right]\) & \(\left[0.935,0.940\right]\) \\ \(9.82^{\circ}\) & 2 & \(\left[0.195,0.200\right]\) & \(\left[0.945,0.950\right]\) \\ \end{tabular} \end{table} Table 1: The number of kinks in the sample with width \(W\approx 50\) nm, and the relation between \(\left(K_{\rm II}/K_{\rm I}\right)_{\rm c}\) and \(A_{G}\) under loading conditions defined by \(\theta_{\rm Z}\).
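To make the crack-path selection rule of Eqs. 1 and 2 concrete, the short sketch below evaluates the length-weighted toughness of Eq. 1 (with \(2G_{\rm A}\) normalized to 1) on a grid of candidate edge orientations and picks the minimizer of Eq. 2. It is only an illustrative reconstruction, not the authors' code: the grid resolution, the sampled loading angles, and the single value \(A_{G}=0.96\) are assumptions made here for demonstration, and the continuum minimization ignores lattice trapping.

```python
import numpy as np

def G_c(theta_edge_deg, A_G):
    # Eq. 1 with 2*G_A set to 1: length-weighted toughness of an edge made of
    # zigzag and armchair segments at overall orientation theta_edge.
    t = np.radians(theta_edge_deg)
    return np.sin(t) + A_G * np.sin(np.radians(30.0) - t)

def predicted_edge_angle(theta_Z_deg, A_G, n_grid=3001):
    # Eq. 2: the crack selects the orientation minimizing the critical applied
    # stress, i.e. G_c(theta_edge) / cos^2(theta_Z - theta_edge).
    theta_edge = np.linspace(0.0, 30.0, n_grid)
    cost = G_c(theta_edge, A_G) / np.cos(np.radians(theta_Z_deg - theta_edge)) ** 2
    return theta_edge[np.argmin(cost)]

if __name__ == "__main__":
    A_G = 0.96  # anisotropy ratio G_Z / G_A reported for theta_Edge below 19.11 deg
    for theta_Z in (0.0, 5.0, 10.89, 19.11, 25.87, 30.0):
        theta_edge = predicted_edge_angle(theta_Z, A_G)
        print(f"theta_Z = {theta_Z:6.2f} deg -> theta_Edge = {theta_edge:5.2f} deg")
```

Sweeping \(\theta_{\rm Z}\) from \(0^{\circ}\) to \(30^{\circ}\) with such a sketch should qualitatively reproduce the three regimes of Fig. 2b: zigzag-dominated, mixed, and armchair-dominated crack paths.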
2301.04874
Twistor fibers in hypersurfaces of the flag threefold
We study surfaces of bidegree (1,d) contained in the flag threefold in relation to the twistor projection. In particular, we focus on the number and the arrangement of twistor fibers contained in such surfaces. First, we prove that there is no irreducible surface of bidegree (1,d) containing d+2 twistor fibers in general position. On the other hand, given any collection of (d+1) twistor fibers satisfying a mild natural constraint, we prove the existence of a surface of bidegree (1,d) that contains them. We improve our results for d=2 or d=3, by removing all the generality hypotheses.
Amedeo Altavilla, Edoardo Ballico, Maria Chiara Brambilla
2023-01-12T08:39:42Z
http://arxiv.org/abs/2301.04874v2
# Twistor fibers in \((1,d)\)-surfaces of the flag threefold ###### Abstract. We study surfaces of bidegree \((1,d)\) contained in the flag threefold under the action of the twistor projection. First, we prove that there is no integral surface of bidegree \((1,d)\) containing \(d+2\) twistor fibers no three of which are collinear. Then, given any union of \(0\leq n\leq d+1\) twistor fibers, no three of which are collinear, we show that there is an integral \((1,d)\)-surface containing them and no other twistor fibers. The result is also true for \(d+2\) twistor fibers under additional suitable hypotheses. Later, we focus on surfaces of low bidegree and prove that, for any set of \(0\leq n\leq 3\) (resp. \(n=4\)) twistor fibers, there is a smooth (resp. integral) surface of bidegree \((1,2)\) containing them and no other twistor fiber. Finally, we prove that there is no integral \((1,d)\)-surface, for \(d=2,3\), containing \(d+3\) twistor fibers. Key words and phrases: flag threefold, twistor projection, twistor fiber, surfaces, bidegree 2010 Mathematics Subject Classification: Primary: 32L25, 14M15; Secondary: 14D21, 14J26 All the authors are partially supported by GNSAGA. The first named author is partially supported by the INdAM project 'Teoria delle funzioni ipercomplesse e applicazioni'. Introduction Let \(\mathbb{F}\) be a smooth cone in \(\mathbb{F}\). We denote by \(\mathcal{C}=\mathcal{C}(1)\) the set of smooth conics in \(\mathbb{F}\) and by \(\mathcal{C}(n)\), \(n\geq 2\), the set of \(n\) pairwise disjoint smooth conics. In an analogous way we define \(\mathcal{T}(1)=\mathcal{T}\subset\mathcal{C}\) as the set of twistor fibers and \(\mathcal{T}(n)\subset\mathcal{C}(n)\), \(n\geq 2\), as the set of \(n\) pairwise disjoint twistor fibers. We will see in Remark 2.5 that for any couple of different smooth conics, there is a unique bidegree \((1,0)\) curve \(L=\pi_{2}^{-1}(q_{2})\) and a unique bidegree \((0,1)\) curve \(R=\pi_{1}^{-1}(q_{1})\) such that \(L\) and \(R\) intersect both smooth conics. In the case of a couple of twistor fibers we also have \(R=j(L)\). We say that three or more smooth conics are _collinear_ if there is a \((1,0)\) curve \(L\) which intersects all of them. To be collinear, for three or more smooth conics, is a Zariski closed condition. To be more precise, in Definition 2.7, we define the set \(\mathcal{C}^{*}(n)\) which parametrizes all \(A\in\mathcal{C}(n)\) such that \(\#(L\cap A)\leq 2\) for all curves \(L\) of bidegree \((1,0)\). Clearly \(\mathcal{C}^{*}(1)=\mathcal{C}(1)\), and \(\mathcal{C}^{*}(2)=\mathcal{C}(2)\), while for \(n\geq 3\) the open set \(\mathcal{C}^{*}(n)\) is given by the set of \(n\) disjoint smooth conics such that no three of them are collinear. Moreover, we set \(\mathcal{T}^{*}(n):=\mathcal{T}(n)\cap\mathcal{C}^{*}(n)\). In Theorem 2.19 we characterize the elements \(A\in\mathcal{C}^{*}(d+1)\) as those which do not obstruct the linear system \(|\mathcal{I}_{A}(1,d)|\). We now summarize the main results of the paper. In Section 3, we study surfaces of bidegree \((1,d)\) containing a certain number of smooth conics or twistor fibers, and we prove the following two theorems. **Theorem 1.1**.: _For any \(d\in\mathbb{N}\) and \(A\in\mathcal{T}^{*}(d+2)\), there is no integral surface of bidegree \((1,d)\) containing \(A\)._ **Theorem 1.2**.: _Fix integers \(d\geq 1\) and \(0\leq n\leq d+2\). There is an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\) containing exactly \(n\) twistor fibers._ We also show, in Theorem 3.4, that the first result is sharp. Indeed, for any \(A\in\mathcal{T}^{*}(n)\), with \(0\leq n\leq d+1\), we are able to find an integral surface of bidegree \((1,d)\) containing \(A\) and no other twistor fibers. This last issue requires some effort and the proof is divided into several particular cases. Theorem 1.2 is a consequence of Theorem 3.4 and Theorem 3.8. More precisely, in Theorem 3.4, for \(0\leq n\leq d+1\), we prove that, fixed any union \(A\) of \(0\leq n\leq d+1\) twistor fibers no three of which are collinear, there is an integral \((1,d)\)-surface containing \(A\) and no other twistor fibers. The extremal case \(n=d+2\) is considered in Theorem 3.8, where we prove that given \(d+2\) general collinear twistor fibers there is an integral surface of bidegree \((1,d)\) containing them. In Section 4, we focus on surfaces of bidegree \((1,2)\) and \((1,3)\). The main results are summarized by the following statements: **Theorem 1.3**.: _Fix \(0\leq n\leq 3\). There is a smooth \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing exactly \(n\) twistor fibers. Moreover, there exists a bidegree \((1,2)\) integral surface containing exactly \(4\) twistor fibers._ **Theorem 1.4**.: _There is no integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing at least \(5\) twistor fibers._ **Theorem 1.5**.: _There is no integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,3)|\) containing at least \(6\) twistor fibers._ The first existence result (Theorem 1.3) follows from Theorem 4.6, for \(0\leq n\leq 3\), and Theorem 4.10, for the case \(n=4\). In the extremal case \(n=4\), we will also show that the surfaces are singular along a line. 
The two non-existence results (Theorems 1.4 and 1.5) are proved in the last Section 4.2. An essential tool is Lemma 4.11 which states that if a surface of bidegree \((1,d)\) contains \(d+3\) or more collinear twistor fibers, then this surface is reducible and one of its components is a surface of bidegree \((1,1)\) containing \(4\) of the prescribed twistor fibers. We conclude here with a comparison with the other (smooth) algebraic twistor space of a Riemannian \(4\)-manifold, which is the complex projective space. This is the twistor space of the standard \(4\)-sphere [11]. In this case, surfaces of degree \(2\) and \(3\) were studied in some details. In particular, analogously to our case of surfaces of bidegree \((1,1)\), surfaces of degree \(2\) in \(\mathbb{P}^{3}\) might contain \(0,1\) or \(2\) twistor fibers. If a surface of degree \(2\) contains more than \(2\) twistor fibers, then it contains infinitely many of them [13]. For degree \(3\) surfaces, such a maximum is realized for \(5\) twistor fibers [5] which is more than our maximum of \(4\) for surfaces of bidegree \((1,2)\). This difference could be explained by observing a certain unbalancedness of the case of "total degree" \(3\) in the flag threefold. On the other hand, this particular unbalancedness allows us to compute all the Betti numbers in the next section as well as the use of the geometry of the Hirzebruch surfaces. ## 2. Preliminaries and first results In this section, we collect some known results about algebraic curves and surfaces in the flag. Then, we give first results on the space of bidegree \((0,d)\) and \((1,d)\) surfaces containing a certain amounts of twistor fibers. In particular, we introduce the concept of _collinear_ smooth conics and give a topological characterization in terms of cohomology of certain ideal sheaves. For most of the known material about \(\mathbb{F}\), we refer mainly to [4] and to [3, Section 2]. However, we recall here some basic notion and results in order to be as more self-contained as possible. Let us consider the multi projective space \(\mathbb{P}^{2}\times\mathbb{P}^{2}\); an element \((p,\ell)\in\mathbb{P}^{2}\times\mathbb{P}^{2}\) will be a couple written in the following form \(p=[p_{0}:p_{1}:p_{2}]\), \(\ell=[\ell_{0}:\ell_{1}:\ell_{2}]^{\top}\), so that \(p\ell=p_{0}\ell_{0}+p_{1}\ell_{1}+p_{2}\ell_{2}\). Even if it is classically embedded in \(\mathbb{P}^{2}\times\mathbb{P}^{2\vee}\), we might see \(\mathbb{F}:=\{(p,\ell)\in\mathbb{P}^{2}\times\mathbb{P}^{2}\,|\,p\ell=0\}\) as a hypersurface of bidegree \((1,1)\) of \(\mathbb{P}^{2}\times\mathbb{P}^{2}\). We denote by \(\Pi_{1}\) and \(\Pi_{2}\) the two standard projections of \(\mathbb{P}^{2}\times\mathbb{P}^{2}\) and we will use small letters for their restrictions, i.e. \(\pi_{i}=\Pi_{i|\mathbb{F}}\), \(i=1,2\). Thus, the two natural projections define a natural notion of bidegree for algebraic surfaces in \(\mathbb{F}\). Moreover, for all \((a,b)\in\mathbb{Z}^{2}\) we have the following natural exact sequence \[0\to\mathcal{O}_{\mathbb{P}^{2}\times\mathbb{P}^{2}}(a-1,b-1)\to\mathcal{O}_{ \mathbb{P}^{2}\times\mathbb{P}^{2}}(a,b)\to\mathcal{O}_{\mathbb{F}}(a,b)\to 0, \tag{1}\] and, for any \((a,b)\in\mathbb{N}^{2}\), we get (see e.g. [4, Lemma 2.3]) \[h^{0}(\mathcal{O}_{\mathbb{F}}(a,b))=\frac{(a+1)(b+1)(a+b+2)}{2}\qquad\text{ and }\qquad h^{1}(\mathcal{O}_{\mathbb{F}}(a,b))=0. 
\tag{2}\] It will be useful to recall from [4, Proposition 3.11] the multiplication rules in the Chow ring: \[\mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(1,0) \cdot\mathcal{O}_{\mathbb{F}}(1,0)=0, \mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(0,1) \cdot\mathcal{O}_{\mathbb{F}}(1,0) =1,\] \[\mathcal{O}_{\mathbb{F}}(0,1)\cdot\mathcal{O}_{\mathbb{F}}(1,0) \cdot\mathcal{O}_{\mathbb{F}}(0,1) =1, \mathcal{O}_{\mathbb{F}}(0,1)\cdot\mathcal{O}_{\mathbb{F}}(0,1) \cdot\mathcal{O}_{\mathbb{F}}(0,1) =0. \tag{3}\] ### Curves in \(\mathbb{F}\) and smooth conics Let us recall a notion of bidegree for the family of algebraic curves in \(\mathbb{F}\) already given in [4, 3]. **Definition 2.1**.: Let \(C\subset\mathbb{F}\) be an integral algebraic curve. We define the bidegree of \(C\) as the couple of positive integer numbers \((d_{1},d_{2})\), where \(d_{i}=0\) if \(\pi_{i}(C)=\{x\}\), otherwise \(d_{i}=\deg(\pi_{i}(C))\deg(\pi_{i|C})\). If a curve \(D\) has irreducible components \(C_{1},\ldots,C_{s}\) then the bidegree of \(D\) is taken to be the sum of the bidegrees of \(C_{1},\ldots,C_{s}\). Recall from [3, Remark 2.4] that if a curve \(C\) is such that \(C\cdot\mathcal{O}_{\mathbb{F}}(1,0)=d_{1}\) and \(C\cdot\mathcal{O}_{\mathbb{F}}(0,1)=d_{2}\), then it has bidegree \((d_{1},d_{2})\). From the previous table of multiplication, we can easily derive the following formula. **Lemma 2.2**.: _For any choice of non-negative integers \(a,b,c,d\), the one-dimensional cycle \(\mathcal{O}_{\mathbb{F}}(a,b)\cdot\mathcal{O}_{\mathbb{F}}(c,d)\) has bidegree_ \[(ad+b(c+d),a(c+d)+bc).\] Proof.: We have \(\mathcal{O}_{\mathbb{F}}(a,b)\cdot\mathcal{O}_{\mathbb{F}}(c,d)=ac\mathcal{O }_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(1,0)+(ad+bc)\mathcal{O}_{ \mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(0,1)+bd\mathcal{O}_{\mathbb{F}} (0,1)\cdot\mathcal{O}_{\mathbb{F}}(0,1)\). Hence the thesis is easily obtained by recalling that \(\mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(1,0)\) (resp. \(\mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(0,1)\), resp. \(\mathcal{O}_{\mathbb{F}}(0,1)\cdot\mathcal{O}_{\mathbb{F}}(0,1)\)) is a one-dimensional cycle of bidegree \((0,1)\) (resp. bidegree \((1,1)\), resp. bidegree \((1,0)\)). **Remark 2.3**.: Notice that the fibers of \(\pi_{1}\) are algebraic curves of bidegree \((0,1)\), while those of \(\pi_{2}\) have bidegree \((1,0)\) (see, e.g. [4, Section 3]). Moreover, all bidegree \((0,1)\) curves can be seen as complete intersections between two different \((1,0)\) surfaces (and analogously for bidegree \((1,0)\) curves). Among all algebraic curves in \(\mathbb{F}\) we focus our attention on the family of bidegree \((1,1)\) curves. These are geometrically described in [4, Section 3.1] and are parameterized by \((q,m)\in\mathbb{P}^{2}\times\mathbb{P}^{2}\). In fact, as anticipated in the introduction, any of these curves can be written as \[L_{q,m}:=\{(p,\ell)\in\mathbb{F}\,|\,p\in m,\,\ell\ni q\}=\{(p,\ell)\in\mathbb{F }\,|\,q\ell=0,\,pm=0\}.\] There are two types of these curves: the smooth and irreducible ones (when \(qm\neq 0\)) and the union of a \((1,0)\) and of a \((0,1)\) intersecting at a point (when \(qm=0\), i.e. \((q,m)\in\mathbb{F}\)). In any case, each bidegree \((1,1)\) curve can be seen as the complete intersection of a surface of bidegree \((1,0)\) with one of bidegree \((0,1)\). 
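The dimension count (2) and the bidegree formula of Lemma 2.2 lend themselves to a quick numerical sanity check. The sketch below is illustrative only (it is not part of the paper): it recovers \(h^{0}(\mathcal{O}_{\mathbb{F}}(a,b))\) from the restriction sequence (1), using \(h^{0}(\mathcal{O}_{\mathbb{P}^{2}\times\mathbb{P}^{2}}(a,b))=\binom{a+2}{2}\binom{b+2}{2}\) and the vanishing of the relevant \(h^{1}\), compares it with the closed formula, and evaluates Lemma 2.2 on a sample pair of divisor classes.

```python
from math import comb

def h0_P2xP2(a, b):
    # h^0(O_{P^2 x P^2}(a, b)) = C(a+2, 2) * C(b+2, 2) for a, b >= 0, else 0
    return comb(a + 2, 2) * comb(b + 2, 2) if a >= 0 and b >= 0 else 0

def h0_F_from_sequence(a, b):
    # Sequence (1): 0 -> O(a-1, b-1) -> O(a, b) -> O_F(a, b) -> 0, together with
    # the vanishing of h^1(O_{P^2 x P^2}(a-1, b-1)), gives the difference below.
    return h0_P2xP2(a, b) - h0_P2xP2(a - 1, b - 1)

def h0_F_closed(a, b):
    # Closed formula (2)
    return (a + 1) * (b + 1) * (a + b + 2) // 2

def bidegree_of_intersection(a, b, c, d):
    # Lemma 2.2: bidegree of the one-dimensional cycle O_F(a, b) . O_F(c, d)
    return (a * d + b * (c + d), a * (c + d) + b * c)

if __name__ == "__main__":
    for a in range(5):
        for b in range(5):
            assert h0_F_from_sequence(a, b) == h0_F_closed(a, b)
    print("h^0(O_F(a, b)) matches formula (2) for 0 <= a, b <= 4")
    print(bidegree_of_intersection(1, 0, 0, 1))  # -> (1, 1)
```

The last printed value, \((1,1)\), is consistent with the fact that a \((1,0)\) surface and a \((0,1)\) surface meet along a bidegree \((1,1)\) curve, i.e., along one of the conics discussed next.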
As already mentioned in the introduction, the \(4\)-dimensional family of smooth irreducible \((1,1)\) curves will be denoted by \(\mathcal{C}\). The elements of \(\mathcal{C}\) will be called _smooth conics_. **Remark 2.4**.: From the very definition of smooth conics it is clear that, for any \(C\in\mathcal{C}\), the image \(\pi_{i}(C)\) is a line in \(\mathbb{P}^{2}\), for \(i=1,2\). **Remark 2.5**.: Notice that for any two different elements \(L_{q,m},L_{q^{\prime},m^{\prime}}\in\mathcal{C}\) there exist a unique curve \(L\) of bidegree \((1,0)\) and a unique curve \(R\) of bidegree \((0,1)\) such that \(L\) and \(R\) meet both \(L_{q,m}\) and \(L_{q^{\prime},m^{\prime}}\) at a point (see Figure 1). From the analysis made in [4, Section 3.1] it is easy to see that \(L=\pi_{2}^{-1}(q\times q^{\prime})\) and \(R=\pi_{1}^{-1}(m\times m^{\prime})\), where \(\times\) stands for the standard (formal) cross product. Equivalently, setting \(A:=L_{q,m}\cup L_{q^{\prime},m^{\prime}}\), we have \(L=\pi_{2}^{-1}(\operatorname{Sing}(\pi_{2}(A)))\) and \(R=\pi_{1}^{-1}(\operatorname{Sing}(\pi_{1}(A)))\). We say that three disjoint smooth conics are _collinear_ if they intersect the same \((1,0)\) curve \(L\). The fibers of the twistor projection \(\pi:\mathbb{F}\to\mathbb{P}^{2}\) (see [4, Section 5]) form a subset \(\mathcal{T}\) of the family of conics \(\mathcal{C}\). The twistor fibers can also be characterized as the irreducible elements of \(\mathcal{C}\) that are fixed by the anti-holomorphic involution \(j:\mathbb{F}\to\mathbb{F}\) defined as \[j(p,\ell)=(\overline{\ell},\overline{p}).\] Since the twistor fibers are the \(j\)-invariant elements of \(\mathcal{C}\), a curve \(L_{q,m}\) belongs to \(\mathcal{T}\) if and only if \(m=\overline{q}\). Moreover, the set \(\mathcal{T}\) is Zariski dense in \(\mathcal{C}\) (see, e.g., [3, Section 4]). **Remark 2.6**.: If \(L\) is the curve of bidegree \((1,0)\) connecting two different twistor fibers (see Remark 2.5), then the curve of bidegree \((0,1)\) connecting them is exactly \(R=j(L)\). Hence if three twistor fibers are collinear, then they intersect the same \((1,0)\) curve and the same \((0,1)\) curve. Recall from the introduction that, for any positive integer \(n\), \(\mathcal{C}(n)\) denotes the \(4n\)-dimensional set of \(n\) pairwise disjoint elements of \(\mathcal{C}\) and \(\mathcal{T}(n)\) the set of \(n\) pairwise disjoint elements of \(\mathcal{T}\). As before, \(\mathcal{T}(n)\) is Zariski dense in \(\mathcal{C}(n)\) (see again [3, Section 4]). We now introduce the following crucial definition. Figure 1. Any two smooth conics are connected by a curve of bidegree \((1,0)\) and by a curve of bidegree \((0,1)\). **Definition 2.7**.: For any \(n\geq 1\) let \(\mathcal{C}^{*}(n)\) be the set of all \(A\in\mathcal{C}(n)\) such that for any curve \(L\) of bidegree \((1,0)\), it holds \(\#(L\cap A)\leq 2\). Set \(\mathcal{T}^{*}(n):=\mathcal{T}(n)\cap\mathcal{C}^{*}(n)\). Clearly we have \(\mathcal{C}^{*}(n)=\mathcal{C}(n)\) and \(\mathcal{T}^{*}(n)=\mathcal{T}(n)\), for \(n=1,2\). For \(n\geq 3\) the set \(\mathcal{C}(n)\setminus\mathcal{C}^{*}(n)\) parametrizes unions of \(n\) disjoint smooth conics such that at least three of them are collinear, hence \(\mathcal{C}^{*}(n)\) is a Zariski-open dense subset of \(\mathcal{C}(n)\), and likewise \(\mathcal{T}^{*}(n)\) is dense in \(\mathcal{T}(n)\). Therefore, for any \(n\geq 1\), all the following inclusions are Zariski dense: \(\mathcal{T}^{*}(n)\subset\mathcal{T}(n)\subset\mathcal{C}(n)\), \(\mathcal{T}^{*}(n)\subset\mathcal{C}^{*}(n)\subset\mathcal{C}(n)\). 
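One can verify the characterization of \(\mathcal{T}\) directly from the definition of \(j\): if \((p,\ell)\in L_{q,m}\), then its image \((p^{\prime},\ell^{\prime})=(\overline{\ell},\overline{p})\) satisfies \(\overline{q}\,p^{\prime}=0\) and \(\overline{m}\,\ell^{\prime}=0\), so that \[j(L_{q,m})=L_{\overline{m},\,\overline{q}},\] and hence \(L_{q,m}\) is fixed by \(j\) if and only if \(m=\overline{q}\).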
### Surfaces of bidegree \((1,0)\) and \((0,1)\) We now turn our attention back to surfaces. We recall from [4, Section 3.2] and [3, Section 2] that \((1,0)\) and \((0,1)\) surfaces are Hirzebruch surfaces of the first type. In particular, a surface \(X\) of bidegree \((1,0)\) can be seen as the lift, via \(\pi_{1}\), of a line (and analogously for a bidegree \((0,1)\) surface \(Y\)). Using this description, it is easy to see that any of these surfaces represents the blow-up of \(\mathbb{P}^{2}\) at a point. Let \(F_{1}\) be a Hirzebruch surface of type \(1\); we now describe the relation between the generators of the Picard group of \(F_{1}\) and the family of curves in \(\mathbb{F}\) previously described. We recall that \(\operatorname{Pic}(F_{1})=\mathbb{Z}h\oplus\mathbb{Z}f\), where \[h^{2}=-1,\qquad f^{2}=0,\qquad hf=1.\] **Notation 2.8**.: For the following analysis and the rest of the paper \(X\) will denote a surface of bidegree \((1,0)\), while \(Y\) will denote one of bidegree \((0,1)\). Identifying a surface \(X\) with \(F_{1}\) we obtain that \(\mathcal{O}_{X}(1,0)\simeq\mathcal{O}_{F_{1}}(f)\), which in turn corresponds to the set of curves in \(\mathbb{F}\) of bidegree \((0,1)\) contained in \(X\). On the other hand we have that \(\mathcal{O}_{X}(0,1)\simeq\mathcal{O}_{F_{1}}(h+f)\), which corresponds to elements of \(\mathcal{C}\). Hence, for any \(a,b\in\mathbb{Z}\) and for any \(\alpha,\beta\in\mathbb{Z}\), we obtain the following two relations \[\mathcal{O}_{X}(a,b)\cong\mathcal{O}_{F_{1}}(bh+(a+b)f),\quad\text{ and } \quad\mathcal{O}_{F_{1}}(\alpha h+\beta f)\cong\mathcal{O}_{X}(\beta-\alpha, \alpha). \tag{4}\] For a surface \(Y\) of bidegree \((0,1)\) we can derive similar formulae: \[\mathcal{O}_{Y}(a,b)\cong\mathcal{O}_{F_{1}}(ah+(a+b)f),\quad\text{ and } \quad\mathcal{O}_{F_{1}}(\alpha h+\beta f)\cong\mathcal{O}_{Y}(\alpha,\beta- \alpha), \tag{5}\] for any \(a,b\in\mathbb{Z}\) and for any \(\alpha,\beta\in\mathbb{Z}\). **Remark 2.9**.: Let \(X\) be a surface of bidegree \((1,0)\). Then \(X\) does not contain any element of \(\mathcal{C}(2)\). In fact, any element of \(\mathcal{C}\) in \(X\) corresponds to an element of \(|\mathcal{O}_{F_{1}}(h+f)|\), and any two elements of type \(h+f\) meet. The same holds for surfaces \(Y\) of bidegree \((0,1)\). In particular, for each bidegree \((1,0)\) or \((0,1)\) surface there is exactly one twistor fiber contained in it. We now recall from [3, Lemma 2.5] that, for any \(a,b\geq 0\), using the exact sequence \[0\to\mathcal{O}_{\mathbb{F}}(a-1,b)\to\mathcal{O}_{\mathbb{F}}(a,b)\to \mathcal{O}_{X}(a,b)\to 0,\] and its analogue for \(Y\), we have that \[h^{0}(\mathcal{O}_{X}(a,b))=a(b+1)+\binom{b+2}{2},\quad h^{0}(\mathcal{O}_{Y}( a,b))=b(a+1)+\binom{a+2}{2},\] while \[h^{1}(\mathcal{O}_{X}(a,b))=h^{1}(\mathcal{O}_{Y}(a,b))=0.\] Moreover, if \(a>0\) and \(b>0\), the line bundles \(\mathcal{O}_{X}(a,b)\) and \(\mathcal{O}_{Y}(a,b)\) are very ample. ### Surfaces of bidegree \((0,d)\) and \((1,d)\) We now pass to the study of higher bidegree surfaces. We start with some considerations about bidegree \((0,d)\) surfaces. **Remark 2.10**.: As described in [4, Section 3.3], any integral surface \(S\) of bidegree \((0,d)\) is equal to \(\pi_{1}^{-1}(C)\) for some degree \(d\) integral curve \(C\). Therefore, for \(d\geq 2\), no integral \(S\in|\mathcal{O}_{\mathbb{F}}(0,d)|\) contains a smooth conic. Otherwise, thanks to Remark 2.4, \(\pi_{1}(S)\) would contain a line, but \(\pi_{1}(S)\) is an integral curve of degree \(d\geq 2\). 
For \(n\geq 2\) we now compute how many (non-integral) bidegree \((0,d)\) surfaces contain a fixed element of \(\mathcal{C}(n)\). First we set the following notation. **Notation 2.11**.: We will denote by \(\mathcal{I}_{U,V}\) the ideal sheaf of a scheme \(U\) contained in a projective variety \(V\); whenever \(V=\mathbb{F}\) we will omit it. So, in particular, if \(A\in\mathcal{C}(n)\) we will write \(\mathcal{I}_{A}:=\mathcal{I}_{A,\mathbb{F}}\). **Lemma 2.12**.: _Fix \(d\geq 0\), \(n\geq 1\), and \(A\in\mathcal{C}(n)\). We have_ \[h^{0}(\mathcal{I}_{A}(0,d))=\binom{d+2-n}{2}\] _and_ \[h^{1}(\mathcal{I}_{A}(0,d))=\begin{cases}\frac{n(n-1)}{2}&\text{ if }n\leq d+1\\ n(d+1)-\frac{(d+2)(d+1)}{2}&\text{ if }n\geq d+1.\end{cases}\] Proof.: Recall that \(\mathcal{O}_{\mathbb{F}}(0,d)=\pi_{1}^{*}(\mathcal{O}_{\mathbb{P}^{2}}(d))\), and \(\mathcal{I}_{A}(0,d)=\pi_{1}^{*}(\mathcal{I}_{T,\mathbb{P}^{2}}(d))\) where \(T=\pi_{1}(A)\) is a union of \(n\) distinct lines in \(\mathbb{P}^{2}\). In general, we have that: \[h^{0}(\mathcal{O}_{\mathbb{F}}(0,d))=\binom{d+2}{2},\quad h^{0}(\mathcal{I}_{ A}(0,d))=\binom{d+2-n}{2},\quad h^{0}(\mathcal{O}_{A}(0,d))=n(d+1),\] hence, using the exact sequence \[0\to\mathcal{I}_{A}(0,d)\to\mathcal{O}_{\mathbb{F}}(0,d)\to\mathcal{O}_{A}(0,d)\to 0,\] since \(h^{1}(\mathcal{O}_{\mathbb{F}}(0,d))=0\) and \(\binom{d+2-n}{2}=0\) if \(n\geq d+1\), we get the result. **Remark 2.13**.: As a direct consequence of the previous lemma, we can state that, for any \(C\in\mathcal{C}\), there is only one surface \(Y\) in \(|\mathcal{I}_{C}(0,1)|\) and, analogously, only one surface \(X\) in \(|\mathcal{I}_{C}(1,0)|\). **Remark 2.14**.: If \(A\in\mathcal{C}(n)\), we consider the following exact sequence: \[0\to\mathcal{I}_{A}(a,b)\to\mathcal{O}_{\mathbb{F}}(a,b)\to\mathcal{O}_{A}(a,b )\to 0.\] Since the conics in \(A\) are all disjoint, we have that for any \((a,b)\in\mathbb{N}^{2}\) \[h^{0}(\mathcal{O}_{A}(a,b))=n(a+b+1), \tag{6}\] (see, e.g. [3, Sequence (7) and proof of Theorem 4.4]). Recall from Formula (2) that \(h^{0}(\mathcal{O}_{\mathbb{F}}(a,b))=\frac{(a+1)(b+1)(a+b+2)}{2}\). Therefore, as \(h^{1}(\mathcal{O}_{A}(a,b))=h^{1}(\mathcal{O}_{\mathbb{F}}(a,b))=0\), for any \(A\in\mathcal{C}(n)\) and \(d\geq 0\), we have \[\chi(\mathcal{I}_{A}(1,d))=h^{0}(\mathcal{I}_{A}(1,d))-h^{1}(\mathcal{I}_{A}(1,d))=(d+1)(d+3)-n(d+2). \tag{7}\] Hence, if \(n=d+2\), we have \(h^{0}(\mathcal{O}_{A}(1,d))=(d+2)^{2}\) and so \[h^{0}(\mathcal{O}_{\mathbb{F}}(1,d))=(d+1)(d+3)=(d+2)^{2}-1=h^{0}(\mathcal{O} _{A}(1,d))-1 \tag{8}\] and, if \(n=d+1\): \[h^{0}(\mathcal{O}_{\mathbb{F}}(1,d))=h^{0}(\mathcal{O}_{A}(1,d))+d+1.\] In the following we will implicitly make use of the following simple observation. **Remark 2.15**.: Since the elements of \(\mathcal{T}(1)\) are the fibers of a map (the twistor map), each \(p\in\mathbb{F}\) is contained in a unique element, \(C\), of \(\mathcal{T}\). Since \(j(C)=C\), \(j(p)\in C\). Analogously, note that \(p\) is contained in a unique curve of bidegree \((1,0)\) and a unique curve of bidegree \((0,1)\), namely \(\pi_{2}^{-1}(\pi_{2}(p))\) and \(\pi_{1}^{-1}(\pi_{1}(p))\). The following remark will be used several times in the following pages. **Remark 2.16**.: Fix a positive integer \(d\) and an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\). Since \(\pi_{1|S}\) is birational onto its image, \(S\) is rational. 
By the Bezout Theorem, for any \(p\in\mathbb{P}^{2}\), the bidegree \((1,0)\) curve \(\pi_{2}^{-1}(p)\) either is contained in \(S\), or intersects \(S\) in a single point (scheme-theoretically). We close this subsection with a technical result that will be used in the following pages. In [3, Proposition 4.1] we proved that there exists at most one surface of bidegree \((a,b)\) containing a number of smooth conics greater than or equal to \(a^{2}+ab+b^{2}\). We now extend this result to a more general setting. **Proposition 2.17**.: _Fix \((a,b,c,d)\in\mathbb{N}^{4}\) such that \((a,b)\neq(0,0)\), \(c>0\), \(d>0\). Take \(A\in\mathcal{C}(n)\). Assume the existence of an integral \(S\in|\mathcal{O}_{\mathbb{F}}(a,b)|\) containing \(A\) and assume one of the following conditions:_ 1. \(ad+b(c+d)<n\)_;_ 2. \(a(c+d)+bc<n\)_;_ 3. \(ad+b(c+d)=a(c+d)+bc=n\)_._ _Then each element of \(|\mathcal{I}_{A}(c,d)|\) contains \(S\) and in particular \(c\geq a\) and \(d\geq b\)._ Proof.: Assume by contradiction the existence of \(S^{\prime}\in|\mathcal{I}_{A}(c,d)|\) such that \(S\nsubseteq S^{\prime}\). We have \[\mathcal{O}_{\mathbb{F}}(a,b)\cdot\mathcal{O}_{\mathbb{F}}(c,d)=ac\,\mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(1,0)+(ad+bc)\,\mathcal{O}_{\mathbb{F}}(1,0)\cdot\mathcal{O}_{\mathbb{F}}(0,1)+bd\,\mathcal{O}_{\mathbb{F}}(0,1)\cdot\mathcal{O}_{\mathbb{F}}(0,1).\] Since \(S\nsubseteq S^{\prime}\) and \(S\) is integral, the intersection \(S\cap S^{\prime}\) is a curve of bidegree \((ad+b(c+d),a(c+d)+bc)\) (see Lemma 2.2). Since \(c>0\) and \(d>0\), the line bundle \(\mathcal{O}_{\mathbb{F}}(c,d)\) is ample. Moreover, since \(S\) is irreducible, \(A\) has bidegree \((n,n)\) and \(A\) is not connected, the intersection must contain some further component in addition to \(A\). Hence, either \(ad+bc+bd>n\) and \(ac+ad+bc\geq n\), or \(ad+bc+bd\geq n\) and \(ac+ad+bc>n\). In both cases this contradicts whichever of the conditions (1), (2), (3) was assumed to hold, so every element of \(|\mathcal{I}_{A}(c,d)|\) contains \(S\); in particular, if \(|\mathcal{I}_{A}(c,d)|\neq\emptyset\), the class \(\mathcal{O}_{\mathbb{F}}(c-a,d-b)\) is effective and hence \(c\geq a\) and \(d\geq b\). ### Non-collinear smooth conics We now want to characterize the conics in \(\mathcal{C}^{*}(n)\) in terms of cohomology. We start showing that the vanishing of certain cohomology groups implies that an element \(A\in\mathcal{C}(n)\) lies in \(\mathcal{C}^{*}(n)\). **Lemma 2.18**.: _Fix \(d\geq 0\), \(3\leq n\leq d+1\) and \(A\in\mathcal{C}(n)\). If \(h^{1}(\mathcal{I}_{A}(1,d))=0\) then \(A\in\mathcal{C}^{*}(n)\)._ Proof.: Assume, by contradiction, that there exists a curve \(L\) of bidegree \((1,0)\) such that \(\#(L\cap A)\geq 3\) and consider the exact sequence defining \(\mathcal{I}_{A}\): \[0\rightarrow\mathcal{I}_{A}(1,d)\rightarrow\mathcal{O}_{\mathbb{F}}(1,d) \rightarrow\mathcal{O}_{A}(1,d)\to 0. \tag{9}\] We will prove that the restriction map \(H^{0}(\mathcal{O}_{\mathbb{F}}(1,d))\to H^{0}(\mathcal{O}_{A}(1,d))\) is not surjective, which implies that \(h^{1}(\mathcal{I}_{A}(1,d))>0\). In fact, assume that it is surjective and consider the commutative diagram formed by the restriction maps from \(H^{0}(\mathcal{O}_{\mathbb{F}}(1,d))\) and \(H^{0}(\mathcal{O}_{A}(1,d))\) to \(H^{0}(\mathcal{O}_{A\cap L}(1,d))\), and from \(H^{0}(\mathcal{O}_{\mathbb{F}}(1,d))\) to \(H^{0}(\mathcal{O}_{L}(1,d))\). As the vertical maps are surjective, the map \(H^{0}(\mathcal{O}_{\mathbb{F}}(1,d))\to H^{0}(\mathcal{O}_{A\cap L}(1,d))\), induced by the composition, is surjective too and hence of rank at least \(3\) (since \(L\cap A\) has cardinality at least \(3\) and the irreducible components of \(A\) are pairwise disjoint). On the other hand, this map factors through the restriction map \(H^{0}(\mathcal{O}_{\mathbb{F}}(1,d))\to H^{0}(\mathcal{O}_{L}(1,d))\), which has rank \(2\), and this gives a contradiction. Before completing the characterization of the conics in \(\mathcal{C}^{*}(d+1)\), we describe a general construction that will be used several times in what follows. 
Let \(A\in\mathcal{C}(n)\) and let \(C\) be any connected component of \(A\). Set \(B:=A\setminus C\). Then, for any \(a,b\geq 0\), if \(Y\in|\mathcal{I}_{C}(0,1)|\), we have the following residual exact sequence \[0\to\mathcal{I}_{\operatorname{Res}_{Y}(A)}(a,b-1)\to\mathcal{I}_{A}(a,b)\to\mathcal{I}_{A \cap Y,Y}(a,b)\to 0,\] but as \(\operatorname{Res}_{Y}(A)=B\) and \(A\cap Y=(B\cap Y)\cup C\), we have \[0\to\mathcal{I}_{B}(a,b-1)\to\mathcal{I}_{A}(a,b)\to\mathcal{I}_{(B\cap Y) \cup C,Y}(a,b)\to 0. \tag{10}\] Clearly an analogous sequence can be written for \(X\in|\mathcal{I}_{C}(1,0)|\). **Theorem 2.19**.: _Fix \(d\geq 0\) and \(A\in\mathcal{C}(d+1)\). We have \(A\in\mathcal{C}^{*}(d+1)\) if and only if \(h^{1}(\mathcal{I}_{A}(1,d))=0\)._ Proof.: By the previous lemma we only need to prove that any \(A\in\mathcal{C}^{*}(d+1)\) satisfies \(h^{1}(\mathcal{I}_{A}(1,d))=0\). We use induction on \(d\geq 0\). The case \(d=0\) is true by Lemma 2.12, so we may assume \(d>0\). Let \(C\) be a connected component of \(A\), set \(B:=A\setminus C\) and call \(Y\) the only element of \(|\mathcal{I}_{C}(0,1)|\). Consider the residual exact sequence (10), with \(a=1\) and \(b=d\). Since \(A\in\mathcal{C}(d+1)\), \(C\cap B=\emptyset\) and \(B\cap Y\) is formed by \(d\) different points, up to the identification of \(Y\) with \(F_{1}\) we have \[\mathcal{I}_{(B\cap Y)\cup C,Y}(1,d)\cong\mathcal{I}_{(B\cap Y)\cup C,F_{1}}(h+(d+1)f)\cong\mathcal{I}_{B\cap Y,F_{1}}(df)\,.\] Using (10) and induction we are left to prove that \(h^{1}(\mathcal{I}_{B\cap Y,F_{1}}(df))=0\) if and only if \(A\in\mathcal{C}^{*}(d+1)\). Consider now the following exact sequence \[0\to\mathcal{I}_{B\cap Y,F_{1}}(df)\to\mathcal{O}_{F_{1}}(df)\to\mathcal{O}_{B \cap Y}(df)\to 0. \tag{11}\] Since \(h^{0}(\mathcal{O}_{F_{1}}(df))=d+1\) and \(h^{0}(\mathcal{O}_{B\cap Y}(df))=d\), we have \(h^{1}(\mathcal{I}_{B\cap Y,F_{1}}(df))>0\) if and only if \(h^{0}(\mathcal{I}_{B\cap Y,F_{1}}(df))\geq 2\). The last inequality means that there exist at least two different unions of \(d\) fibers containing the set of \(d\) points \(B\cap Y\). This is equivalent to the fact that there exists a fiber \(L\in|f|\) such that \(\#(B\cap L)\geq 2\). Since \(L\) is a curve of bidegree \((1,0)\) in \(\mathbb{F}\), we have \(L\cdot Y=0\) in the intersection ring of \(\mathbb{F}\); therefore \(L\subset Y\) and hence \(L\cap C\neq\emptyset\). Thus \(\#(L\cap A)\geq 3\), which means that \(A\not\in\mathcal{C}^{*}(d+1)\). **Corollary 2.20**.: _Fix \(d\geq 0\), \(0\leq n\leq d+1\) and \(A\in\mathcal{C}^{*}(n)\). Then \(h^{1}(\mathcal{I}_{A}(1,d))=0\). In particular, for \(n=0,1,2\) and for any \(A\in\mathcal{C}(n)\), we have_ \[h^{1}(\mathcal{I}_{A}(1,1))=0\quad\text{ and }\quad h^{0}(\mathcal{I}_{A}(1,1))= 8-3n.\] Proof.: By using [3, Remark 4.3], we easily get the first part of the statement. The second one follows from Formula (7). We point out that, since \(\mathcal{T}(n)\) is Zariski dense in \(\mathcal{C}(n)\), the characterization given by Theorem 2.19 also holds for the set \(\mathcal{T}^{*}(n)\). **Lemma 2.21**.: _Fix an integer \(d\geq 0\) and \(A\in\mathcal{C}^{*}(d+2)\). Then \(h^{1}(\mathcal{I}_{A}(1,d))\leq 1\)._ Proof.: The lemma is true for \(d=0\), because \(h^{0}(\mathcal{I}_{A}(1,0))=0\) (see Remark 2.9), so that \(h^{1}(\mathcal{I}_{A}(1,0))=1\) by Formula (7). We assume \(d>0\) and use induction on \(d\). Let \(C\) be a connected component of \(A\), set \(B:=A\setminus C\) and call \(Y\) the only element of \(|\mathcal{I}_{C}(0,1)|\). 
Consider the residual exact sequence (10), with \(a=1\) and \(b=d\). Since \(A\in\mathcal{C}(d+2)\), \(C\cap B=\emptyset\) and \(B\cap Y\) is formed by \(d+1\) different points, up to the identification of \(Y\) with \(F_{1}\) we have \[\mathcal{I}_{(B\cap Y)\cup C,Y}(1,d)\cong\mathcal{I}_{(B\cap Y)\cup C,F_{1}}(h+(d+1)f)\cong\mathcal{I}_{B\cap Y,F_{1}}(df)\,.\] Using (10) and induction we are left to prove that \(h^{1}(\mathcal{I}_{B\cap Y,F_{1}}(df))=0\) if \(A\in\mathcal{C}^{*}(d+2)\). Consider now the following exact sequence \[0\to\mathcal{I}_{B\cap Y,F_{1}}(df)\to\mathcal{O}_{F_{1}}(df)\to\mathcal{O}_{B \cap Y}(df)\to 0\,. \tag{12}\] Since \(h^{0}(\mathcal{O}_{F_{1}}(df))=d+1\) and \(h^{0}(\mathcal{O}_{B\cap Y}(df))=d+1\), we have \(h^{1}(\mathcal{I}_{B\cap Y,F_{1}}(df))>0\) if and only if \(h^{0}(\mathcal{I}_{B\cap Y,F_{1}}(df))>0\). This is equivalent to the fact that there exists a fiber \(L\in|f|\) such that \(\#(B\cap L)\geq 2\). Since \(L\) is a curve of bidegree \((1,0)\) in \(\mathbb{F}\), we have \(L\cdot Y=0\) in the intersection ring of \(\mathbb{F}\); therefore \(L\subset Y\) and hence \(L\cap C\neq\emptyset\). Thus \(\#(L\cap A)\geq 3\), which means that \(A\not\in\mathcal{C}^{*}(d+2)\). As said in Remark 2.5, for any element \(A\in\mathcal{C}(2)\) there is a unique curve \(L\) of bidegree \((1,0)\) and a unique curve \(R\) of bidegree \((0,1)\) such that both intersect each component of \(A\) at a point. As described in the following result, it turns out that \(A\cup L\cup R\) is the base locus of \(|\mathcal{I}_{A}(1,1)|\). **Proposition 2.22**.: _For any \(A\in\mathcal{C}(2)\), we have that_ 1. _the general element in_ \(|\mathcal{I}_{A}(1,1)|\) _is integral;_ 2. _the base locus_ \(\mathcal{B}\) _of_ \(|\mathcal{I}_{A}(1,1)|\) _is_ \(A\cup L\cup R\)_, where_ \(L\) _and_ \(R\) _are the curves described in Remark_ 2.5_._ Proof.: Since \(A\in\mathcal{C}(2)\), by Corollary 2.20 we have \(h^{1}(\mathcal{I}_{A}(1,1))=0\) and \(h^{0}(\mathcal{I}_{A}(1,1))=2\). Call \(C_{1}\) and \(C_{2}\) the connected components of \(A\). Denote by \(X_{i}\) the only element of \(|\mathcal{O}_{\mathbb{F}}(1,0)|\) containing \(C_{i}\) and by \(Y_{i}\) the only element of \(|\mathcal{O}_{\mathbb{F}}(0,1)|\) containing \(C_{i}\). The surfaces \(X_{1}\cup Y_{2}\) and \(X_{2}\cup Y_{1}\) are the only reducible elements of \(|\mathcal{I}_{A}(1,1)|\), and hence the general element of \(|\mathcal{I}_{A}(1,1)|\) is irreducible, which proves _(1)_. To prove _(2)_ we analyze the base locus \(\mathcal{B}\) of \(|\mathcal{I}_{A}(1,1)|\). If \(S,S^{\prime}\in|\mathcal{I}_{A}(1,1)|\) are irreducible and \(S\neq S^{\prime}\), then the one-dimensional cycle \(S\cap S^{\prime}\) has bidegree \((3,3)\) and it contains \(A\), which has bidegree \((2,2)\). Take \(R:=X_{1}\cap X_{2}\) and \(L:=Y_{1}\cap Y_{2}\), where \(X_{i}\) and \(Y_{i}\) are the surfaces defined in the first part of this proof. The curves \(L\) and \(R\) are exactly the ones stated in Remark 2.5. In particular, \(\#(L\cap A)=\#(R\cap A)=2\) and hence, by Bezout and Remark 2.16, \(L\cup R\subset\mathcal{B}\). Moreover, recall that the reducible surfaces \(X_{1}\cup Y_{2}\) and \(X_{2}\cup Y_{1}\) belong to \(|\mathcal{I}_{A}(1,1)|\) and their intersection \((X_{1}\cup Y_{2})\cap(X_{2}\cup Y_{1})\) is \(A\cup L\cup R\). Hence the base locus of \(|\mathcal{I}_{A}(1,1)|\) is exactly \(\mathcal{B}=A\cup L\cup R\). 
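As a quick check of the bidegree count used in the proof above, Lemma 2.2 with \(a=b=c=d=1\) gives for the cycle \(S\cap S^{\prime}\) the bidegree \[\big(ad+b(c+d),\,a(c+d)+bc\big)=(1+2,\,2+1)=(3,3);\] removing the bidegree \((2,2)\) curve \(A\), the residual one-dimensional cycle has bidegree \((1,1)\), which is exactly the bidegree of \(L\cup R\), consistently with the inclusion \(A\cup L\cup R\subseteq S\cap S^{\prime}\) obtained in the proof.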
**Remark 2.23**.: By generalizing the proof of Proposition 2.22 we can say something about the base locus of \(|\mathcal{I}_{A}(1,d)|\), for \(A\in\mathcal{C}(n)\). Fix integers \(d>0\) and \(n\geq 2\) and take any \(A\in\mathcal{C}(n)\). Then, the base locus \(\mathcal{B}\) of \(|\mathcal{I}_{A}(1,d)|\) contains all curves \(L\) of bidegree \((1,0)\) such that \(\#(L\cap A)\geq 2\). If \(A\in\mathcal{C}^{*}(n)\), then there are exactly \(\binom{n}{2}\) such curves \(L\) (the number of lines joining two points in a set of \(n\) general points). If \(A\in\mathcal{T}(n)\), then \(\#(j(L)\cap A)\geq 2\) for all \(L\) such that \(\#(L\cap A)\geq 2\), since \(j(A)=A\). ## 3. Surfaces of bidegree \((1,d)\) In this section we prove Theorems 1.1 and 1.2. In particular, we give several results for the case of a surface of bidegree \((1,d)\). Later, in the following section, we will specialize to the cases \(d=2,3\). The case \(d=1\) was studied in great detail in [4]; here we add a simple lemma useful for what follows. **Lemma 3.1**.: _For any \(A\in\mathcal{T}^{*}(3)\), there is no integral surface of bidegree \((1,1)\) containing \(A\)._ Proof.: Assume the existence of an integral surface \(M\) of bidegree \((1,1)\) containing \(A\in\mathcal{T}^{*}(3)\). First of all, thanks to [4, Corollary 8.4], we have that \(M=j(M)\). Then, as explained at the beginning of Section 7.1 in [4], either \(M\) is smooth or it is reducible. But if \(M\) is a smooth \(j\)-invariant surface of bidegree \((1,1)\) containing \(3\) twistor fibers, then it contains infinitely many of them and these are parametrized by a circle (see [4, Theorem 7.2]). However, smooth surfaces of bidegree \((1,1)\) can be seen as the blow-up of \(\mathbb{P}^{2}\) at three points both via \(\pi_{1}\) and via \(\pi_{2}\). In particular, up to unitary transformation it is possible to write \(M\) as the set \(\{([p_{0}:p_{1}:p_{2}],[\ell_{0}:\ell_{1}:\ell_{2}])\in\mathbb{F}\,|\,p_{1}\ell_{1}+\lambda p_{2}\ell_{2}=0\}\), with \(\lambda\in\mathbb{R}\setminus\{0,1\}\). In these coordinates, \(M\) contains \(\pi_{\mu}^{-1}([1:0:0]),\pi_{\mu}^{-1}([0:1:0]),\pi_{\mu}^{-1}([0:0:1])\), for \(\mu=1,2\), and the family of twistor fibers \(\pi^{-1}([q_{0}:q_{1}:q_{2}])\) defined by: \[\begin{cases}q_{0}=0\text{ and }|q_{1}|^{2}\lambda+|q_{2}|^{2}=0&\text{ if } \lambda<0\,,\\ q_{1}=0\text{ and }|q_{2}|^{2}-|q_{0}|^{2}(\lambda-1)=0&\text{ if }\lambda>1 \,,\\ q_{2}=0\text{ and }|q_{1}|^{2}\lambda+|q_{0}|^{2}(\lambda-1)=0&\text{ if }0< \lambda<1\,.\end{cases}\] Take for instance \(\lambda<0\); then any twistor fiber in \(M\) intersects the line \(L=\pi_{2}^{-1}([1:0:0])\) of bidegree \((1,0)\). An analogous consideration holds if \(0<\lambda<1\) or \(\lambda>1\). In particular, the three twistor fibers of \(A\) would be collinear, contradicting \(A\in\mathcal{T}^{*}(3)\). In the previous lemma we showed that an integral \((1,1)\) surface cannot contain three non-collinear twistor fibers. On the other hand, if \(M\) is a \((1,1)\) surface containing a given \(A\in\mathcal{T}(3)\setminus\mathcal{T}^{*}(3)\), then by [4, Corollary 8.3] we have that \(M\) is \(j\)-invariant, hence it is either smooth or reducible. Moreover, if \(M\) contains infinitely many twistor fibers, then all of them intersect a bidegree \((1,0)\) curve \(L\) and its associated \((0,1)\) curve \(R=j(L)\). **Remark 3.2**.: In [4, Section 8.1] we gave examples of bidegree \((1,1)\) smooth surfaces containing exactly \(0\), \(1\) or \(2\) twistor fibers. 
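One can verify directly the description of the twistor fibers contained in \(M\) given in the proof of Lemma 3.1. For instance, in the case \(\lambda<0\), take \(q=[0:q_{1}:q_{2}]\) with \(|q_{1}|^{2}\lambda+|q_{2}|^{2}=0\) (which forces \(q_{1}\neq 0\) and \(q_{2}\neq 0\)). Every point \((p,\ell)\) of the \(j\)-invariant conic \(L_{q,\overline{q}}\) satisfies \(q_{1}\ell_{1}+q_{2}\ell_{2}=0\) and \(\overline{q}_{1}p_{1}+\overline{q}_{2}p_{2}=0\), hence \[p_{1}\ell_{1}+\lambda p_{2}\ell_{2}=\left(\frac{|q_{2}|^{2}}{|q_{1}|^{2}}+\lambda\right)p_{2}\ell_{2}=\frac{|q_{1}|^{2}\lambda+|q_{2}|^{2}}{|q_{1}|^{2}}\,p_{2}\ell_{2}=0,\] so that the whole twistor fiber is contained in \(M\); moreover, since \(q_{0}=0\), such a fiber meets the bidegree \((1,0)\) curve \(\pi_{2}^{-1}([1:0:0])\), as used at the end of that proof.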
**Remark 3.3**.: As a smooth surface of bidegree \((1,1)\) is a Del Pezzo surface of degree \(6\), it is characterized either by the three bidegree \((1,0)\) curves that it contains or by the three bidegree \((0,1)\) curves that it contains. In fact, recall that these surfaces represent the blow-up of \(\mathbb{P}^{2}\) at three points with respect to either \(\pi_{1}\) or \(\pi_{2}\). We notice that if a smooth surface of bidegree \((1,1)\) is \(j\)-invariant, then it is uniquely determined by three twistor fibers contained in it, but not by the curves \(L\) and \(R=j(L)\) (of bidegree \((1,0)\) and \((0,1)\), respectively) which intersect all the twistor fibers. In fact, in [4] we proved that these kinds of surfaces are uniquely determined by the "circle" defined by the infinite family of twistor fibers contained in them, and such a "circle" is properly contained in the \((1,0)\) line \(L\) which intersects all fibers. Thanks to these first considerations about bidegree \((1,1)\) surfaces, we are ready to give the proof of our first main theorem. Proof of Theorem 1.1.: Thanks to Remark 2.9 and Lemma 3.1, the result is true for \(d=0\) and \(d=1\). Assume now that \(d\geq 2\) and, by contradiction, that \(S\) is an integral \((1,d)\) surface containing \(A\in\mathcal{T}^{*}(d+2)\). Call \(C\) a connected component of \(A\) and set \(B:=A\setminus C\). Call \(Y\) the only (by Remark 2.13) element of \(|\mathcal{I}_{C}(0,1)|\) and consider the following exact sequence (which is a particular case of the one in Formula (10)): \[0\to\mathcal{I}_{B}(1,d-1)\to\mathcal{I}_{A}(1,d)\to\mathcal{I}_{(B\cap Y) \cup C,Y}(1,d)\to 0. \tag{13}\] From Formulae (6) and (8), we have \(h^{0}(\mathcal{O}_{A}(1,d))=(d+2)^{2}=h^{0}(\mathcal{O}_{\mathbb{F}}(1,d))+1\). Clearly, we have that \(B\cap Y\) is formed by \(d+1\) points, and, up to the identification of \(Y\) with \(F_{1}\) given in Formula (5), as the curve \(C\) corresponds to an element of type \(h+f\) in \(F_{1}\), we can write \(\mathcal{I}_{(B\cap Y)\cup C,Y}(1,d)\cong\mathcal{I}_{(B\cap Y)\cup C,F_{1}}(h+(d+1)f)\cong\mathcal{I}_{B\cap Y,F_{1}}(df)\). Since \(A\in\mathcal{T}^{*}(d+2)\) and every element of \(|f|\) meets \(C\) (indeed \((h+f)f=1\)), the restriction to \(B\cap Y\) of the ruling morphism \(Y\to\mathbb{P}^{1}\) associated to \(|f|\) is injective. Thus, \(h^{1}(Y,\mathcal{I}_{A\cap Y,Y}(1,d))=0\) and the exact sequence (13) gives \(h^{1}(\mathcal{I}_{B}(1,d-1))\geq h^{1}(\mathcal{I}_{A}(1,d))\geq 2\), where the last inequality holds because \(\chi(\mathcal{I}_{A}(1,d))=-1\) (see Formula (7)) and we are assuming that \(h^{0}(\mathcal{I}_{A}(1,d))\geq 1\). Thus we also have \(h^{0}(\mathcal{I}_{B}(1,d-1))>0\). Recall that \(B\in\mathcal{T}^{*}(d+1)\) and hence, by the inductive assumption, \(B\) is not contained in any integral \(E\in|\mathcal{O}_{\mathbb{F}}(1,d-1)|\). Therefore, thanks to Remarks 2.9 and 2.10, there must be an integral \(M\in|\mathcal{O}_{\mathbb{F}}(1,1)|\) containing at least \(3\) connected components of \(B\), say \(B^{\prime}\subset M\) with \(B^{\prime}\) of bidegree \((3,3)\). Hence, by Lemma 3.1, there is a curve \(L\) of bidegree \((1,0)\) such that \(\#(L\cap B^{\prime})=3\). Thus \(A\notin\mathcal{T}^{*}(d+2)\), a contradiction. Having proved that an integral bidegree \((1,d)\) surface cannot contain \(d+2\) (or more) non-collinear twistor fibers, we now prove that all the other cases can actually arise. 
In particular, we prove in the following result a stronger version of Theorem 1.2 in the case \(n\leq d+1\). **Theorem 3.4**.: _Fix integers \(d\geq 1\) and \(0\leq n\leq d+1\). Then, for any \(A\in\mathcal{T}^{*}(n)\) there is an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\) containing \(A\). Moreover, the general \(S\in|\mathcal{I}_{A}(1,d)|\) contains no other twistor fibers._ Proof.: We use induction on the integer \(d\). If \(d=1\) the statement is true by Remark 3.2. Assume now \(d\geq 2\) and take an element \(A\in\mathcal{T}^{*}(n)\). Since \(\mathcal{T}^{*}(n)\) is Zariski dense in \(\mathcal{C}(n)\), \(A\) has the bigraded Hilbert function of a general element of \(\mathcal{C}(n)\). Thus, thanks to Corollary 2.20, we have that \(h^{1}(\mathcal{I}_{A}(1,d))=0\) and, by Formula (7), \[h^{0}(\mathcal{I}_{A}(1,d))=(d+1)(d+3)-n(d+2)=:N_{n}+1.\] Fix a connected component \(C\) of \(A\) and set \(B:=A\setminus C\). If \(Y\) denotes the only \((0,1)\) surface containing \(C\), Corollary 2.20 entails that \(h^{1}(\mathcal{I}_{B}(1,d-1))=0\) and, again by Formula (7), \[h^{0}(\mathcal{I}_{B}(1,d-1))=d(d+2)-(n-1)(d+1).\] By the inductive assumption, we know that \(|\mathcal{I}_{B}(1,d-1)|\neq\emptyset\) and a general \(W\in|\mathcal{I}_{B}(1,d-1)|\) is irreducible. Thus \(Y\cup W\in|\mathcal{I}_{A}(1,d)|\) and \(Y\cup W\) has \(2\) irreducible components, one of them having bidegree \((1,d-1)\). Let us denote by \(C_{1},\ldots,C_{n}\) the connected components of \(A\), by \(B_{i}:=A\setminus C_{i}\) and by \(Y_{i}\) the unique element in \(|\mathcal{I}_{C_{i}}(0,1)|\). The set of all the reducible surfaces \(W\cup Y_{i}\in|\mathcal{I}_{A}(1,d)|\), where \(W\in|\mathcal{I}_{B_{i}}(1,d-1)|\), is the union of \(n\) projective spaces (one for each choice of \(C_{i}\)), each of them of codimension \(h^{0}(\mathcal{I}_{A}(1,d))-h^{0}(\mathcal{I}_{B_{i}}(1,d-1))=d+2-n>0\) in \(|\mathcal{I}_{A}(1,d)|=\mathbb{P}^{N_{n}}\) (in particular of codimension \(1\) if \(n=d+1\)). Therefore, they do not cover all of \(|\mathcal{I}_{A}(1,d)|\). We now want to exclude other possible splittings. In particular, we consider reducible surfaces of the form \(W_{1}\cup D_{1}\) with \(W_{1}\) integral, \(D_{1}\) possibly reducible of bidegree \((0,x)\) for some \(x\geq 2\) and hence \(W_{1}\) of bidegree \((1,d-x)\). Remark 2.10 shows that only irreducible components of \(D_{1}\) of bidegree \((0,1)\) may contain some component of \(A\). We obtain that the surface is of the form \(W_{1}\cup D_{2}\cup D_{3}\) with \(W_{1}\cup D_{2}\) of bidegree \((1,d-1)\), but, as shown before, these kinds of surfaces do not cover all of \(|\mathcal{I}_{A}(1,d)|\); hence the first part of the statement follows. Now we prove that a general \(S\in|\mathcal{I}_{A}(1,d)|\) contains no other twistor fibers. We start by analyzing the case \(d=2\) and discussing the cases \(n=1,2,3\) separately. Assume \(n=1\). Fix \(C\in\mathcal{T}(1)\). Let \(\mathcal{C}_{C}(1)\) denote the set of all \(B\in\mathcal{C}(1)\) such that \(B\cap C=\emptyset\). Note that \(\mathcal{T}(1)\setminus\{C\}=\mathcal{C}_{C}(1)\cap\mathcal{T}(1)\). For any \(B\in\mathcal{C}_{C}(1)\) we have \(h^{1}(\mathcal{I}_{C\cup B}(1,2))=0\) and hence \(h^{0}(\mathcal{I}_{C\cup B}(1,2))=h^{0}(\mathcal{I}_{C}(1,2))-4\). Let \(\mathcal{X}_{C}\) be the set of all smooth and integral surfaces of bidegree \((1,2)\) containing \(C\). It is a non-empty Zariski open subset of \(|\mathcal{I}_{C}(1,2)|\). 
For any \(B\in\mathcal{C}_{C}(1)\) the set of all \(S\in\mathcal{X}_{C}\) containing \(B\) has complex codimension \(4\), and hence real codimension \(8\), as a real manifold. Since \(\mathcal{T}(1)\) has real dimension \(4\), a general \(S\in\mathcal{X}_{C}\) contains no other twistor fiber. Let now \(n=2\). Fix \(A\in\mathcal{T}(2)\) and let \(\mathcal{C}_{A}(1)\) denote the set of all \(B\in\mathcal{C}(1)\) such that \(B\cap A=\emptyset\) and \(B\cup A\in\mathcal{C}^{*}(3)\). Note that \(h^{1}(\mathcal{I}_{A\cup B}(1,2))=0\), so, as in the previous step, we get that a sufficiently general \(S\in|\mathcal{I}_{A}(1,2)|\) contains no twistor fiber belonging to \(\mathcal{C}_{A}(1)\). Assume that \(S\) contains \(B\in\mathcal{T}(1)\) such that there is a curve \(L\) of bidegree \((1,0)\) intersecting each connected component of \(A\cup B\). Note that \(L\) is uniquely determined by \(A\). Let \(\mathcal{C}(A,L)\) denote the set of all \(B\in\mathcal{C}(1)\) such that \(B\cap A=\emptyset\) and \(L\) meets \(B\), and set \(\mathcal{T}(A,L):=\mathcal{C}(A,L)\cap\mathcal{T}(1)\). For any \(o\in L\setminus(L\cap A)\) the set of all \(B\in\mathcal{C}(A,L)\) containing \(o\) is a non-empty family of complex dimension \(1\), while there is a unique twistor fiber containing \(o\). The set \(\mathcal{C}(A,L)\) is a complex manifold of dimension \(2\), while \(h^{1}(\mathcal{I}_{A\cup B}(1,2))=1\) and hence \(h^{0}(\mathcal{I}_{A\cup B}(1,2))=h^{0}(\mathcal{I}_{A}(1,2))-3\). Since \(\dim\mathcal{C}(A,L)=2\), a general \(S\in|\mathcal{I}_{A}(1,2)|\) contains no element of \(\mathcal{C}(A,L)\). Hence, a general \(S\in|\mathcal{I}_{A}(1,2)|\) contains no twistor fiber other than the two components of \(A\). Assume now that \(n=3\). Start by considering a general \(A\in\mathcal{T}^{*}(3)\). Then we have \(h^{1}(\mathcal{I}_{A}(1,2))=0\) and \(h^{0}(\mathcal{I}_{A}(1,2))=3\). For any \(x\in\{0,1,2,3\}\) let \(\mathcal{C}(A,x)\) denote the set of all \(C\in\mathcal{C}(1)\) such that \(A\cap C=\emptyset\) and \(h^{0}(\mathcal{I}_{A\cup C}(1,2))=x\), and set \(\mathcal{T}(A,x):=\mathcal{C}(A,x)\cap\mathcal{T}(1)\). For a general \(D\in\mathcal{C}(4)\) we have \(h^{0}(\mathcal{I}_{D}(1,2))=0\) (but \(h^{1}(\mathcal{I}_{D}(1,2))=1\)). Fix \(C\in\mathcal{C}(1)\) such that \(C\cap A=\emptyset\) and call \(Y\) the only element of \(|\mathcal{I}_{C}(0,1)|\). Since \(C\cap A=\emptyset\), no connected component of \(A\) is contained in \(Y\) and \(Y\cap A\) is formed by \(3\) points, all of them in \(Y\setminus C\). The case \(x=0\) is easily dealt with, as no curve \(C\in\mathcal{C}(A,0)\) is contained in any element of \(|\mathcal{I}_{A}(1,2)|\). We have \(\mathcal{O}_{Y}(1,2)(-C)\cong\mathcal{O}_{F_{1}}(2f)\) and hence \(C\in\mathcal{C}(A,0)\) if no curve \(L\in|f|\) of bidegree \((1,0)\) intersects \(2\) of the components of \(A\) (since \(h^{1}(\mathcal{I}_{A}(1,1))=1\), the last statement is only an "if" and not an "if and only if"). A necessary condition for \(C\in\mathcal{C}(A,2)\) is that some curve \(L\in|f|\) intersects all connected components of \(A\), but this is excluded because \(A\in\mathcal{T}^{*}(3)\). Since \(A\in\mathcal{T}^{*}(3)\) there are exactly \(3\) curves \(L_{1},L_{2},L_{3}\) of bidegree \((1,0)\) intersecting \(2\) of the connected components of \(A\). We claim that a general \(S\in|\mathcal{I}_{A}(1,2)|\) contains no twistor fiber \(C\) such that \(C\cap A=\emptyset\). The family of smooth conics which intersect \(L_{i}\) has complex dimension \(2\), while the family of twistor fibers intersecting \(L_{i}\) has real dimension \(2\). 
As a general \(S\in|\mathcal{I}_{A}(1,2)|\) contains only finitely many conics, it contains only finitely many elements of \(\mathcal{C}(A,1)\) and, for a general \(S\), none of them is a twistor fiber. We now pass to the case \(d\geq 3\). Assume that a general surface of \(|\mathcal{I}_{A}(1,d)|\) contains a twistor fiber \(C\nsubseteq A\). Thus \(A\cap C=\emptyset\). Set \(A^{\prime}:=A\cup C\). Take \(Y\in|\mathcal{O}_{\mathbb{F}}(0,1)|\) containing \(C\) and consider the residual exact sequence \[0\to\mathcal{I}_{A}(1,d-1)\to\mathcal{I}_{A^{\prime}}(1,d)\to\mathcal{I}_{(Y \cap A)\cup C,Y}(1,d)\to 0. \tag{14}\] We have \(\mathcal{I}_{C,Y}(1,d)\cong\mathcal{O}_{F_{1}}(df)\). Assume first \(n\leq d\). If \(A^{\prime}\in\mathcal{T}^{*}(n+1)\), then \(h^{1}(\mathcal{I}_{A^{\prime}}(1,d))=0\) and hence \(h^{0}(\mathcal{I}_{A^{\prime}}(1,d))=h^{0}(\mathcal{I}_{A}(1,d))-d-2\). Since \(\dim\mathcal{C}(1)=4\), for \(d\geq 3\) the general \(S\in|\mathcal{I}_{A}(1,d)|\) contains no \(C\) such that \(A\cup C\in\mathcal{T}^{*}(n+1)\). Now assume \(A^{\prime}\notin\mathcal{T}^{*}(n+1)\). Thus there are connected components \(C^{\prime}\) and \(C^{\prime\prime}\) of \(A\) such that \(C\cup C^{\prime}\cup C^{\prime\prime}\notin\mathcal{T}^{*}(3)\), i.e. \(C\) intersects the unique line \(L\) meeting \(C^{\prime}\) and \(C^{\prime\prime}\). Since \(\dim L=1\), to exclude this case it is sufficient to prove that \(h^{0}(\mathcal{I}_{A^{\prime}}(1,d))\leq h^{0}(\mathcal{I}_{A}(1,d))-2\), i.e. \(h^{0}(F_{1},\mathcal{I}_{A\cap Y}(df))\leq d\). But \(h^{0}(\mathcal{O}_{F_{1}}(df))=d+1\), \(\mathcal{O}_{F_{1}}(df)\) is globally generated and \(A\cap Y\neq\emptyset\); therefore \(h^{0}(F_{1},\mathcal{I}_{A\cap Y}(df))\leq d\). Assume now \(n=d+1\) and that \(A^{\prime}\in\mathcal{C}^{*}(d+2)\). By Lemma 2.21 we have \(h^{1}(\mathcal{I}_{A^{\prime}}(1,d))\leq 1\) and hence \(h^{0}(\mathcal{I}_{A^{\prime}}(1,d))\leq h^{0}(\mathcal{I}_{A}(1,d))-d-1\). Now assume \(A^{\prime}\notin\mathcal{C}^{*}(d+2)\). We need \(h^{0}(F_{1},\mathcal{I}_{A\cap Y}(df))\leq d-1\). Let \(\pi:F_{1}\to\mathbb{P}^{1}\) denote the ruling of \(F_{1}\). Since \(\mathcal{O}_{\mathbb{P}^{1}}(d)\) is very ample, \(h^{0}(F_{1},\mathcal{I}_{A\cap Y}(df))\leq d-1\) if and only if \(\#\pi(A\cap Y)\geq 2\), which is true because \(\#(A\cap Y)=d+1\geq 3\) and, since \(A\in\mathcal{C}^{*}(d+1)\), no fiber of \(\pi\) contains \(3\) or more points of \(A\cap Y\). The only remaining case is now \(d=3\) and \(n=4\), which can be dealt with just by adapting the previous argument for \(d=2\) and \(n=3\). **Lemma 3.5**.: _Fix \(d>0\), \(n\leq d+2\) and consider a general \(A\in\mathcal{T}(n)\). Then \(h^{1}(\mathcal{I}_{A}(1,d))=0\)._ Proof.: Since \(\mathcal{T}(n)\) is Zariski dense in \(\mathcal{C}(n)\), it is sufficient to prove the statement for a general \(A\in\mathcal{C}(n)\). Clearly, it is sufficient to prove the case \(n=d+2\). Take a connected component \(C\) of \(A\) and set \(B:=A\setminus C\) and \(Y\in|\mathcal{I}_{C}(0,1)|\). Consider the residual exact sequence of \(Y\), as in Formula (13). Recall the correspondence \(Y\cong F_{1}\) given in Formula (5) and let \(\rho:Y\to\mathbb{P}^{1}\) denote its ruling. Since \(A\) is general, \(\#\rho(B\cap Y)=d+1\) and hence \(h^{i}(F_{1},\mathcal{I}_{B\cap Y}(df))=0\), for \(i=0,1\). Therefore \(h^{1}(\mathcal{I}_{A}(1,d))=0\). 
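For instance, for \(d=2\) and a general \(A\in\mathcal{T}(n)\) with \(n\leq 3\), the vanishing of \(h^{1}(\mathcal{I}_{A}(1,2))\) together with Formula (7) gives \[h^{0}(\mathcal{I}_{A}(1,2))=15-4n,\] i.e. \(11\), \(7\) and \(3\) for \(n=1,2,3\) respectively; these are precisely the values used in the next section.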
By Theorem 1.1, we already know that an integral \((1,d)\) surface may contain a union of \(d+2\) twistor fibers only if this union belongs to \(\mathcal{T}(d+2)\setminus\mathcal{T}^{*}(d+2)\). Therefore, we introduce the following notation for sets of disjoint smooth conics which are collinear. Given a curve \(L\) of bidegree \((1,0)\) and an integer \(n>0\), let \(\mathcal{C}(n,L)\) denote the set of all \(A\in\mathcal{C}(n)\) such that each connected component of \(A\) meets \(L\). An analogous definition can be given for a curve \(R\) of bidegree \((0,1)\) and, of course, for the family \(\mathcal{T}\) instead of \(\mathcal{C}\). Since each point of \(L\) lies on a unique twistor fiber, the set \(\mathcal{T}(n,L)\) is isomorphic (as a real algebraic variety) to the set \(\mathcal{S}(L,n)\) of all subsets of \(L\) with cardinality \(n\), and hence it is irreducible. **Lemma 3.6**.: _Fix integers \(n\geq 3\) and \(d\geq 1\) and take any \(A\in\mathcal{T}(n,L)\). Then,_ \[h^{1}(\mathcal{I}_{A}(1,d))\geq n-2+\max\{0,n-(d+1)\}.\] Proof.: Call \(C_{1},\ldots,C_{n}\) the connected components of \(A\). Let \(S_{i}\subset C_{i}\) be any union of \(d+2\) distinct points on each conic and \(S:=S_{1}\cup\cdots\cup S_{n}\). Since \(C_{i}\) is a smooth rational curve, the restriction map \(H^{0}(\mathcal{O}_{C_{i}}(1,d))\to H^{0}(\mathcal{O}_{S_{i}}(1,d))\) is bijective. Thus, the restriction map \(H^{0}(\mathcal{O}_{A}(1,d))\to H^{0}(\mathcal{O}_{S}(1,d))\) is bijective and \(\chi(\mathcal{O}_{A}(1,d))=\chi(\mathcal{O}_{S}(1,d))\). Thus we get \(\chi(\mathcal{I}_{A}(1,d))=\chi(\mathcal{I}_{S}(1,d))\). Since \(A\in\mathcal{T}(n,L)\), the curve \(L\) of bidegree \((1,0)\) intersects each conic in \(A\) (and the \((0,1)\) curve \(j(L)\) does the same). Hence we can choose \(S\) such that \(n\) points are on \(L\) and \(n\) points are on \(j(L)\). In other words we assume \((A\cap L)\cup(A\cap j(L))\subseteq S\). By using Bezout and the fact that the bidegree is \((1,d)\), we get that \((n-2)+\max\{0,n-(d+1)\}\) of these points may be omitted without changing the linear system \(|\mathcal{I}_{S}(1,d)|\), and hence \(H^{0}(\mathcal{I}_{S}(1,d))\). It follows that \(h^{1}(\mathcal{I}_{A}(1,d))\geq n-2+\max\{0,n-(d+1)\}\). **Remark 3.7**.: Thanks to the previous lemma, if \(A\in\mathcal{T}(3,L)\), for some bidegree \((1,0)\) curve \(L\), then \(h^{1}(\mathcal{I}_{A}(1,1))\geq 2\), and hence \(h^{0}(\mathcal{I}_{A}(1,1))\geq 1\). However, since no surface of bidegree \((1,0)\) or \((0,1)\) contains an element of \(\mathcal{T}(2)\), every \(S\in|\mathcal{I}_{A}(1,1)|\) is irreducible. Therefore, Proposition 2.17 gives that \(|\mathcal{I}_{A}(1,1)|=\{S\}\) and so, thanks to Formula (7), \(h^{1}(\mathcal{I}_{A}(1,1))=2\). Finally, the following result completes the proof of Theorem 1.2 in the case of \(d+2\) twistor fibers. **Theorem 3.8**.: _Fix an integer \(d\geq 2\) and take a general \(A\in\mathcal{T}(d+2,L)\). Then \(h^{0}(\mathcal{I}_{A}(1,d))\geq d\) and the general \(S\in|\mathcal{I}_{A}(1,d)|\) is integral._ Proof.: By applying Lemma 3.6 with \(n=d+2\), we get \(h^{1}(\mathcal{I}_{A}(1,d))\geq d+1\). Since \(\chi(\mathcal{I}_{A}(1,d))=(d+1)(d+3)-(d+2)^{2}=-1\), we get \(h^{0}(\mathcal{I}_{A}(1,d))\geq d\) and hence \(|\mathcal{I}_{A}(1,d)|\neq\emptyset\). We now prove that a general element in \(|\mathcal{I}_{A}(1,d)|\) is integral. Take \(S\in|\mathcal{I}_{A}(1,d)|\). Every surface of bidegree \((1,0)\) or \((0,1)\) contains at most one connected component of \(A\). 
Therefore, \(S\) cannot be the union of a surface of bidegree \((1,0)\) and \(d\) surfaces of bidegree \((0,1)\). No integral surface of bidegree \((0,x)\), for \(x\geq 2\), contains a twistor fiber. If \(d=2\), then \(h^{0}(\mathcal{I}_{A}(1,2))\geq 2\). However, thanks to Remark 3.7, for every choice of a connected component \(C\) of \(A\), there is only one \(M\in|\mathcal{I}_{(A\setminus C)}(1,1)|\). Since we have considered all the possible reducible elements of \(|\mathcal{I}_{A}(1,2)|\), we get the claim. Now assume \(d>2\). Since \(A\) has finitely many components, it is sufficient to prove that for any \(x\in\{3,\ldots,d-1\}\), any union \(E\) of \(x\) connected components of \(A\) and any connected component \(C\) of \(A\setminus E\) we have \[h^{0}(\mathcal{I}_{E}(1,x-2))<h^{0}(\mathcal{I}_{E\cup C}(1,x-1)), \tag{15}\] and then proceed as in the proof of Theorem 3.4. Let \(Y\) be the only element of \(|\mathcal{O}_{\mathbb{F}}(0,1)|\) containing \(C\). The exact sequence in Formula (10) gives \(h^{0}(\mathcal{I}_{E}(1,x-2))\leq h^{0}(\mathcal{I}_{E\cup C}(1,x-1))\) and equality holds if and only if \(Y\) is contained in the base locus \(\mathcal{B}\) of \(|\mathcal{I}_{E\cup C}(1,x-1)|\). Call \(C_{1},\ldots,C_{x}\) the components of \(E\) and \(Y_{i}\) the only surface of bidegree \((0,1)\) containing \(C_{i}\). By Remark 2.13 the irreducible surfaces \(Y,Y_{1},\ldots,Y_{x}\) are pairwise distinct. For a general \(A\) we get that the integer \(h^{0}(\mathcal{I}_{E}(1,x-2))\) is the same for all unions of \(x\) connected components of \(A\). Thus if the inequality is false, then \(\mathcal{B}\) contains the surface \(Y\cup Y_{1}\cup\cdots\cup Y_{x}\) of bidegree \((0,x+1)\), which is a contradiction. ## 4. Surfaces of bidegree \((1,2)\) and \((1,3)\) In this section we specialize our study to the case of surfaces of bidegree \((1,2)\) and \((1,3)\). In particular, we will prove Theorems 1.3, 1.4 and 1.5. Recall, from Formula (7), that for any \(A\in\mathcal{C}(n)\), we have \(\chi(\mathcal{I}_{A}(1,2))=15-4n\), and hence, if \(n\leq 3\), we get \(h^{0}(\mathcal{I}_{A}(1,2))>0\). We also recall that a general \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) contains finitely many smooth conics and, thanks to Theorem 2.19, for every \(B\in\mathcal{C}(2)\) we have \(h^{1}(\mathcal{I}_{B}(1,1))=0\). ### Surfaces of bidegree \((1,2)\) containing \(0\leq n\leq 4\) twistor fibers In this section, we show the existence of a smooth surface of bidegree \((1,2)\) containing exactly \(0,1,2,3\) or \(4\) twistor fibers. In order to analyze the space \(|\mathcal{I}_{A}(1,2)|\) when \(A\) is in \(\mathcal{C}(n)\) (or in \(\mathcal{T}(n)\)), for \(0\leq n\leq 4\), we will need some preliminary results. Note that the extremal case, when \(n=4\), will be treated in a different way. We start by considering \((1,2)\)-surfaces containing three disjoint conics. **Proposition 4.1**.: _Take \(A\in\mathcal{C}(3)\) such that \(h^{0}(\mathcal{I}_{A}(1,1))>0\). Then_ 1. _there exists a curve_ \(L\) _of bidegree_ \((1,0)\) _and a curve_ \(R\) _of bidegree_ \((0,1)\) _such that_ \(A\in\mathcal{C}(3,L)\) _and_ \(A\in\mathcal{C}(3,R)\)_;_ 2. _there is an integral element in_ \(|\mathcal{I}_{A}(1,2)|\)_;_ 3. \(h^{1}(\mathcal{I}_{A}(1,2))=1\)_;_ 4. _the base locus_ \(\mathcal{B}\) _of_ \(|\mathcal{I}_{A}(1,2)|\) _is_ \(A\cup L\cup R\)_, where_ \(L\) _and_ \(R\) _are the curves defined in (_1_)._ Proof.: We start by arguing as in Remark 3.7. 
Thanks to Remark 2.9, no surface of bidegree \((1,0)\) or \((0,1)\) contains an element of \(\mathcal{C}(2)\); hence, as \(A\in\mathcal{C}(3)\), any element of \(|\mathcal{I}_{A}(1,1)|\) is irreducible. Proposition 2.17 (with \(a=b=c=d=1\)) gives \(h^{0}(\mathcal{I}_{A}(1,1))=1\); hence we set \(|\mathcal{I}_{A}(1,1)|=\{M\}\) and, by Formula (7), we compute \(h^{1}(\mathcal{I}_{A}(1,1))=2\). We now prove the first statement. Let \(C\) be a connected component of \(A\), set \(B:=A\setminus C\) and denote by \(X\) the only element of \(|\mathcal{I}_{C}(1,0)|\). Performing the same construction that leads to Formula (10), we have \[0\to\mathcal{I}_{B}(0,1)\to\mathcal{I}_{A}(1,1)\to\mathcal{I}_{A\cap X,X}(1,1) \to 0.\] Since \(h^{0}(\mathcal{I}_{B}(0,1))=0\) and \(h^{0}(\mathcal{I}_{A}(1,1))=1\), the previous residual exact sequence gives \[h^{0}(\mathcal{I}_{A\cap X,X}(1,1))\geq 1. \tag{16}\] Thanks to Formula (4), we have that \(\mathcal{O}_{X}(1,1)\simeq\mathcal{O}_{F_{1}}(h+2f)\); moreover, recall from Remark 2.9 that \(C\) is identified with an element of \(|\mathcal{O}_{F_{1}}(h+f)|\). Since \(A\cap X\) is the union of \(C\) and the two points \(B\cap X\), there is a fiber \(L\in|f|\) of the ruling of \(F_{1}\) containing \(B\cap X\). Since \(f(h+f)=1\), we have that \(L\) meets \(C\). Thus \(L\) meets each connected component of \(A\). Taking, instead of \(X\), the only element of \(|\mathcal{I}_{C}(0,1)|\), we get the existence of \(R\). We now prove _(2)_. We will show that there is an integral element in \(|\mathcal{I}_{A}(1,2)|\) by showing that the possible reducible cases do not cover the whole family. Remark 2.10 shows that \(A\) is not contained in a surface of bidegree \((1,2)\) with an irreducible component of bidegree \((0,2)\). By Remark 2.9, a bidegree \((1,0)\) or \((0,1)\) surface does not contain any element of \(\mathcal{C}(n)\) with \(n\geq 2\). Hence, there are only finitely many elements of \(|\mathcal{I}_{A}(1,2)|\) with at least \(3\) irreducible components. Since \(h^{0}(\mathcal{O}_{\mathbb{F}}(0,1))=3\), the set of all reducible elements of \(|\mathcal{O}_{\mathbb{F}}(1,2)|\) with an irreducible component of bidegree \((1,1)\) containing \(A\) is isomorphic to \(\mathbb{P}^{2}\). Thus, in order to prove the existence of an integral element in \(|\mathcal{I}_{A}(1,2)|\), it is sufficient to prove that \(h^{0}(\mathcal{I}_{A}(1,2))\geq 4\). But, using the exact sequence (9), this is equivalent to proving that \(h^{1}(\mathcal{I}_{A}(1,2))\geq 1\), and the last inequality is true by Theorem 2.19, because \(\#(L\cap A)=3\). To prove _(3)_, i.e. \(h^{1}(\mathcal{I}_{A}(1,2))=1\), it is sufficient to prove that \(h^{1}(\mathcal{I}_{A}(1,2))\leq 1\). As before, take a connected component \(C\) of \(A\) and set \(B=A\setminus C\). Let \(Y\) be the only element of \(|\mathcal{I}_{C}(0,1)|\). In the identification (5) of \(Y\) with \(F_{1}\) we have \[\mathcal{I}_{(B\cap Y)\cup C,Y}(1,2)\cong\mathcal{I}_{(B\cap Y)\cup C,F_{1}}( h+3f)\cong\mathcal{I}_{B\cap Y,F_{1}}(2f).\] Since \(\#(B\cap Y)=2\) and \(\mathcal{O}_{F_{1}}(2f)\) is globally generated, \(h^{1}(F_{1},\mathcal{I}_{B\cap Y,F_{1}}(2f))\leq 1\). We have \(\operatorname{Res}_{Y}(A)=B\). Thanks to Theorem 2.19, we have \(h^{1}(\mathcal{I}_{B}(1,1))=0\), and the residual exact sequence of \(Y\) \[0\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{(B\cap Y)\cup C,Y}(1,2)\to 0,\] gives \(h^{1}(\mathcal{I}_{A}(1,2))\leq 1\). 
Finally, we discuss the base locus of \(|\mathcal{I}_{A}(1,2)|\) in order to prove _(4)_. First of all, for any surface \(S\in|\mathcal{I}_{A}(1,2)|\), we clearly have \(A\subset S\). Moreover, as \(\#(L\cap A)=3\) and \(\#(R\cap A)=3\), by Bezout both curves are contained in \(S\): in fact, thanks to Remark 2.3 and Formula (3), the general intersection between a curve of bidegree \((1,0)\) and \(S\) consists of one point, while the intersection of a curve of bidegree \((0,1)\) and \(S\) consists of two points (see also Remark 2.16). Therefore, \(L\cup R\subset S\) and \(A\cup L\cup R\subset\mathcal{B}\). We now prove that \(\mathcal{B}\subset A\cup L\cup R\). Fix \(p\in\mathcal{B}\backslash(A\cup L\cup R)\). Take a connected component \(C_{i}\), \(i=1,2,3\), of \(A\) and set \(B_{i}:=A\setminus C_{i}\). Let \(Y_{i}\) be the only element of \(|\mathcal{O}_{\mathbb{F}}(0,1)|\) containing \(C_{i}\). By Proposition 2.22, \(B_{i}\cup L\cup R\) is the base locus of \(|\mathcal{I}_{B_{i}}(1,1)|\). Thus there is \(S_{i}\in|\mathcal{I}_{B_{i}}(1,1)|\) such that \(p\notin S_{i}\). If \(p\notin Y_{i}\), then \(S_{i}\cup Y_{i}\in|\mathcal{I}_{A}(1,2)|\) does not contain \(p\), hence \(p\notin\mathcal{B}\). Since \(S_{1}\cap S_{2}\cap S_{3}=L\cup R\), we may take \(i\in\{1,2,3\}\) such that \(p\notin Y_{i}\). Thus \(\mathcal{B}=A\cup L\cup R\). The following remark shows that if \(A\in\mathcal{C}(3)\) satisfies condition _(1)_ of Proposition 4.1, then the existence of a \((1,1)\)-surface containing \(A\) is guaranteed. In particular, there exists a \((1,1)\)-surface containing any triplet of collinear twistor fibers. **Remark 4.2**.: Take \(A\in\mathcal{C}(3)\) and assume the existence of curves \(L\) of bidegree \((1,0)\) and \(R\) of bidegree \((0,1)\) intersecting each connected component of \(A\). By adapting the proof of Lemma 3.6, since \(\#(L\cap A)=3\) and \(\#(R\cap A)=3\), we have that \(h^{1}(\mathcal{I}_{A}(1,1))\geq 2\). Thus \(h^{0}(\mathcal{I}_{A}(1,1))\geq 1\) and \(A\) satisfies the assumptions of Proposition 4.1. We can be even more specific and say that, if \(A\in\mathcal{C}(3)\) (with no assumption on \(L\) or \(R\)), then \(h^{0}(\mathcal{I}_{A}(1,1))\leq 1\) and, if \(|\mathcal{I}_{A}(1,1)|\neq\emptyset\), the only element of \(|\mathcal{I}_{A}(1,1)|\) is integral. This is true because, thanks to Remark 2.9, any reducible element of \(|\mathcal{O}_{\mathbb{F}}(1,1)|\) contains at most \(2\) disjoint smooth conics. Note that if \(A\in\mathcal{T}(3)\) and \(L\) exists, then we may take \(R:=j(L)\). Thus, if \(A\in\mathcal{T}(3)\), to get \(h^{0}(\mathcal{I}_{A}(1,1))>0\) it is sufficient to assume \(A\notin\mathcal{T}^{*}(3)\). The following lemma is a sort of converse of the previous remark. **Lemma 4.3**.: _Take \(A\in\mathcal{C}^{*}(3)\). Then \(h^{0}(\mathcal{I}_{A}(1,1))=0\) and \(h^{1}(\mathcal{I}_{A}(1,1))=1\)._ Proof.: If \(A\in\mathcal{C}(3)\), thanks to Formula (7), \(\chi(\mathcal{I}_{A}(1,1))=-1\). Hence \(h^{0}(\mathcal{I}_{A}(1,1))=0\) if and only if \(h^{1}(\mathcal{I}_{A}(1,1))=1\). We assume \(h^{0}(\mathcal{I}_{A}(1,1))\neq 0\) and will prove that \(A\notin\mathcal{C}^{*}(3)\). Let \(B\subset A\) be the union of \(2\) connected components of \(A\) and set \(C:=A\setminus B\). Let \(L\) and \(R\) be the curves defined in Remark 2.5 for \(B\in\mathcal{C}(2)\). Take any element \(D\in|\mathcal{I}_{B}(1,1)|\). Since \(\#(B\cap L)=\#(B\cap R)=2\), \(B\subset D\) and \(D\) has bidegree \((1,1)\), the Bezout theorem implies \(L\cup R\subset D\). 
By Theorem 2.19 and Proposition 2.22, we have \(h^{1}(\mathcal{I}_{B}(1,1))=0\), \(h^{0}(\mathcal{I}_{B}(1,1))=2\), and the general element \(M\) in \(|\mathcal{I}_{B}(1,1)|\) is integral. Since \(h^{0}(\mathcal{I}_{B}(1,1))=2\) and \(M\) is general, \(C\nsubseteq M\). Consider the following residual exact sequence: \[0\to\mathcal{I}_{C}\to\mathcal{I}_{A}(1,1)\to\mathcal{I}_{B\cup(M\cap C),M}(1,1)\to 0 \tag{17}\] Since \(M\in|\mathcal{I}_{B}(1,1)|\) and \(h^{1}(\mathcal{O}_{\mathbb{F}})=0\), the exact sequence \[0\to\mathcal{O}_{\mathbb{F}}\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{B,M}(1,1)\to 0\] gives \(h^{0}(M,\mathcal{I}_{B,M}(1,1))=1\). Moreover, \(h^{0}(\mathcal{I}_{C})=0\), and so the sequence (17) and the assumption \(h^{0}(\mathcal{I}_{A}(1,1))\geq 1\) imply \(h^{0}(M,\mathcal{I}_{B\cup(M\cap C),M}(1,1))\geq 1\). By Proposition 2.22, the curve \(B\cup L\cup R\) is the base locus of \(|\mathcal{I}_{B}(1,1)|\) and hence the base locus of \(|\mathcal{I}_{B,M}(1,1)|\) is the curve \(B\cup L\cup R\). Since \(B\cap C=\emptyset\), the degree \(2\) scheme \(C\cap M\) is contained in \(L\cup R\). To get \(A\notin\mathcal{C}^{*}(3)\) we need to prove that \(C\cap L\neq\emptyset\). It is sufficient to observe that \(\deg(C\cap T)\leq 1\) for any curve \(T\) of bidegree \((0,1)\). Indeed, this is true by Remark 2.3 and the fact that \(C\) is the intersection of a surface of bidegree \((1,0)\) and a surface of bidegree \((0,1)\). We now discuss the case of \(A\in\mathcal{C}(2)\) contained in a smooth bidegree \((1,1)\) surface. In this case we will also prove smoothness for the general element in \(|\mathcal{I}_{A}(1,2)|\). **Proposition 4.4**.: _Take any \(A\in\mathcal{C}(2)\) contained in a smooth element of \(|\mathcal{O}_{\mathbb{F}}(1,1)|\). Then we have:_ 1. \(h^{1}(\mathcal{I}_{A}(1,2))=0\) _and_ \(h^{0}(\mathcal{I}_{A}(1,2))=7\)_;_ 2. _the set_ \(A\cup L\) _is contained in the base locus_ \(\mathcal{B}\) _of_ \(|\mathcal{I}_{A}(1,2)|\)_, where_ \(L\) _is the bidegree_ \((1,0)\) _curve described in Remark_ 2.5_;_ 3. _a general_ \(S\in|\mathcal{I}_{A}(1,2)|\) _is smooth._ Proof.: To prove _(1)_ it is sufficient to apply Corollary 2.20, giving \(h^{1}(\mathcal{I}_{A}(1,2))=0\), and Formula (7), which entails \(h^{0}(\mathcal{I}_{A}(1,2))=7\). We now pass to point _(2)_. Take \(L\) and \(R\) as in Remark 2.5. Since \(\mathcal{O}_{\mathbb{F}}(0,1)\) is globally generated, \(\mathcal{B}\subseteq A\cup L\cup R\); moreover, thanks to Remark 2.16 we also have \(A\cup L\subseteq\mathcal{B}\). We are left to prove _(3)_. By Bertini's theorem \(\operatorname{Sing}(S)\subseteq A\cup L\cup R\) for a general \(S\in|\mathcal{I}_{A}(1,2)|\). Fix a smooth \(M\in|\mathcal{I}_{A}(1,1)|\). Take a general \(Y^{\prime}\in|\mathcal{O}_{\mathbb{F}}(0,1)|\). Since \(Y^{\prime}\) is general, \(L\cap Y^{\prime}=\emptyset\) (and hence \(M\cup Y^{\prime}\) is not singular at any \(p\in L\)). Thus, up to a small deformation, a general \(S\) is smooth in a neighborhood of \(L\). We are left to exclude singular points of \(S\) lying on \(A\cup R\). Fix \(p\in A\cup R\) and let \(2p\) be the \(0\)-dimensional subscheme of \(\mathbb{F}\) defined by the ideal \(\mathcal{I}_{p,\mathbb{F}}^{2}\). The surface \(S\) is singular at \(p\) if and only if \(S\in|\mathcal{I}_{2p\cup A}(1,2)|\). 
To conclude our proof we need to prove that \[h^{0}(\mathcal{I}_{2p\cup A}(1,2))=h^{0}(\mathcal{I}_{A}(1,2))-2,\] for all \(p\in(A\cup R)\setminus(A\cap R)\) and that, for \(p\in A\cap R\), \(h^{0}(\mathcal{I}_{2p\cup A}(1,2))<h^{0}(\mathcal{I}_{A}(1,2))\). These two statements give the claim because \((A\cup R)\setminus(A\cap R)\) and \(A\cap R\) are \(1\)-dimensional and \(0\)-dimensional, respectively, and we are saying that the set of bidegree \((1,2)\) surfaces containing \(A\) and singular at \(p\) has codimension \(2\) in the first case and positive codimension in the second one. Let us start by taking \(p\in(A\cup R)\setminus(A\cap R)\). Since \(p\) is a smooth point of \(A\cup R\), \(\deg(2p\cap(A\cup R))=2\). Consider the exact sequence \[0\to\mathcal{I}_{(A\cup R)\cup 2p}(1,2)\to\mathcal{I}_{A\cup R}(1,2)\to \mathcal{I}_{A\cup R}\otimes\mathcal{O}_{2p}(1,2)\to 0. \tag{18}\] Since \(\deg(2p)=4\) and \(A\) is smooth, we have \(h^{0}(\mathcal{I}_{A\cup R}\otimes\mathcal{O}_{2p}(1,2))=2\) if \(p\in A\cup R\) and \(h^{0}(\mathcal{I}_{A}\otimes\mathcal{O}_{2p}(1,2))=4\) if \(p\in R\). Hence it is sufficient to prove that \[h^{1}(\mathcal{I}_{(A\cup R)\cup 2p}(1,2))=0.\] First of all, assume that \(p\in A\setminus R\). Let \(C\) be the connected component of \(A\) containing \(p\) and set \(E:=A\setminus C\). As \(R\) is in the base locus of \(|\mathcal{I}_{A}(1,1)|\), we have that \(h^{0}(\mathcal{I}_{A}(1,1))=h^{0}(\mathcal{I}_{A\cup R}(1,1))\) (see [3, proof of Theorem 1.1]). Moreover, thanks to part _(1)_ and to [3, Remark 4.3], we have \(h^{0}(\mathcal{I}_{A\cup R}(1,1))=h^{0}(\mathcal{I}_{A}(1,1))=h^{0}(\mathcal{ I}_{E}(1,1))-3\). Thus \(p\) is not in the base locus of \(|\mathcal{I}_{E}(1,1)|\). Fix \(M\in|\mathcal{I}_{E}(1,1)|\) such that \(p\notin M\). Let \(Y\) be the surface of \(|\mathcal{O}_{\mathbb{F}}(0,1)|\) containing \(C\) and consider the residual exact sequence with respect to \(Y\): \[0\to\mathcal{I}_{E\cup p}(1,1)\to\mathcal{I}_{A\cup 2p}(1,2)\to\mathcal{I}_{(E \cap Y)\cup C\cup(2p\cap Y),Y}(1,2)\to 0. \tag{19}\] Now we prove that \[h^{1}(\mathcal{I}_{E\cup p}(1,1))=0. \tag{20}\] Recall that \(A=E\cup C\) and \(p\in C\), hence we have the exact sequence \[0\to\mathcal{I}_{A}(1,1)\to\mathcal{I}_{E\cup p}(1,1)\to\mathcal{I}_{p,C}(2)\to 0.\] Thanks to Theorem 2.19 we have that \(h^{1}(\mathcal{I}_{A}(1,1))=0\); on the other hand, since \(C\) is a smooth rational curve, we have \(h^{1}(\mathcal{I}_{p,C}(2))=h^{1}(\mathcal{O}_{C}(1))=0\) and this proves (20). In order to conclude it is sufficient to prove now that \[h^{1}(\mathcal{I}_{(E\cap Y)\cup C\cup(2p\cap Y),Y}(1,2))=0. \tag{21}\] Note that \(\mathcal{I}_{C,Y}(1,2)\cong\mathcal{O}_{F_{1}}(2f)\cong\mathcal{O}_{Y}(0,1)\), hence, by [3, Remark 2.11], we know that \(\mathcal{I}_{C,Y}(1,2)\) is very ample. Therefore we get \(h^{1}(\mathcal{I}_{C\cup(2p\cap Y),Y}(1,2))=0\). Since \(E\cap Y\) consists of a single point, we conclude that (21) holds. Thus the exact sequence (19) gives \(h^{1}(\mathcal{I}_{A\cup 2p,\mathbb{F}}(1,2))=0\), concluding the proof in the case \(p\in A\setminus R\). Fix \(p\in R\setminus(A\cap R)\) and recall that we need to prove that \(h^{0}(\mathcal{I}_{A\cup 2p}(1,2))=5\). Fix a general \(Y^{\prime}\in|\mathcal{I}_{p}(0,1)|\). Since \(Y^{\prime}\) is general, \(R\nsubseteq Y^{\prime}\) (and also \(L\nsubseteq Y^{\prime}\)). Since \(Y^{\prime}\) is smooth, \(Y^{\prime}\cap 2p=(2p,Y^{\prime})\) is a degree \(3\) scheme and \(\operatorname{Res}_{Y^{\prime}}(2p)=\{p\}\). 
As \(p\in R\), we have that \(h^{0}(\mathcal{I}_{A\cup\{p\}}(1,1))=h^{0}(\mathcal{I}_{A}(1,1))=2\). Thus, by the residual exact sequence of \(Y^{\prime}\) it is sufficient to prove that \[h^{0}(Y^{\prime},\mathcal{I}_{(A\cap Y^{\prime})\cup(2p,Y^{\prime})}(1,2)) \leq 3.\] Since \(\mathcal{O}_{Y^{\prime}}(1,2)\) is very ample, we have \(h^{0}(Y^{\prime},\mathcal{I}_{(2p,Y^{\prime})}(1,2))=h^{0}(Y^{\prime},\mathcal{ O}_{Y^{\prime}}(1,2))-3=4\). Thus it is sufficient to prove that \(A\cap Y^{\prime}\) is not contained in the base locus, \(\mathcal{B}^{\prime}\), of \(|\mathcal{O}_{(2p,Y^{\prime})}(1,2)|\). In the identification between \(Y^{\prime}\) and \(F_{1}\) we have \(\mathcal{O}_{Y^{\prime}}(1,2)\cong\mathcal{O}_{F_{1}}(h+3f)\). Let \(N\) be the only element of \(|f|\) containing \(p\). We have \(N\cong\mathbb{P}^{1}\) and \(h^{1}(N,\mathcal{I}_{2p\cap N}(1,2))=0\), but \(N\subseteq\mathcal{B}^{\prime}\). Since \(\mathcal{O}_{F_{1}}(h+2f)\) is very ample, \(\mathcal{I}_{p}(h+2f)\) has only \(p\) in its base locus. Thus \(\mathcal{B}^{\prime}=N\) and so, since \(R\nsubseteq Y^{\prime}\) (and also \(L\nsubseteq Y^{\prime}\)), \(\mathcal{B}^{\prime}\) cannot contain both points of \(A\cap Y^{\prime}\). The last case is \(p\in A\cap R\). To prove our claim, i.e. that \(h^{0}(\mathcal{I}_{2p\cup A}(1,2))<h^{0}(\mathcal{I}_{A}(1,2))\), it is sufficient to use (18). Hence, a general \(S\) is smooth. The following result is analogous to Proposition 4.1, when we choose the conics to be twistor fibers. **Proposition 4.5**.: _Take \(A\in\mathcal{T}(3)\) such that \(h^{0}(\mathcal{I}_{A}(1,1))=0\). Then we have the following:_ 1. \(h^{1}(\mathcal{I}_{A}(1,2))=0\) _and hence_ \(h^{0}(\mathcal{I}_{A}(1,2))=3\)_;_ 2. _there is an integral_ \(S\in|\mathcal{I}_{A}(1,2)|\)_;_ 3. _the base locus of_ \(|\mathcal{I}_{A}(1,2)|\) _is contained in the union of_ \(A\) _and_ \(3\) _distinct curves of bidegree_ \((1,0)\) _ 4. _for a sufficiently general_ \(A\) _(contained in a dense euclidean open subset of_ \(\mathcal{T}(3)\)_), we may take a smooth_ \(S\in|\mathcal{I}_{A}(1,2)|\)_._ Thanks to Lemma 3.1, the hypothesis \(A\in\mathcal{T}(3)\) such that \(h^{0}(\mathcal{I}_{A}(1,1))=0\) in the previous statement, implies that the conics in \(A\) do not belong to any infinite family of twistor fibers contained in a smooth \(j\)-invariant surface of bidegree \((1,1)\). Proof.: We start by proving _(1)_. Fix a connected component \(C\) of \(A\) and call \(D\) the only element of \(|\mathcal{I}_{C}(0,1)|\). Set \(B:=A\setminus C\). To get \(h^{1}(\mathcal{I}_{A}(1,2))=0\) mimicking the proof of Proposition 4.1 it is sufficient to prove that \(h^{1}(F_{1},\mathcal{I}_{B\cap D}(2f))=0\). Assume \(h^{1}(F_{1},\mathcal{I}_{B\cap D}(2f))>0\), i.e. assume the existence of \(T\in|\mathcal{O}_{F_{1}}(f)|\) containing the \(2\) points \(B\cap D\). Since \(C\in|\mathcal{O}_{F_{1}}(h+f)|\), \(C\cap T\neq\emptyset\). Thus \(T\) meets each connected component of \(A\). Remark 4.2 gives \(h^{0}(\mathcal{I}_{A}(1,1))>0\), a contradiction. To prove _(2)_ it is sufficient to show that the reducible cases do not cover the whole \(|\mathcal{I}_{A}(1,2)|\). In fact, reasoning as in the proof of Proposition 4.1, the only possible splitting are of the form \((1,0)+(0,1)+(0,1)\), which are in a finite number, or \((1,1)+(0,1)\), where the bidegree \((1,1)\) component contains \(2\) connected components of \(A\) and the remainder bidegree \((0,1)\) part is uniquely determined. 
Now, \(h^{0}(\mathcal{I}_{B}(1,1))=2\), so, the set of all reducible elements of \(|\mathcal{I}_{A}(1,2)|\) with an irreducible component of bidegree \((1,1)\) does not cover \(|\mathcal{I}_{A}(1,2)|\). We now prove _(3)_ and _(4)_. Since \(h^{0}(\mathcal{I}_{A}(1,1))=0\) and \(A\) is \(j\)-invariant, neither \(\pi_{1}(A)\) nor \(\pi_{2}(A)\) has a triple points (both have \(3\) double points). Set \(L_{1}\cup L_{2}\cup L_{3}:=\pi_{2}^{-1}(\operatorname{Sing}(\pi_{2}(A)))\) and \(R_{1}\cup R_{2}\cup R_{3}:=\pi_{1}^{-1}(\operatorname{Sing}(\pi_{1}(A)))\). Since \(\#(L_{i}\cap A)=\#(R_{i}\cap A)=2\), \(L_{1}\cup L_{2}\cup L_{3}\) are in the base locus of \(|\mathcal{I}_{A}(1,2)|\) and each \(L_{i}\) and each \(R_{i}\) meets exactly \(2\) connected components of \(A\). To prove the existence of a smooth element, it is sufficient to reason as in the proof of Proposition 4.4 case _(3)_. We are now ready to prove the first part of Theorem 1.3. **Theorem 4.6**.: _Fix \(n\in\{0,1,2,3\}\). There is a smooth \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing exactly \(n\) twistor fibers._ Proof.: A general \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) contains only finitely many smooth conics. Since the set of all twistor fibers has real codimension \(4\) in the space of all smooth conics, a general \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) contains no twistor fiber. Now we prove the case \(n=1\). Fix a twistor fiber \(C\) and take a general \(S\in|\mathcal{I}_{C}(1,2)|\). Assume that \(S\) contains another twistor fiber, \(E\). We have \(h^{1}(\mathcal{I}_{C}(1,2))=h^{1}(\mathcal{I}_{C\cup E}(1,2))=0\) (Theorem 2.19 and Remark 2.14). Thus \(|\mathcal{I}_{C\cup E}(1,2)|\) is a \(4\)-codimensional complex projective subspace of \(|\mathcal{I}_{C}(1,2)|\) (this is explained by the equality \(h^{0}(\mathcal{I}_{C\cup E}(1,2))=h^{0}(\mathcal{I}_{C}(1,2))-4\) contained in [3, proof of Theorem 1.1]). However \(\mathcal{T}(1)\) is a real \(4\)-dimensional space. So a general \(S\in|\mathcal{I}_{C}(1,2)|\) contains no other twistor fiber. Note that \(C\) is the base locus of \(|\mathcal{I}_{C}(1,2)|\). By Bertini theorem a general \(S\in|\mathcal{I}_{C}(1,2)|\) is smooth outside \(C\). Fix \(p\in C\) and let \(2p\) the closed subscheme of \(\mathbb{F}\) with \((\mathcal{I}_{p})^{2}\) as its ideal sheaf. Recall that \(2p\subset S\) if and only if \(p\in\operatorname{Sing}(S)\). Since \(\dim C=1\) to get that \(S\) is smooth it is sufficient to prove that \(h^{0}(\mathcal{I}_{2p\cup C}(1,2))\leq h^{0}(\mathcal{I}_{C}(1,2))-2=9\). This follows from the proof of Proposition 4.4 case _(3)_. The case \(n=2\) is true by Proposition 4.4 with \(\mathcal{T}(2)\) instead of \(\mathcal{C}(2)\). The case \(n=3\) is true by Proposition 4.5. In the remainder of the section, we will construct a smooth \((1,2)\)-surface containing \(4\) twistor fibers. The following lemma, in the case \(d=2\), says that if an integral \((1,2)\)-surface contains \(4\) disjoint smooth conics, then these conics are not general, because three of them must be collinear. **Lemma 4.7**.: _Let \(d\geq 2\) and \(A\in\mathcal{C}(d+2)\). If there is an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\), then \(A\notin\mathcal{C}^{*}(d+2)\)_ Proof.: We prove the lemma by induction on \(d\). We start with the case \(d=2\). Assume that \(A\in\mathcal{C}^{*}(4)\), i.e. there is no union \(B\) of \(3\) of the connected components of \(A\) such that \(\#(L\cap B)=3\) for some curve \(L\) of bidegree \((1,0)\). 
Fix a connected component \(C\) of \(A\) and set \(B:=A\setminus C\). Call \(Y\) the only element of \(|\mathcal{I}_{C}(0,1)|\). Remark 2.9 gives \(\operatorname{Res}_{Y}(A)=B\). By assumption and Lemma 4.3, \(h^{0}(\mathcal{I}_{B}(1,1))=0\). Since \(h^{0}(\mathcal{I}_{A}(1,2))\neq 0\), the residual exact sequence \[0\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{A\cap Y,Y}(1,2 )\to 0\,,\] gives \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,2))>0\) (otherwise \(|\mathcal{I}_{A}(1,2)|=\emptyset\)). The scheme \(A\cap Y\) is the union of \(C\) and the \(3\) points \(B\cap Y\). In the identification of \(Y\) with \(F_{1}\) the line bundle \(\mathcal{O}_{Y}(1,2)\) goes to the line bundle \(\mathcal{O}_{F_{1}}(h+3f)\) and \(C\) goes to an element of \(|h+f|\). Thus \(h^{0}(F_{1},\mathcal{I}_{B\cap Y,F_{1}}(2f))>0\). Hence at least \(2\) of the \(3\) points \(B\cap Y\) are in the same fiber \(\hat{L}\) of the ruling \(|f|\) of \(F_{1}\). Since \(\hat{L}\cap C\neq\emptyset\), \(\hat{L}\) is a curve of bidegree \((1,0)\) meeting at least \(3\) connected components of \(A\). Call \(B^{\prime}\) a union of \(3\) components of \(A\) intersecting \(\hat{L}\). The curves \(B^{\prime}\) and \(\hat{L}\) give a contradiction. Assume now that the result is true for \(d+1\). Notice that, as a byproduct of the previous part, if \(B\in\mathcal{C}^{*}(d+1)\), then \(h^{0}(\mathcal{I}_{B}(1,d-1))=0\). Assume \(A\in\mathcal{C}^{*}(d+2)\) and that there is an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\). Fix a connected component \(C\) of \(A\) and set \(B:=A\setminus C\). Take a surface \(Y\) of bidegree \((0,1)\) containing \(C\). By means of the sequence in Formula (10), we either have \(h^{0}(\mathcal{I}_{B}(1,d-1))>0\) or \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,d))>0\). Since \(A\in\mathcal{C}^{*}(d+2)\), \(B\in\mathcal{C}^{*}(d+1)\) and hence, thanks to the inductive assumption, we have \(h^{0}(\mathcal{I}_{B}(1,d-1))=0\). The scheme \(A\cap Y\) is the union of \(C\) and the scheme \(B\cap Y\) with \(A\cap B\cap Y=\emptyset\). Up to the identification of \(Y\) and \(F_{1}\) we have \(\mathcal{O}_{Y}(1,d)(-C)\cong\mathcal{O}_{F_{1}}(df)\). Since \(Y\) has bidegree \((0,1)\) each connected component of \(B\) is either contained in \(Y\) or it intersects transversely \(Y\) at a unique point. By Remark 2.9, the set \(B\cap Y\) is formed by \(d+1\) points. Thus \(\mathcal{I}_{A\cap Y,Y}(1,d)\cong\mathcal{I}_{B\cap Y}(df)\). We saw \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,d))>0\) and this is true if and only if there are \(u_{1},\ldots u_{d+1}\in B\cap Y\) and \(F\in|f|\) such that that \(u_{i}\neq u_{j}\), for \(i\neq j\) and \(\{u_{1},\ldots,u_{d+1}\}\subset F\). The set \(F\cap C\) is a unique point, \(o\), and \(o\notin\{u_{1},\ldots,u_{d+1}\}\), because \(B\cap C=\emptyset\). The curve \(F\) has bidegree \((0,1)\) and hence \(A\notin\mathcal{C}^{*}(d+2)\), a contradiction. Thanks to the previous result, if an integral \((1,2)\)-surface contains \(4\) disjoint smooth conics, then these are in special position. We now show that if these \(4\) conics are twistor fibers, then their position is very special. We begin by introducing the following notation. For \(n\geq 4\), we denote by \(\mathcal{C}(n)^{-}\) the set of elements \(A\in\mathcal{C}(n)\) for which there exists a bidegree \((1,0)\) curve \(L\) such that \(A\in\mathcal{C}(n,L)\). The set \(\mathcal{C}(n)^{-}\) parametrizes the families of \(n\) collinear disjoint smooth conics. 
For \(n\geq 4\) we also write \(\mathcal{T}(n)^{-}:=\mathcal{T}(n)\cap\mathcal{C}(n)^{-}\). The families \(\mathcal{C}(n)^{-}\) and \(\mathcal{T}(n)^{-}\) are Zariski closed in \(\mathcal{C}(n)\) and \(\mathcal{T}(n)\), respectively. The following lemma shows that if an integral \((1,2)\)-surface contains \(4\) twistor fibers, then they are all collinear. **Lemma 4.8**.: _Take an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing \(A\in\mathcal{T}(4)\). Then \(A\in\mathcal{T}(4)^{-}\)_ Proof.: Assume the existence of an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing \(A\in\mathcal{T}(4)\). By Lemma 4.7 there is a union \(B\) of \(3\) of the connected components of \(A\) such that \(B\in\mathcal{T}(3)\setminus\mathcal{T}^{*}(3)\), i.e., there exists a bidegree \((1,0)\) curve \(L\), such that \(B\in\mathcal{T}(3,L)\) and hence, thanks to Remark 3.7\(h^{0}(\mathcal{I}_{B}(1,1))>0\). However, the same remark tells us that \(h^{0}(\mathcal{I}_{B}(1,1))=1\), \(h^{1}(\mathcal{I}_{B}(1,1))=2\) and that the only element \(M\) of \(|\mathcal{I}_{B}(1,1)|\) is integral. As usual, set \(C:=A\setminus B\). As in Remark 4.2 since \(L\) of bidegree \((1,0)\) meets each connected components of \(B\), then \(R:=j(L)\), of bidegree \((0,1)\), do the same. Thanks to Remark 2.16 we get \(B\cup L\cup R\subset M\). Since \(S\) and \(M\) are integral, thanks to Lemma 2.2, the one-dimensional scheme \(S\cap M\) has bidegree \((5,4)\). Since \(B\cup L\cup R\) has bidegree \((4,4)\), then \(C\nsubseteq M\). Let \(Y\) be only element of \(|\mathcal{I}_{C}(0,1)|\). Since \(B\subset M\cup Y\), then \(M\cup Y\in|\mathcal{I}_{A}(1,2)|\). Moreover, as \(S\) is irreducible, then \(S\neq M\cup Y\), and hence \(h^{0}(\mathcal{I}_{A}(1,2))\geq 2\), i.e. \(h^{1}(\mathcal{I}_{A}(1,2))\geq 3\). Since \(h^{1}(\mathcal{I}_{B}(1,1))=2\), the residual exact sequence \[0\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{A\cap Y,Y}(1,2) \to 0,\] gives \(h^{1}(Y,\mathcal{I}_{A\cap Y,Y}(1,2))>0\). As in the proof of Lemma 4.7 we obtain the following inequality \(h^{1}(F_{1},\mathcal{I}_{B\cap Y}(2f))>0\), i.e. there is a curve \(\hat{L}\in|f|\) of bidegree \((1,0)\) intersecting at least \(2\) of the connected components of \(B\). Call \(B^{\prime}\) the union of \(2\) of the connected components of \(B\) intersecting \(\hat{L}\). Since \(\hat{L}\cap C\neq\emptyset\) and \(B^{\prime}\cup C\) is \(j\)-invariant, each connected component of \(B^{\prime}\cup C\) meets \(j(\hat{L})\). Remark 4.2, Proposition 4.1 and Bezout imply the existence of an integral surface \(M^{\prime}\) of bidegree \((1,1)\) containing \(B^{\prime}\cup C\cup\hat{L}\cup j(\hat{L})\). Since \(B^{\prime}\subset M^{\prime}\), \(\hat{L}\) and \(j(\hat{L})\) contain at least \(2\) points of \(M^{\prime}\), then \(B^{\prime}\cup\hat{L}\cup j(\hat{L})\subset M\). But by Remark 2.5 there is a unique curve of bidegree \((1,0)\) intersecting two different smooth conics, hence \(\hat{L}=L\) and both \(L\) and \(j(L)\) intersect each connected component of \(B\). Thus \(L\) intersects each connected component of \(A\), i.e. \(A\in\mathcal{C}(4)^{-}\). As a byproduct of the proof of the previous result, we get the following lemma. It essentially says that there are infinitely many integral \((1,2)\)-surfaces containing \(4\) collinear twistor fibers. **Lemma 4.9**.: _Take \(A\in\mathcal{T}(4)^{-}\) and assume that \(A\) is not contained in a surface of bidegree \((1,1)\). 
Then \(\dim|\mathcal{I}_{A}(1,2)|=1\) and \(|\mathcal{I}_{A}(1,2)|\) contains exactly \(4\) reducible elements of \(|\mathcal{O}_{\mathbb{F}}(1,2)|\)._ Proof.: Call \(L\) the curve of bidegree \((1,0)\) intersecting each connected component of \(A\). Since each connected component of \(A\) is \(j\)-invariant, \(j(L)\) intersects each connected component of \(A\). In the proof of Lemma 4.8 we showed that \(h^{0}(\mathcal{I}_{A}(1,2))\geq 2\). From the lines of that proof, it is possible to derive that only \(4\) elements in \(|\mathcal{I}_{A}(1,2)|\) are reducible and they are all obtained fixing a connected component \(C\) of \(A\) and taking the union of the unique surface \(M_{C}\) of bidegree \((1,1)\) containing \(A\setminus C\) and the unique surface \(Y_{C}\) of bidegree \((0,1)\) containing \(C\). To conclude the proof it is sufficient to prove that \(h^{0}(\mathcal{I}_{A}(1,2))\leq 2\). Take a connected component \(C\) of \(A\) and consider the residual exact sequence \[0\to\mathcal{I}_{C}(0,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{M_{C}\cap A,M_{ C}}(1,2)\to 0. \tag{22}\] We have \(h^{0}(\mathcal{I}_{C}(0,1))=1\), because the intersection of \(2\) different elements of \(|\mathcal{O}_{Y}(0,1)|\) is a curve of bidegree \((1,0)\). Thus by (22) to conclude the proof it is sufficient to prove that the image \(\mathcal{V}\) of \(H^{0}(\mathcal{I}_{A}(1,2))\) in \(H^{0}(M_{C},\mathcal{I}_{M_{C}\cap A,M_{C}}(1,2))\) has dimension at most \(1\). Bezout gives that \(A\cup L\cup j(L)\) is contained in the base locus of \(|\mathcal{I}_{B,M_{C}}(1,2)|\). Every \(D\in|\mathcal{V}|\) has bidegree \((5,4)\) as a curve of \(\mathbb{F}\) and hence a general \(D\in|\mathcal{V}|\) is the union (counting multiplicities as divisors of the smooth surface \(M_{C}\)) of \(A\cup L\cup j(L)\) and a curve \(E\) of bidegree \((1,0)\)) as a curve of \(\mathbb{F}\). Recall that \(M_{C}\) is the blow up of \(\mathbb{P}^{2}\) at \(3\) non collinear points and that these \(3\) exceptional divisors are the only curve of \(M_{C}\) with bidegree \((1,0)\). Since \(M_{C}\) has only finitely many curves of bidegree \((1,0)\), \(D\) is the same for all non-zero elements of \(\mathcal{V}\) and hence \(\dim\mathcal{V}=1\). The following result completes the proof of Theorem 1.3. **Theorem 4.10**.: _There are integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing exactly \(4\) twistor fibers and for any such \(S\) and \(A\in\mathcal{T}(4)\) with \(A\subset S\), there is a curve \(L\) of bidegree \((1,0)\) intersecting each connected component of \(A\). Moreover, \(h^{0}(\mathcal{I}_{A}(1,2))=2\) and each \(S\in|\mathcal{I}_{A}(1,2)|\) is singular along \(L\)._ Proof.: By Theorem 1.4 no integral surface of bidegree \((1,2)\) contains at least \(5\) twistor fiber. The curve \(L\) exists by Lemma 4.8. Now we reverse the construction. We start with \(A\in\mathcal{T}(4,L)^{-}\). Let \(2L\) denote the closed subscheme of the "double line". To prove that each \(S\in|\mathcal{I}_{A}(1,2)|\) is singular at each point of \(L\) it is sufficient to prove that \(h^{0}(\mathcal{I}_{A}(1,2))=h^{0}(\mathcal{I}_{A\cup 2L}(1,2))\). Lemma 4.9 gives \(h^{0}(\mathcal{I}_{A}(1,2))=2\). Hence, it is sufficient to prove that \(h^{0}(\mathcal{I}_{A\cup 2L}(1,2))>1\). For any connected component \(C\) of \(A\) let \(M_{C}\) the only surface of bidegree \((1,1)\) containing \(A\setminus C\) and let \(Y_{C}\) the only surface of bidegree \((0,1)\). Since \(C\cap L\neq\emptyset\), \(L\cap Y_{C}\neq\emptyset\). 
Since \(Y_{C}\) has bidegree \((0,1)\) and \(L\) bidegree \((1,0)\), we get \(L\subset Y_{C}\). Thus \(L\subseteq M_{C}\cap Y_{C}\) and hence \(|\mathcal{I}_{A\cup 2L}(1,2)|\) contains at least the \(4\) reducible elements of \(|\mathcal{I}_{A}(1,2)|\). Hence \(h^{0}(\mathcal{I}_{A\cup 2L}(1,2))>1\). ### Non existence results for surfaces of bidegree \((1,2)\) and \((1,3)\) In this last part, we prove our two last main results, i.e. Theorems 1.4 and 1.5. For any \(A\in\mathcal{T}(n)^{-}\), \(n\geq 4\), let us call \(L\) and \(R:=j(L)\) the curves of bidegree \((1,0)\) e \((0,1)\), respectively, intersecting all the connected components of \(A\). In view of our goal, we need to discuss the reducibility of some surfaces containing a certain amount of twistor fibers. First of all, fix an integer \(n\geq 2\), take \(B\in\mathcal{T}(4)\) such that \(h^{0}(\mathcal{I}_{B}(1,1))>0\) and call \(M\) the unique (see e.g. Remark 4.2) surface of bidegree \((1,1)\) containing \(B\). Since each element of \(\mathcal{C}(1)\) is contained in an element of \(|\mathcal{O}_{\mathbb{F}}(0,1)|\) for each \(E\in\mathcal{T}(n-1)\) there is a reducible element \(W\in|\mathcal{O}_{\mathbb{F}}(1,k)|\), union of \(M\) and \(n-1\) surfaces of bidegree \((0,1)\) such that \(B\cup E\subset W\). The following lemma is a sort of viceversa of this remark. Moreover, it will be a key tool in the last two proofs. **Lemma 4.11**.: _If \(d\geq 2\) and \(A\in\mathcal{T}(d+3)^{-}\) is such that \(h^{0}(\mathcal{I}_{A}(1,d))>0\), then each element of \(|\mathcal{I}_{A}(1,d)|\) has an irreducible component \(M\) of bidegree \((1,1)\) containing at least \(4\) connected components of \(A\). In particular, for any \(n\geq d+3\), there is no integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,d)|\) containing \(A\in\mathcal{T}(n)^{-}\)._ Proof.: In order to prove the last statement, it is sufficient to do the case \(n=d+3\) and thus it is sufficient to prove the first assertion. We use induction on \(d\geq 2\). Let us assume first \(d=2\). Take \(A\in\mathcal{T}(5)^{-}\) and let \(L\) and \(j(L)\) be the curves of bidegree \((1,0)\) and \((0,1)\) intersecting all the connected components of \(A\). Fix a connected component \(C\) of \(A\) and set \(B:=A\setminus C\). Since \(C\cap L\neq\emptyset\), the curve \(C\cup L\) is a connected and nodal curve of bidegree \((2,1)\) with arithmetic genus \(0\). Hence \(h^{0}(\mathcal{O}_{C\cup L}(0,1))=2\). Thus there is \(Y\in|\mathcal{I}_{C\cup L}(0,1)|\) and such a \(Y\) is unique. Since any two smooth conics of \(Y\) meet, no component of \(B\) is contained in \(Y\). Hence \(B\cap Y\) is formed by \(4\) points of \(L\setminus(L\cap C)\). Recall that \(\mathcal{O}_{Y}(1,2)\cong\mathcal{O}_{F_{1}}(h+3f)\) and that \(C\in|\mathcal{O}_{F_{1}}(h+f)|\) and thus \(\mathcal{I}_{A\cap Y,Y}\cong\mathcal{I}_{B\cap L,Y}(2f)\). Since each element of \(|f|\) contains a unique point of \(L\) we have that \(h^{0}(D,\mathcal{I}_{A\cap Y,Y}(1,2))=0\). The residual exact sequence of \(Y\) \[0\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{A\cap Y,Y}(1,2 )\to 0,\] gives an isomorphism \(\varphi:H^{0}(\mathcal{I}_{B}(1,1))\to H^{0}(\mathcal{I}_{A}(1,2))\). If \(h^{0}(\mathcal{I}_{B}(1,1))=0\), then \(h^{0}(\mathcal{I}_{A}(1,2))=0\). Now assume \(h^{0}(\mathcal{I}_{B}(1,1))\neq 0\). The isomorphism \(\varphi\) says that each \(W\in|\mathcal{I}_{A}(1,2)|\) has \(Y\) as an irreducible component, say \(W=Y\cup W_{1}\) with \(W_{1}\in|\mathcal{I}_{B}(1,1)|\), and hence we have the thesis. 
Assume now \(d\geq 3\) and use induction on \(d\). By reasoning as in the base case, take \(A\in\mathcal{T}(d+3)^{-}\) and use the exact sequence \[0\to\mathcal{I}_{B}(1,d-1)\to\mathcal{I}_{A}(1,d)\to\mathcal{I}_{A\cap Y,Y}(1, d)\to 0,\] to prove that \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,d))=0\) and hence that there is an isomorphism \(\varphi:H^{0}(\mathcal{I}_{B}(1,d-1))\to H^{0}(\mathcal{I}_{A}(1,d))\). Now, again, if \(h^{0}(\mathcal{I}_{B}(1,d-1))=0\), then \(h^{0}(\mathcal{I}_{A}(1,d))=0\). Hence, assume \(h^{0}(\mathcal{I}_{B}(1,d-1))\neq 0\). The isomorphism \(\varphi\) says that each \(S\in|\mathcal{I}_{A}(1,d)|\) has \(Y\) as an irreducible component, i.e. \(S=D\cup S_{1}\) with \(S_{1}\in|\mathcal{I}_{B}(1,d-1)|\). The inductive assumption says that \(S_{1}\) has an irreducible component \(M\) of bidegree \((1,1)\) containing at least \(4\) components of \(B\). We now have all the ingredients to prove Theorems 1.4 and 1.5. First we prove that no integral surface of bidegree \((1,2)\) contains \(5\) twistor fibers. Proof of Theorem 1.4.: Assume the existence of an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,2)|\) containing \(A\in\mathcal{T}(5)\). Lemma 4.8 shows that for any union \(A^{\prime}\subset A\) of \(4\) components of \(A\) there is a union \(A^{\prime\prime}\subset A^{\prime}\) of \(3\) connected components intersecting some \(L\) of bidegree \((1,0)\). Let \(L\) be a curve of bidegree \((1,0)\) intersecting the maximal number, \(z\), of components of \(A\). Clearly \(z\geq 3\). By Lemma 4.11 to get a contradiction it is sufficient to prove that \(z\geq 5\). Assume \(z\in\{3,4\}\). Take any ordering \(C_{1},\ldots,C_{5}\) of the connected components of \(A\) and set \(B_{i}:=\pi_{1}(C_{i})\), \(1\leq i\leq 5\). Each \(B_{i}\) is a line of \(\mathbb{P}^{2}\). Since any two conics contained in an element of \(|\mathcal{O}_{\mathbb{F}}(1,0)|\) meet, \(B_{1},\ldots,B_{5}\) are \(5\) different lines of \(\mathbb{P}^{2}\). For any \(i<j<h\), there is a curve \(T\) of bidegree \((1,0)\) intersecting \(C_{i}\), \(C_{j}\) and \(C_{h}\) if and only if \(B_{h}\) contains the point \(B_{i}\cap B_{j}\) and in this case \(L=\pi_{1}^{-1}(C_{i}\cap C_{j})\). With no loss of generality we may assume that \(L\) meets \(C_{1},\ldots,C_{z}\). (a) Assume \(z=3\) and hence \(B_{1}\cap B_{2}\in B_{3}\). Applying Lemma 4.8 to \(C_{1}\cup C_{2}\cup C_{4}\cup C_{5}\) we have one of the following mutually exclusive relations: \[\begin{array}{l}B_{1}\cap B_{2}\cap B_{4}\neq\emptyset,\ \ \ \ \ B_{1}\cap B_{2}\cap B_{5}\neq\emptyset,\\ B_{1}\cap B_{4}\cap B_{5}\neq\emptyset,\ \ \ \ \ B_{2}\cap B_{4}\cap B_{5}\neq\emptyset. \end{array}\] Since \(B_{1}\cap B_{2}\in B_{3}\) and \(z=3\), we can exclude the first two cases, i.e. we have \[B_{1}\cap B_{2}\cap B_{4}=B_{1}\cap B_{2}\cap B_{5}=\emptyset.\] Thus, either \(B_{1}\cap B_{4}\cap B_{5}\neq\emptyset\) or \(B_{2}\cap B_{4}\cap B_{5}\neq\emptyset\). Exchanging if necessary \(C_{1}\) and \(C_{2}\) we may assume \(B_{1}\cap B_{4}\cap B_{5}\neq\emptyset\), i.e. \(B_{4}\cap B_{5}\in B_{1}\), and hence \(B_{2}\cap B_{4}\cap B_{5}=\emptyset\). Since \(B_{2}\cap B_{4}\cap B_{5}=\emptyset\), applying Lemma 4.8 to \(C_{2}\cup C_{3}\cup C_{4}\cup C_{5}\) we have one of the following mutually esclusive relations \[\begin{array}{l}B_{2}\cap B_{3}\cap B_{4}\neq\emptyset,\ \ \ \ \ B_{2}\cap B_{3}\cap B_{5}\neq \emptyset,\\ B_{3}\cap B_{4}\cap B_{5}\neq\emptyset.\end{array}\] Since \(B_{4}\cap B_{5}\in B_{1}\) and \(z=3\), \(B_{3}\cap B_{4}\cap B_{5}=\emptyset\). 
Since \(B_{2}\cap B_{3}\in B_{1}\) and \(z=3\), \(B_{2}\cap B_{3}\cap B_{5}=\ \ B_{2}\cap B_{3}\cap B_{4}=\emptyset\), a contradiction. (b) Assume \(z=4\). Since \(C_{1}\cap L\neq\emptyset\), the curve \(C_{1}\cup L\) is a connected and nodal curve of arithmetic genus \(0\) and bidegree \((2,1)\). Thus \(h^{0}(\mathcal{O}_{C_{1}\cup L}(0,1))=2\). Thus there is \(Y\in|\mathcal{I}_{C_{1}\cup L}(0,1)|\). Since \(S\) is irreducible, \(Y\) is not an irreducible component of \(S\). Thus the residual exact sequence of \(Y\) \[0\to\mathcal{I}_{B}(1,1)\to\mathcal{I}_{A}(1,2)\to\mathcal{I}_{A\cap Y,Y}(1,2 )\to 0,\] gives \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,2))\neq 0\). Up to the isomorphism of \(Y\) and \(F_{1}\) we have \(L=h\), \(C_{1}\in|\mathcal{O}_{F_{1}}(h+f)|\) and \(\mathcal{O}_{Y}(1,2)\cong\mathcal{O}_{F_{1}}(h+3f)\). Since \((A\setminus C_{1})\cap C_{1}=\emptyset\), \(\mathcal{I}_{A\cap Y,Y}(1,3)\cong\mathcal{I}_{(A\setminus C_{1})\cap D,D}(2f)\). Since \(L\) meets \(C_{1},\ldots,C_{4}\), \((A\setminus C_{1})\cap D\) contains a set \(F\subset L\) such that \(\#F=3\). Since each element of \(|f|\) contains a unique point of \(L\), \(h^{0}(Y,\mathcal{I}_{A\setminus C)\cap Y,Y}(2f))=0\), a contradiction. We now conclude our paper with the proof of Theorem 1.5 which concerns surfaces of bidegree \((1,3)\). Proof of Theorem 1.5:.: Assume the existence of \(A\in\mathcal{T}(6)\) and of an integral \(S\in|\mathcal{O}_{\mathbb{F}}(1,3)|\) containing \(A\). By Lemma 4.11 to get a contradiction it is sufficient to prove the existence of a curve \(L\) of bidegree \((1,0)\) such that all the components of \(A\) intersects \(L\). By Lemma 4.7 for any union \(A^{\prime}\subset A\) of \(5\) components of \(A\) there is a union \(A^{\prime\prime}\subset A^{\prime}\) of \(3\) connected components intersecting some \(L\) of bidegree \((1,0)\). Let \(L\) be a curve of bidegree \((1,0)\) intersecting the maximal number, \(z\), of components of \(A\). We have that \(z\geq 3\). Hence, by Lemma 4.11 it is sufficient to prove that \(z\geq 6\). Assume then that \(z\leq 5\). We will now exclude all the cases \(z=3,4,5\). For any connected component \(C\) of \(A\), Lemma 4.7 tells us that there is a curve \(L\) of bidegree \((1,0)\) intersecting at least \(3\) connected components of \(A\setminus C\). In particular there is an integral \(M\in|\mathcal{O}_{\mathbb{F}}(1,1)|\) containing at least \(3\) components of \(A\) (see Remark 4.2). Note that \(j(M)=M\). We may take \(M\) with the additional condition that it contains the maximal number \(e\) of components of \(A\). Let \(E\) be the union of the components of \(A\) contained in \(M\). Thus \(3\leq e\leq z\leq 5\). Since each twistor fiber is \(j\)-invariant, \(j(L)\) meets each connected component of \(E\). Bezout gives \(L\cup j(L)\subset M\) and \(L\subset S\). If \(e\geq 4\) Bezout gives \(j(L)\subset S\). However, the one-dimensional cycle \(M\cap S\) has bidegree \((7,5)\) and thus \(e\leq 4\). Set \(\Sigma:=S\cap M\) (as a scheme-theoretic intersection). Since the one-dimensional scheme \(\Sigma\) is the complete intersection of \(\mathbb{F}\) with \(2\) very ample divisors, \(h^{0}(\mathcal{O}_{\Sigma})=1\). Set \(F:=A\setminus E\). (a) Assume \(e=4\). Hence \(E\cup L\cup j(L)\subset\Sigma\). Since \(E\cup L\cup j(L)\) has bidegree \((5,5)\) and \(h^{0}(\mathcal{O}_{\Sigma})=1\), \(\Sigma\) is the union of \(E\cup L\cup j(L)\) and a multiple structure on \(L\). 
Note that \(\Sigma\in|\mathcal{O}_{\mathbb{F}}(1,3)|\) and that \(\Sigma\) contains \(E\cup j(L)\) with multiplicity \(1\) and \(L\) with multiplicity \(3\) (as divisors of the smooth surface \(M\)). Since \(\Sigma\) has multidegree \((7,5)\), \(\Sigma=3L\cup j(L)\cup E\). Note that \(\Sigma\) contains the degree \(4\) zero-dimensional scheme \(F\cap M\). Since \(F\cap E=\emptyset\), \(F\cap(j(L)\cup L)\neq\emptyset\). Thus at least one irreducible component, \(T\), of \(F\) meets \(L\cup j(L)\). Since \(j(T)=T\), \(T\cap L\neq\emptyset\). Thus \(z=5\). Let \(C\) be a component of \(E\). Since \(C\cap L\neq\emptyset\), \(C\cup L\) is a connected and nodal curve of bidegree \((2,1)\) with arithmetic genus \(0\). Thus \(h^{0}(\mathcal{O}_{C\cup L}(0,1))=2\). Thus there is \(Y\in|\mathcal{I}_{C\cup L}(0,1)|\neq 0\). Since \(S\) is irreducible, \(Y\) is not an irreducible component of \(S\). Thus the residual exact sequence of \(Y\) gives \(h^{0}(Y,\mathcal{I}_{A\cap Y,Y}(1,3))\neq 0\). Up to the isomorphism of \(Y\) and \(F_{1}\) we have \(L=h\), \(C\in|\mathcal{O}_{F_{1}}(h+f)|\) and \(\mathcal{O}_{Y}(1,3)\cong\mathcal{O}_{F_{1}}(h+4f)\). Since \((A\setminus C)\cap C=\emptyset\), \(\mathcal{I}_{A\cap Y,Y}(1,3)\cong\mathcal{I}_{(A\setminus C)\cap Y,Y}(3f)\). Since \(z=5\), \((A\setminus C)\cap Y\) contains a set \(H\subset L\) such that \(\#H=4\). Since each element of \(|f|\) contains a unique point of \(L\), \(h^{0}(Y,\mathcal{I}_{A\setminus C)\cap Y,Y}(3f))=0\), a contradiction. (b) Assume \(e=3\). Fix a connected component \(C\) of \(E\) and set \(B:=A\setminus C\). Set \(\{Y\}:=|\mathcal{I}_{C}(0,1)|\). As in step (a) we have that \(L\subset Y\). The following exact sequence \[0\to\mathcal{I}_{B}(1,2)\to\mathcal{I}_{A}(1,3)\to\mathcal{I}_{C\cup(B\cap Y),Y}(1,3)\to 0 \tag{23}\] is the residual exact sequence of \(Y\). Since \(Y\) is not an irreducible component of \(S\), we have \(h^{0}(Y,\mathcal{I}_{C\cup(B\cap Y),Y}(1,3))>0\). As in step (a) we have \(\mathcal{I}_{C\cup(B\cap Y),Y}(1,3)\cong\mathcal{I}_{B\cap Y,F_{1}}(3f)\). We now have two possibilities: either \(h^{0}(\mathcal{I}_{B}(1,2))=0\) or \(h^{0}(\mathcal{I}_{B}(1,2))>0\). (b1) Assume for the moment \(h^{0}(\mathcal{I}_{B}(1,2))=0\). Thus \(h^{0}(Y,\mathcal{I}_{C\cup(B\cap Y),Y}(1,3))\geq 2\) and \(h^{1}(Y,\mathcal{I}_{C\cup(B\cap Y),Y}(1,3))\geq 3\). Up to the identification of \(Y\) and \(F_{1}\) we have \(\mathcal{I}_{C,Y}(1,3)\cong\mathcal{O}_{F_{1}}(3f)\). Hence the \(5\) points \(B\cap Y\) give at most one condition to the linear system \(|\mathcal{O}_{F_{1}}(3f)|\). Thus there is \(J\in|\mathcal{O}_{F_{1}}(f)|\) such that \(B\cap Y\subset J\). Note that \(J\) is a curve of bidegree \((1,0)\). The maximality of the integer \(e\) gives a contradiction. (b2) Assume that \(h^{0}(\mathcal{I}_{B}(1,2))>0\). By Theorem 1.4 any surface containing B is reducible, say \(M_{1}\cup Y\) with \(M_{1}\) irreducible of bidegree \((1,1)\) containing at least \(4\) components of B. Thus \(e\geq 4\), a contradiction.
2304.08648
Dynamic Vector Bin Packing for Online Resource Allocation in the Cloud
Several cloud-based applications, such as cloud gaming, rent servers to execute jobs which arrive in an online fashion. Each job has a resource demand and must be dispatched to a cloud server which has enough resources to execute the job, which departs after its completion. Under the `pay-as-you-go' billing model, the server rental cost is proportional to the total time that servers are actively running jobs. The problem of efficiently allocating a sequence of online jobs to servers without exceeding the resource capacity of any server while minimizing total server usage time can be modelled as a variant of the dynamic bin packing problem (DBP), called MinUsageTime DBP. In this work, we initiate the study of the problem with multi-dimensional resource demands (e.g. CPU/GPU usage, memory requirement, bandwidth usage, etc.), called MinUsageTime Dynamic Vector Bin Packing (DVBP). We study the competitive ratio (CR) of Any Fit packing algorithms for this problem. We show almost-tight bounds on the CR of three specific Any Fit packing algorithms, namely First Fit, Next Fit, and Move To Front. We prove that the CR of Move To Front is at most $(2\mu+1)d +1$, where $\mu$ is the ratio of the max/min item durations. For $d=1$, this significantly improves the previously known upper bound of $6\mu+7$ (Kamali & Lopez-Ortiz, 2015). We then prove the CR of First Fit and Next Fit are bounded by $(\mu+2)d+1$ and $2\mu d+1$, respectively. Next, we prove a lower bound of $(\mu+1)d$ on the CR of any Any Fit packing algorithm, an improved lower bound of $2\mu d$ for Next Fit, and a lower bound of $2\mu$ for Move To Front in the 1-D case. All our bounds improve or match the best-known bounds for the 1-D case. Finally, we experimentally study the average-case performance of these algorithms on randomly generated synthetic data, and observe that Move To Front outperforms other Any Fit packing algorithms.
Aniket Murhekar, David Arbour, Tung Mai, Anup Rao
2023-04-17T22:53:47Z
http://arxiv.org/abs/2304.08648v1
# Dynamic Vector Bin Packing for ###### Abstract Several cloud-based applications, such as cloud gaming, rent servers to execute jobs which arrive in an online fashion. Each job has a resource demand, such as GPU requirement, and must be dispatched to a cloud server which has enough resources to execute the job, which departs after its completion. Under the "pay-as-you-go" billing model, the server rental cost is proportional to the total time that servers are actively running jobs. The problem of efficiently allocating a sequence of online jobs to servers without exceeding the resource capacity of any server while minimizing total server usage time can be modelled as a variant of the dynamic bin packing problem (DBP), called MinUsageTime DBP [21]. In this work, we initiate the study of the problem with multi-dimensional resource demands (e.g. CPU/GPU usage, memory requirement, bandwidth usage, etc.), called MinUsageTime Dynamic Vector Bin Packing (DVBP). We study the competitive ratio (CR) of Any Fit packing algorithms for this problem. We show almost-tight bounds on the CR of three specific Any Fit packing algorithms, namely First Fit, Next Fit, and Move To Front. We prove that the CR of Move To Front is at most \((2\mu+1)d+1\), where \(\mu\) is the ratio of the max/min item durations. For \(d=1\), this implies a significant improvement over the previously known upper bound of \(6\mu+7\)[18]. We then prove the CR of First Fit and Next Fit are bounded by \((\mu+2)d+1\) and \(2\mu d+1\), respectively. Next, we prove a lower bound of \((\mu+1)d\) on the CR of any Any Fit packing algorithm, an improved lower bound of \(2\mu d\) for Next Fit, and a lower bound of \(2\mu\) for Move To Front in the 1-D case. All our bounds improve or match the best-known bounds for the 1-D case. Finally, we experimentally study the average-case performance of these algorithms on randomly generated synthetic data, and observe that Move To Front outperforms other Any Fit packing algorithms. ## 1 Introduction Bin packing is an extensively studied problem in combinatorial optimization [11]. The goal of the classical bin packing problem is to pack a given set of items with different sizes into the smallest number of identical bins such that the total size of items in each bin does not exceed the capacity of the bin. The dynamic bin packing problem (DBP) [9] is a generalization of the classical bin packing problem, where items can arrive and depart over time, and the objective is to minimize the number of bins used over time. Dynamic bin packing naturally models several resource allocation problems, including those arising in cloud computing [30, 19]. Motivated by cloud computing applications where the goal is to dispatch jobs arriving in an online fashion to servers, with the objective of minimizing the server usage time, Li, Tang, and Cai [21] introduced a variant of dynamic bin packing called _MinUsageTime Dynamic Bin Packing_. In this variant, items appear in an online fashion and must be packed into resource-bounded bins. When an item (job) arrives, it must immediately be dispatched to a bin (server) which has enough resources to accommodate (execute) the job. The objective is to minimize the _total time_ that bins are _active_, i.e., contain at least one active item that has not yet departed. Moreover, due to overheads involved in migrating jobs from one server to another, it is assumed that the placement of an item to a bin is irrevocable. 
The objective function, the total usage time of the bins, naturally models the power consumption or rental cost of the servers. Below we discuss two concrete applications motivating the MinUsageTime Dynamic Bin Packing problem, one faced by cloud service provider and the other by the cloud service user. Virtual machine placement on physical servers.A popular way that cloud resource providers offer their services to users is through the use of Virtual Machines (VMs). Users can request VMs with certain resource demands, and in turn cloud resource managers place these VMs on physical servers with sufficient resource capacity to serve the VM requests. Minimizing the total usage time of the physical machines can directly lead to power and cost savings on the cloud provider end [5, 23]. As [15] suggests, even a 1% improvement in packing efficiency can lead to cost savings of roughly $100 million per year for Microsoft Azure. By viewing the VM requests as items and the physical servers as bins, the problem of minimizing the usage time of physical machines therefore directly translates to the MinUsageTime DBP problem. Cloud gaming and other cloud user applications.Several organizations offer their services to customers by renting servers (as VMs) from on-demand public cloud providers such as Amazon EC2. They are typically charged according to their server usage times in hourly or monthly basis following the "pay-as-you-go" billing model [26]. Minimizing the organization's server renting cost is therefore equivalent to minimizing the usage time of the rented servers, thus reducing to the MinUsageTime DBP problem where customer jobs are viewed as items and rented servers as bins [21, 18, 32, 22, 28, 2]. Organizations such as GaiKai [12], OnLive[24], and StreamMyGame [31] offer cloud based gaming services where computer games run on rented cloud servers, thereby saving players from the overheads involved in set-up and maintenance of the hardware/software infrastructure required for the game. A request from a customer to play a game is dispatched to a gaming server which has enough resources such as GPU or bandwidth to run the game instance, which runs until the customer stops playing the game. In this context, the gaming service providers can greatly benefit by employing efficient algorithms that dispatch customers' game requests to rented servers minimize the server rental cost. MinUsageTime Dynamic Bin Packing is therefore a problem of commercial and industrial importance, and has consequently also received theoretical interest in recent years to analyze the performance of online algorithms for the problem [21, 18, 32, 22, 28, 27, 2, 5]. The performance of an online algorithm is usually measured in terms of its _competitive ratio_ (CR) [4], which is the worst-case ratio between the quality (e.g. total server renting cost) of algorithm's solution to the quality of the solution produced by an optimal, offline algorithm. In this paper, we study the _non-clairvoyant_ version of the problem, wherein the departure time of an item is unknown upon its arrival. In the context of cloud gaming, this models customers being able to play games for durations unknown to the cloud gaming service. Existing work has primarily focused on Any Fit packing algorithms, which is a well-studied family of algorithms for the classical bin packing problem. An Any Fit packing algorithm is an algorithm that opens a new bin only when an incoming item cannot be packed in any of the existing open bins. 
Any Fit packing algorithms are useful and well-studied because they take decisions based on the current system state and not its history, leading to a desirable simplicity in implementation and explainability, and a low computational and memory footprint. Li, Tang, and Cai [21, 22] showed that the competitive ratio of any Any Fit packing algorithm for the MinUsageTime DBP problem is at least \(\mu+1\), where \(\mu\) is the ratio of the max/min item durations. A series of works [21, 22, 32, 28] showed that the competitive ratio of First Fit, a specific Any Fit packing algorithm which tries to pack a new item into the earliest opened bin that can accommodate the item, is at most \(\mu+3\). Likewise, Next Fit, which keeps only one open bin at a time to pack items, was shown to have a CR of at least \(2\mu\)[32, 28] and at most \(2\mu+1\)[18]. On the other hand, Best Fit, which tries to pack a new item into bin with highest load, was shown to have an unbounded CR [22]. Kamali and Lopez-Ortiz [18] studied another Any Fit packing algorithm called Move To Front, which tries to pack a new item into the bin which was most recently used. They showed that Move To Front has an (asymptotic) competitive ratio of at most \(6\mu+7\), and conjectured that the CR is at most \(2\mu+1\). They also performed an average-case experimental study of these algorithms, and found that Move To Front had the best average-case performance, closely followed by First Fit and Best Fit. Modelling Multi-dimensional Resources Demands.All of the above previous works assumed the item sizes to be one-dimensional. They assume items/jobs have a single dominant resource, such as CPU or GPU demand. However, in practice, the resources demands of an item/job such as a VM request or a game instance are multi-dimensional, e.g., CPU and GPU usage, memory requirement, bandwidth usage, etc. In the bin packing literature, the multi-dimensional version is a problem of great significance and is extensively studied [8, 25]. The multidimensional nature of demand usually makes the problem much more challenging [33]. In this work, we study the generalization of the MinUsageTime DBP problem called _MinUsageTime Dynamic Vector Bin Packing_ (DVBP) where the sizes of items and bins are \(d\)-dimensional vectors. The design of online algorithms for DVBP and the analysis of their competitive ratios is therefore a natural and practically important problem, and was indicated as an important direction for future work by previous papers [27, 32, 28]. ### Our Contributions In this work, we initiate the study of the multi-dimensional version of the MinUsageTime DBP problem, called MinUsageTime Dynamic Vector Bin Packing (henceforth referred to simply as DVBP), where item and bin sizes are \(d\)-dimensional vectors. We analyze the competitive ratios of Any Fit packing algorithms for the problem, including four specific algorithms: First Fit, Next Fit, Best Fit and Move To Front. Table 1 summarizes the best known bounds on the CR of these algorithms, and contrasts our results with previous work. Our contributions are summarized below. * We prove an upper bound of \((2\mu+1)d+1\) on the competitive ratio of Move To Front for DVBP. For \(d=1\), this implies a significant improvement on the previously known upper bound of \(6\mu+7\) shown by Kamali and Lopez-Ortiz [18] to \(2\mu+2\), and nearly settles their conjecture of the CR being \(2\mu+1\). 
Central to our result is a novel decomposition of the usage periods of the bins used by Move To Front into two classes of intervals, and carefully analyzing the cumulative cost of intervals in each class. * We prove an upper bound of \((\mu+2)d+1\) on the competitive ratio of First Fit, and of \(2\mu d+1\) for Next Fit. These results rely on new lower bounds on the cost of the optimum solution for the \(d\)-D case. Our upper bounds then follow by combining these bounds with analysis techniques inspired from upper bound results for the 1-D case [28, 18]. Note that the competitive ratio of Best Fit is unbounded even for the 1-D case [22]. * We prove a lower bound of \((\mu+1)d\) on the competitive ratio of any Any Fit packing algorithm for DVBP. We also show a lower bound of \(2\mu d\) on the CR of Next Fit and of \(\max\{2\mu,(\mu+1)d\}\) for Move To Front. In conjunction with our upper bound results, these results show almost-tightness for the CR of First Fit and Next Fit for the \(d\)-D case, and of Move To Front for the 1-D case. Our results improve or match all known lower bounds for the 1-D case [21, 22, 28, 32, 18]. Due to the multi-dimensionality of the problem, lower bound results of the 1-D case do not directly translate to the \(d\)-D case, and hence we design new constructions to establish the lower bounds. At a high level, our constructions use carefully-designed sequences of items which force an Any Fit algorithm to open \(\Omega(d\cdot k)\) bins for a parameter \(k\), each of which contain an item of small size but long-duration, thus leading to a cost of \(\approx\mu\) per bin. The optimal solution however packs all the small items into a single bin with cost \(\approx\mu\) and the other items into \(k\) bins with cost \(\approx 1\), resulting in a total cost of \(O(k+\mu)\), thus implying CR of \(\Omega(\mu d)\). * We perform an average-case experimental study of these algorithms on randomly generated synthetic data. We observe that Move To Front outperforms other Any Fit packing algorithms, with First Fit and Best Fit also performing well on average. Given its bounded competitive ratio indicating good performance against adversarial examples, as well as good average-case performance, our theoretical and experimental results lead us to concur with the recommendation of [18] that Move To Front is the algorithm of choice for practical solutions to the DVBP problem, even in higher dimensions. ### Further Related Work Classical bin packing is known to be NP-hard even in the offline case [13]. There is extensive work on designing algorithms with good competitive ratio for online versions of this problem [17, 10, 11], \begin{table} \begin{tabular}{|p{113.8pt}||p{56.9pt}||p{56.9pt}||p{56.9pt}|p{56.9pt}|} \hline **Algorithm** & **Lower Bound** & **Upper Bound** & **Lower Bound** & **Upper Bound** \\ & (\(d=1\)) & (\(d=1\)) & (\(d\geq 1\)) & (\(d\geq 1\)) \\ \hline \hline Any Fit & \(\mu+1\)[22, 28] & \(\infty\) & (\(\mu+1\))\(d\) (Thm. 5) & \(\infty\) \\ \hline Move To Front & \(2\mu\) (Thm. 8) & \(2\mu\)+\(2\) (Thm. 2), improves [18] & \(\max\{2\mu,(\mu+1)d\}\) & (\(2\mu+1\))\(d+1\) (Thm. 8) & (Thm. 2) \\ \hline First Fit & \(\mu+1\)[22, 28] & \(\mu+3\)[28] & (\(\mu+1\))\(d\) (Thm. 5) & (\(\mu+2\))\(d+1\) (Thm. 3) \\ \hline Next Fit & \(2\mu\)[32] & \(2\mu+1\)[18] & \(2\mu d\) (Thm. 6) & \(2\mu d+1\) (Thm. 
4) \\ \hline Best Fit & Unbounded [22] & \(\infty\) & Unbounded [22] & \(\infty\) \\ \hline \end{tabular} \end{table} Table 1: Summary of the best known upper and lower bounds on the competitive ratio of algorithms for the MinUsageTime Dynamic Vector Bin Packing problem in \(d\) dimensions. \(\mu\) denotes the ratio of max/min item durations. Colored cells highlight our results. with \(1.54037\) and \(1.58889\) being the best-known lower and upper bounds [3, 29]. In online vector bin packing, the item sizes are \(d\)-dimensional vectors. Garey et al. [14] showed that a generalization of First Fit has a CR of \(d+0.7\), and Azar et al. [1] showed an information-theoretic lower bound of \(\Omega(d^{1-\varepsilon})\). For further results on multi-dimensional versions of bin packing, we refer the reader to the survey [8]. On the practical side, Panigrahy et al. [25] studied heuristics for the offline vector bin packing problem. Dynamic bin packing with the objective of minimizing the number of bins is also the subject of several works [9, 7, 6, 16]. Coffman et al. [9] showed that First Fit has a competitive ratio of between \(2.75\) to \(2.897\), and Wong et al. [34] showed a lower bound of \(2.667\) on the CR of any online algorithm. A further generalization called the fully dynamic bin packing problem, in which already packed items can be moved to different bins, has also been studied in [16]. The MinUsageTime dynamic bin packing problem has been studied in several recent works [21, 18, 32, 22, 28, 27, 2, 5]; Table 1 cites the relevant prior work on the non-clairvoyant version of the problem. In the clairvoyant version of the problem the departure time of an item is known when it arrives [27, 2]. This problem is known to have an algorithm with a \(O(\sqrt{\log\mu})\) competitive ratio, with a matching lower bound [2]. The interval scheduling problem [20] is also closely related; see [27, 5] and references therein. In the presence of additional information about future load, algorithms with improved CR were presented by [5]. To the best of our knowledge, the multi-dimensional version of the MinUsageTime DBP problem has not been studied, though it finds mention as a direction for future work in [27, 32, 28]. Organization.The rest of the paper is organized as follows. Section 2 introduces notation, relevant definitions, packing algorithms, and useful preliminary observations. Section 3, 4, and 5 establish upper bounds on the competitive ratios of Move To Front, First Fit, and Next Fit, respectively. Section 6 presents lower bounds on competitive ratio of any Any Fit packing algorithm and certain improved lower bounds for specific algorithms. Section 7 discusses our experimental results examining the average-case performance of various Any Fit packing algorithms on randomly generated synthetic data. Finally, some concluding remarks and directions for future work are presented in Section 8. ## 2 Notation and Preliminaries For \(n\in\mathbb{N}\), let \([n]\) denote the set \(\{1,2,\ldots,n\}\). The \(L_{\infty}\) norm of a vector a vector \(\mathbf{v}\in\mathbb{R}_{\geq 0}^{d}\) is denoted by \(\|\mathbf{v}\|_{\infty}\) and equals \(\max_{i\in[d]}\mathbf{v}_{j}\). We will use the following simple properties of the \(L_{\infty}\) norm, which are proved in Appendix A.1 for completeness. **Proposition 1**.: _The \(L_{\infty}\) norm satisfies the following._ 1. 
_For a vector_ \(\mathbf{v}\in\mathbb{R}_{\geq 0}^{d}\) _and a constant_ \(c\geq 0\)_,_ \(\|c\cdot\mathbf{v}\|_{\infty}=c\cdot\|\mathbf{v}\|_{\infty}\)_._ 2. _For any set of vectors_ \(\mathbf{v}_{1},\ldots,\mathbf{v}_{n}\in\mathbb{R}_{\geq 0}^{d}\)_, we have:_ \[\left\|\sum_{i=1}^{n}\mathbf{v}_{i}\right\|_{\infty}\leq\sum_{i=1}^{n}\| \mathbf{v}_{i}\|_{\infty}\leq d\cdot\left\|\sum_{i=1}^{n}\mathbf{v}_{i}\right\| _{\infty}.\] ### Problem Definition We now formally define the online MinUsageTime Dynamic Vector Bin Packing (DVBP) problem. Problem Instance.Let \(d\in\mathbb{N}\) denote the number of resource dimensions, i.e., CPU, memory, I/O, etc. We let \(\mathcal{R}\) denote the list of items. Each item \(r\in\mathcal{R}\) is specified by a tuple \((a(r),e(r),\mathbf{s}(r))\), where \(a(r),e(r)\in\mathbb{Q}_{\geq 0}\) and \(\mathbf{s}(r)\) denote the arrival time, departure time, and the size of the item, respectively. Note that each item has multi-dimensional resource demands, i.e., \(\mathbf{s}(r)\in\mathbb{R}_{\geq 0}^{d}\) where \(\mathbf{s}(r)_{j}\) denotes the size of the item in the \(j^{th}\) dimension, for \(j\in[d]\). Without loss of generality, we assume that bins have unit capacity in each dimension, i.e., the size of a bin is \(\mathbf{1}^{d}\) and that \(\mathbf{s}(r)\in[0,1]^{d}\) for each \(r\in\mathcal{R}\) by normalization. Further, let \(\mathbf{s}(\mathcal{R})=\sum_{r\in\mathcal{R}}\mathbf{s}(r)\). For an item \(r\in\mathcal{R}\), let \(I(r)=[a(r),e(r))\) denote the _active interval1_ of item \(r\), and we say that item \(r\) is _active_ in the interval \(I(r)\). Let \(\ell(I(r))=e(r)-a(r)\) denote the length of interval \(I(r)\), i.e., the _duration_ of item \(r\). W.l.o.g, we assume \(\min_{r\in\mathcal{R}}\ell(I(r))=1\), and define \(\mu:=\max_{r\in\mathcal{R}}\ell(I(r))\). Thus, \(\mu\) denotes the ratio of the max/min item durations. Finally, let \(\mathsf{span}(\mathcal{R})=\ell(\cup_{r\in\mathcal{R}}I(r))\) denote the total length of time for which at least one item of \(\mathcal{R}\) is active. Footnote 1: For technical reasons \(I(r)\) is half open, i.e., the item \(r\) has departed at time \(e(r)\). Problem Objective.We focus on the non-clairvoyant setting without recourse. This means that an online algorithm must pack an item immediately into a single bin when it arrives, and that the algorithm cannot repack items. Moreover, when an item arrives the algorithm does not have any knowledge of when it will depart. Let \(P_{\mathcal{A},\mathcal{R}}\) denote the _packing_ of the items \(\mathcal{R}\) by the algorithm \(\mathcal{A}\). Let \(B_{1},\ldots,B_{m}\) be the bins opened by \(\mathcal{A}\), and let \(R_{i}\) be the items placed on bin \(B_{i}\). We assume the cost of using a bin for an interval \(I\) equals its length \(\ell(I)\). Then the cost of the packing \(P_{\mathcal{A},\mathcal{R}}\) is defined as the total usage time of all the bins, i.e., \[\mathsf{cost}(\mathcal{A},\mathcal{R})=\sum_{i=1}^{m}\mathsf{ span}(R_{i}). \tag{1}\] With this problem objective, our goal is to compute a packing of \(\mathcal{R}\) that minimizes the above cost. An empty bin is _opened_ the first time it receives an item, and remains _open_ as long as it contains an active item. When an open bin becomes empty, i.e., all items packed in it depart, we say that it is _closed_. We can assume that once a bin is closed, it is never opened again, i.e., no item is packed in it again. This assumption is justified because bins are indistinguishable, and an idle bin has zero cost. 
Thus, a bin which has two usage periods \([a,b)\) and \([c,d)\) separated by an idle period \([b,c)\) can be replaced by two bins active between \([a,b)\) and \([c,d)\) respectively, without any change in the cost. Thus we can assume that the usage period of each bin is a single interval. Likewise, we assume that \(\cup_{r\in\mathcal{R}}I(r)\) equals the single interval \([0,\mathsf{span}(\mathcal{R}))\), otherwise we can consider each interval of \(\cup_{r\in\mathcal{R}}I(r)\) as a separate sub-problem. ### Any Fit Packing Algorithms We now discuss the Any Fit family of algorithms, which are adaptations of standard bin packing algorithms to the DVBP problem. An Any Fit packing algorithm maintains a list \(L\) of open bins, and does not open a new bin upon the arrival of an item \(r\), if \(r\) can be packed into an open bin in \(L\). Its pseudocode is given in Algorithm 1. Different Any Fit packing algorithms differ in how an open bin \(b\in L\) is selected to accommodate an item \(r\) (Line 4), and how the list \(L\) is modified (Lines 9 and 12). In this work, we focus on the following four Any Fit packing algorithms: * _Move To Front_. Bins in the list \(L\) are maintained in order of most-recent usage. Thus, when an item \(r\) arrives, \(r\) is placed in the bin \(b\) which appears earliest in \(L\) and can accommodate \(r\), else a new bin \(b\) is opened. Immediately \(b\) is moved to the front of the list \(L\) as it is the most recently used bin. * _First Fit_. Bins in the list \(L\) are maintained in increasing order of opening time. Thus, an item \(r\) is placed in the earliest open bin that can hold \(r\). * _Next Fit_. At any given time, \(|L|=1\), i.e., at each time Next Fit maintains one open bin in \(L\) as a designated _current_ bin. When an item \(r\) does not fit into the current bin, the current bin is _released_ and a new bin is opened to pack \(r\) and is made the current bin. * _Best Fit_. An item \(r\) is placed in the "most-loaded" bin. When \(d=1\), the load of a bin containing a set of items \(R\) is simply \(\mathbf{s}(R)\). For \(d\geq 2\), there is no unique way of computing the load \(w(R)\) of set \(R\) from the load vector \(\mathbf{s}(R)\). A few options are: * Max load, i.e., \(w(R)=\|\mathbf{s}(R)\|_{\infty}\), * Sum of loads, i.e., \(w(R)=\|\mathbf{s}(R)\|_{1}\), * \(L_{p}\)-norm of the load, i.e., \(w(R)=\|\mathbf{s}(R)\|_{p}\), for \(p\geq 2\). Competitive Ratio.We measure the performance of an online algorithm \(\mathcal{A}\) by its _competitive ratio_, i.e., the worst-case ratio between the cost of the packing produced by algorithm \(\mathcal{A}\) and the cost of the packing produced by the optimal offline algorithm which can repack items [4]. For a list of items \(\mathcal{R}\), denote the optimal, offline cost by \(\mathsf{OPT}(\mathcal{R})\). An algorithm \(\mathcal{A}\) is said to be \(\alpha\)-competitive (for \(\alpha\geq 1\)) if for all item lists \(\mathcal{R}\), we have \(\mathsf{cost}(\mathcal{A},\mathcal{R})\leq\alpha\cdot\mathsf{OPT}(\mathcal{R})\). Naturally, we desire algorithms where \(\alpha\) is as small as possible. ### Lower bounds on the Optimum Cost To analyze the competitive ratio of online algorithms, it is useful to place lower bounds on the optimum cost. To this end, let \(\mathbf{s}(\mathcal{R},t)=\sum_{r\in\mathcal{R}:t\in I(r)}\mathbf{s}(r)\) denote the total size of items that are active at time \(t\). 
Let \(\mathsf{OPT}(\mathcal{R},t)\) denote the number of bins the optimal offline algorithm has open at time \(t\), equivalently, it is the smallest number of bins into which all items active at time \(t\) can be repacked. Then: \[\mathsf{OPT}(\mathcal{R})=\int_{\min_{r\in\mathcal{R}}a(r)}^{\max_{r\in \mathcal{R}}e(r)}\mathsf{OPT}(\mathcal{R},t)dt. \tag{2}\] The following lemma presents \(d\)-dimensional generalizations of lower bounds on OPT introduced in earlier works [22, 28]. **Lemma 1**.: _The following are lower bounds on \(\mathsf{OPT}(\mathcal{R})\)._ 1. \(\mathsf{OPT}(\mathcal{R})\geq\int_{\min_{r\in\mathcal{R}}a(r)}^{\max_{r\in \mathcal{R}}e(r)}\lceil\|\mathbf{s}(\mathcal{R},t)\|_{\infty}\rceil\,dt\)__ 2. \(\mathsf{OPT}(\mathcal{R})\geq\frac{1}{d}\sum_{r\in\mathcal{R}}\lVert \mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\)__ 3. \(\mathsf{OPT}(\mathcal{R})\geq\mathsf{span}(\mathcal{R})\)__ Proof.: The definition of \(\mathbf{s}(\mathcal{R},t)\) and the size of bins being \(\mathbf{1}^{d}\) implies that any algorithm needs at least \(\lceil\mathbf{s}(\mathcal{R},t)_{j}\rceil\) bins to pack the total load on the \(j^{th}\) dimension, for any \(j\in[d]\). Thus, \(\mathsf{OPT}(\mathcal{R},t)\geq\max_{j\in[d]}\lvert\mathbf{s}(\mathcal{R},t )_{j}\rceil=\lceil\|\mathbf{s}(\mathcal{R},t)\|_{\infty}\rceil\). Using (2), we obtain (i). Define the _time-space utilization_ of an item \(r\) as \(u(r)=\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\). The following shows that the total time-space utilization of all items is a lower bound on \(d\cdot\mathsf{OPT}\), thus proving (ii). \[\mathsf{OPT}(\mathcal{R}) \geq\int_{\min_{r\in\mathcal{R}}a(r)}^{\max_{r\in\mathcal{R}}e(r )}\lVert\mathbf{s}(\mathcal{R},t)\rVert_{\infty}\,dt\] (using (i)) \[\geq\int_{\min_{r\in\mathcal{R}}a(r)}^{\max_{r\in\mathcal{R}}e(r )}\bigg{\rVert}\sum_{r:t\in I(r)}\mathbf{s}(r)\bigg{\rVert}_{\infty}dt\] (def of \[\mathbf{s}(\mathcal{R},t)\] ) \[\geq\frac{1}{d}\int_{\min_{r\in\mathcal{R}}a(r)}^{\max_{r\in \mathcal{R}}e(r)}\sum_{r:t\in I(r)}\lVert\mathbf{s}(r)\rVert_{\infty}\,dt\] (using Prop 1) \[=\frac{1}{d}\sum_{r\in\mathcal{R}}\lVert\mathbf{s}(r)\rVert_{ \infty}\cdot\ell(I(r)).\] (swap order) Lastly, observe that since at least one bin is needed for each time \(t\) that an item is active, we have \(\mathsf{OPT}(\mathcal{R},t)\geq 1\) for each time instant \(t\in[0,\mathsf{span}(\mathcal{R}))\). Together with (2), this implies (iii). Note that the lower bound (i) is tighter than both (ii) and (iii). ## 3 Upper Bound on the Competitive Ratio of Move To Front In this section we prove the first main result of our paper. **Theorem 2**.: _The competitive ratio of Move To Front for the MinUsageTime Dynamic Vector Bin packing problem in \(d\)-dimensions is at most \((2\mu+1)d+1\)._ For \(d=1\), our result implies that Move To Front has a competitive ratio of at most \(2\mu+2\). This significantly improves the result of Kamali and Lopez-Ortiz [18], who showed that Move To Front has an _asymptotic_ competitive ratio of \(6\mu+7\), i.e., for any item list \(\mathcal{R}\), they showed \(\mathsf{cost}(\mathrm{MF},\mathcal{R})\leq(6\mu+7)\cdot\mathsf{OPT}(\mathcal{ R})+3(\mu+1)\). Our result also nearly settles their conjecture of the CR being \(2\mu+1\). Their analysis decomposes the active span into segments of length \((\mu+1)\) and compares the cost of OPT with the cost of Move To Front in each such interval. It turns out that this decomposition is sub-optimal. 
Instead, we directly use the nature of the Move To Front algorithm and develop a novel decomposition of the usage periods of each bin \(B\) into intervals based on whether or not in the interval \(B\) is the most recently used bin. We now prove Theorem 2. Suppose Move To Front uses \(m\) bins \(B_{1},B_{2},\ldots,B_{m}\) on an input sequence \(\mathcal{R}\). As mentioned earlier, we can assume that \(\cup_{r\in\mathcal{R}}I(r)=[0,\mathsf{span}(\mathcal{R}))\) and that the usage period of each bin is an interval For \(i\in[m]\), let \(I_{i}=\mathsf{span}(R_{i})\) denote the usage period/active interval of bin \(B_{i}\), where \(R_{i}\) is the set of items packed in \(B_{i}\). The cost of Move To Front (MF) can be expressed as \(\mathsf{cost}(MF,\mathcal{R})=\sum_{i=1}^{m}\ell(I_{i})\). Recall that Move To Front maintains a list \(L\) of open bins in the order of their most-recent usage. We say a bin is a _leader_ at time \(t\) if it is in the front of the list \(L\) at time \(t\). We call an interval \(I\) a _leading interval_ for bin \(B\) if \(B\) is a leader at every time instant in \(I\). If Move To Front packs an item into a bin \(B\), then \(B\) is immediately made the leader. Thus, if a bin \(B\) is not a leader at time \(t\), then it cannot accept a new item at \(t\). Based on the above definition, we partition the active interval of each bin \(B\) into intervals which alternate between leading intervals for \(B\) and non-leading intervals for \(B\). Clearly the time at which a bin is opened begins a leading period for the bin. Thus, for each \(i\in[m]\), the interval \(I_{i}\) is sequentially partitioned into \(2n_{i}\) (half-open) intervals as \(I_{i}=P_{i,1}\cup Q_{i,1}\cup P_{i,2}\cup Q_{i,2}\cup\cdots\cup P_{i,n_{i}} \cup Q_{i,n_{i}}\), where each \(P_{i,j}\) is a leading interval for bin \(B_{i}\) and \(Q_{i,j}\) is a non-leading interval for \(j\in[n_{i}]\). Since empty intervals have zero cost, we can assume that all intervals except perhaps the last non-leading intervals of each bin are non-empty, i.e., perhaps \(Q_{i,n_{i}}=\emptyset\). This decomposition is illustrated in Figure 1 with red/thick lines representing leading intervals and blue/thin lines representing non-leading intervals. Using this decomposition, one can write the cost as: \[\mathsf{cost}(MF,\mathcal{R})=\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\bigg{(}\ell(P_ {i,j})+\ell(Q_{i,j})\bigg{)}. \tag{3}\] We analyze the two summands of (3) separately. First we show: **Claim 1**.: \(\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\ell(P_{i,j})\leq\mathsf{OPT}(\mathcal{R})\)_._ Proof.: At each time \(t\), exactly one bin is the leader, hence the leading intervals of bins \(B_{i}\) and \(B_{i^{\prime}}\) are disjoint, i.e, \(P_{i,j}\cap P_{i^{\prime},j^{\prime}}=\emptyset\) for any \(i,i^{\prime}\in[m]\), \(j\in[n_{i}]\), and \(j^{\prime}\in[n_{i^{\prime}}]\). Since at each time \(t\in[0,\mathsf{span}(\mathcal{R}))\) some bin is the leader, one can immediately observe that all the leading intervals partition the interval \([0,\mathsf{span}(\mathcal{R}))\) (see Figure 1). Combined with Lemma 1 (iii), we arrive at Claim 1: \[\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\ell(P_{i,j})=\mathsf{span}(\mathcal{R})\leq \mathsf{OPT}(\mathcal{R}).\qed\] We now analyze \(\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\ell(Q_{i,j})\). For some \(i\in[m]\) and \(j\in[n_{i}]\), consider a non-leading interval \(Q_{i,j}\) beginning at time \(t_{i,j}\), which is preceded by a leading interval \(P_{i,j}\) which ends at \(t_{i,j}\). 
Figure 1: Shows the usage periods of 3 bins used by Move To Front decomposed into leading (red/thick intervals) and non-leading intervals (blue/thin intervals). The span is also indicated.

The reason that bin \(B_{i}\) ceased to be a leader at time \(t_{i,j}\) is that some other bin \(B_{i^{\prime}}\) received a new item \(r_{i,j}\) and became the leader at time \(t_{i,j}\). Thus, the algorithm was unable to pack item \(r_{i,j}\) in bin \(B_{i}\), the previous leader. Let \(R_{i,j}\subseteq R_{i}\) be the set of items active in bin \(B_{i}\) at the start of the interval \(Q_{i,j}\), i.e., at time \(t_{i,j}\). This means that for some dimension \(k\in[d]\), \((\mathbf{s}(r_{i,j})+\mathbf{s}(R_{i,j}))_{k}>1\), or equivalently \(\|\mathbf{s}(r_{i,j})+\mathbf{s}(R_{i,j})\|_{\infty}>1\). Together with Proposition 1, we obtain: \[\begin{split}&\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\ell(Q_{i,j})<\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})+\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\\ &\leq\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})+\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j}).\end{split} \tag{4}\] We analyze the two summands of (4) separately. First we show:

**Claim 2**.: \(\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\leq\mu\cdot d\cdot\mathsf{OPT}(\mathcal{R})\)_._

Proof.: Observe that since no new item is packed in a bin \(B_{i}\) during a non-leading interval \(Q_{i,j}\), and each item has a duration of at most \(\mu\), we have \(\ell(Q_{i,j})\leq\mu\). Moreover, the items \(r_{i,j}\) are distinct, since each \(r_{i,j}\) is uniquely associated with the interval \(Q_{i,j}\). Using these observations, we obtain the claim as follows: \[\begin{split}&\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\leq\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})\rVert_{\infty}\cdot\mu\\ &\leq\mu\cdot\bigg{(}\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(r_{i,j})\rVert_{\infty}\cdot\ell(I(r_{i,j}))\bigg{)}\quad\text{(since $\ell(I(r))\geq 1$)}\\ &\leq\mu\cdot\bigg{(}\sum_{r\in\mathcal{R}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\bigg{)}\\ &\leq\mu\cdot d\cdot\mathsf{OPT}(\mathcal{R}).\qquad\text{(using
Lem. 1 (ii))}\end{split}\] This proves the claim.

The next claim analyzes the second summand of (4).

**Claim 3**.: \(\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\leq(\mu+1)\cdot d\cdot\mathsf{OPT}(\mathcal{R})\)_._

Proof.: Applying Proposition 1 to \(\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\) and swapping the order of summation, we first get \[\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\leq\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\sum_{r\in R_{i,j}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(Q_{i,j})=\sum_{i=1}^{m}\sum_{r\in R_{i}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\bigg{(}\sum_{j\in[n_{i}]:r\in R_{i,j}}\ell(Q_{i,j})\bigg{)}. \tag{5}\] Now fix a bin \(B_{i}\) and an item \(r\in R_{i}\), and let \(j^{+}\) be the largest index \(j\in[n_{i}]\) with \(r\in R_{i,j}\). Each interval \(Q_{i,j}\) with \(r\in R_{i,j}\) begins at a time at which \(r\) is active, so it starts within \(I(r)\); since these intervals are pairwise disjoint, their union is contained in \(I(r)\cup Q_{i,j^{+}}\). Therefore \[\sum_{j\in[n_{i}]:r\in R_{i,j}}\ell(Q_{i,j})\leq\ell(I(r))+\ell(Q_{i,j^{+}})\leq\ell(I(r))+\mu\leq(\mu+1)\cdot\ell(I(r)), \tag{6}\] where the last inequality uses \(\ell(I(r))\geq 1\); here we used \(\ell(Q_{i,j^{+}})\leq\mu\). Using the above in eq.
(5), we obtain: \[\sum_{i=1}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(Q_{i,j})\leq\sum_{i=1}^{m}\sum_{r\in R_{i}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\bigg{(}\sum_{j\in[n_{i}]:r\in R_{i,j}}\ell(Q_{i,j})\bigg{)}\qquad\text{(from (5))}\] \[\leq(\mu+1)\cdot\sum_{i=1}^{m}\sum_{r\in R_{i}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\qquad\text{(from (6))}\] \[=(\mu+1)\cdot\sum_{r\in\mathcal{R}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\leq(\mu+1)\cdot d\cdot\mathsf{OPT}(\mathcal{R}),\qquad\text{(using Lem. 1 (ii))}\] thus proving the claim.

Claims 1, 2 and 3 together with equations (3) and (4) imply: \[\mathsf{cost}(MF,\mathcal{R})\leq\big{(}1+\mu d+(\mu+1)d\big{)}\cdot\mathsf{OPT}(\mathcal{R})=\big{(}(2\mu+1)d+1\big{)}\cdot\mathsf{OPT}(\mathcal{R}),\] thus proving Theorem 2.
## 4 Upper Bound on the Competitive Ratio of First Fit

In this section we prove an upper bound on the competitive ratio of First Fit.

**Theorem 3**.: _The competitive ratio of First Fit for the MinUsageTime Dynamic Vector Bin packing problem in \(d\)-dimensions is at most \((\mu+2)d+1\)._

Suppose First Fit (FF) uses \(m\) bins \(B_{1},B_{2},\ldots,B_{m}\) on an input sequence \(\mathcal{R}\), indexed in the order of their opening times. As before, let \(R_{i}\) be the items packed in bin \(B_{i}\) and let \(I_{i}=[I_{i}^{-},I_{i}^{+})\) denote its usage period. For \(i\in[m]\), let \(t_{i}=\max\{I_{i}^{-},\max_{i^{\prime}<i}I_{i^{\prime}}^{+}\}\), i.e., \(t_{i}\) is the time at which the last of the bins \(B_{1},\ldots,B_{i-1}\) closes (with \(t_{1}=I_{1}^{-}\)). We decompose \(I_{i}=P_{i}\cup Q_{i}\), where \(P_{i}=[I_{i}^{-},\min(I_{i}^{+},t_{i}))\) and \(Q_{i}=[\min(I_{i}^{+},t_{i}),I_{i}^{+})\). Since closed bins are never reopened, \(Q_{i}\) is exactly the portion of \(I_{i}\) during which \(B_{i}\) is the lowest-indexed open bin; hence the intervals \(\{Q_{i}\}_{i\in[m]}\) are pairwise disjoint and cover \([0,\mathsf{span}(\mathcal{R}))\). Note also that \(P_{1}=\emptyset\). The cost of First Fit can thus be written as \[\mathsf{cost}(FF,\mathcal{R})=\sum_{i=1}^{m}\ell(P_{i})+\sum_{i=1}^{m}\ell(Q_{i}). \tag{7}\]

**Claim 4**.: \(\sum_{i=1}^{m}\ell(Q_{i})=\mathsf{span}(\mathcal{R})\leq\mathsf{OPT}(\mathcal{R})\)_._

Proof.: The claim follows directly from the definition of the decomposition (see Fig. 2) and Lemma 1 (iii).

Let us now define \(R_{i}^{\prime}\subseteq R_{i}\) to be an inclusion-wise minimal cover of the interval \(P_{i}\). That is, the union of active intervals of items in \(R_{i}^{\prime}\) covers \(P_{i}\), but any \(J\subset R_{i}^{\prime}\) does not cover \(P_{i}\). Let \(r_{i,1},\ldots,r_{i,n_{i}}\) be the \(n_{i}\) items in \(R_{i}^{\prime}\), sorted by their arrival time. By the minimality of \(R_{i}^{\prime}\), each item in \(R_{i}^{\prime}\) has a distinct arrival time. Thus we can index the items so that \(a(r_{i,1})<a(r_{i,2})<\cdots<a(r_{i,n_{i}})\). Moreover, the minimality of \(R_{i}^{\prime}\) also implies that the ending times of the items are in sorted order, i.e., \(e(r_{i,1})<e(r_{i,2})<\cdots<e(r_{i,n_{i}})\); if not, an item could be removed from \(R_{i}^{\prime}\) while still covering \(P_{i}\), contradicting the minimality of \(R_{i}^{\prime}\). We now decompose each non-empty interval \(P_{i}\) into \(n_{i}\) disjoint periods \(P_{i}=P_{i,1}\cup\cdots\cup P_{i,n_{i}}\), where \(P_{i,j}=[a(r_{i,j}),a(r_{i,j+1}))\) for \(1\leq j<n_{i}\) and \(P_{i,n_{i}}=[a(r_{i,n_{i}}),\min(I_{i}^{+},t_{i}))\). Since this is a partition of \(P_{i}\), we have \(\ell(P_{i})=\sum_{j=1}^{n_{i}}\ell(P_{i,j})\) for each \(i\geq 2\).
For an item \(r_{i,j}\in R_{i}^{\prime}\), we refer to the largest index bin with index less than \(i\) which is open at time \(a(r_{i,j})\) as the _blocking bin2_\(B(i,j)\) for the item \(r_{i,j}\) and the interval \(P_{i,j}\). Note that since an item \(r_{i,j}\) is placed in bin \(B_{i}\), all previously opened bins including the blocking bin \(B(i,j)\) could not pack \(r_{i,j}\) when it arrived. Thus: Footnote 2: [28] use the terminology supplier bin instead \[\|\mathbf{s}(r_{i,j})+\mathbf{s}(R_{i,j})\|_{\infty}>1,\] where \(R_{i,j}\) is the set of items in \(B(i,j)\) that are active at time \(a(r_{i,j})\). Using this, we have: \[\begin{split}&\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\ell(P_{i,j})< \sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(r_{i,j})+\mathbf{s}(R_{i,j})\|_{ \infty}\cdot\ell(P_{i,j})\\ &\leq\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(r_{i,j})\|_{ \infty}\cdot\ell(P_{i,j})+\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(R_{i,j} )\|_{\infty}\cdot\ell(P_{i,j}),\end{split} \tag{8}\] We analyze the summands of (8) separately. We first have: **Claim 5**.: \(\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(r_{i,j})\|_{\infty}\cdot\ell(P_{ i,j})\leq d\cdot\mathsf{OPT}(\mathcal{R})\)_._ Proof.: By definition of \(P_{i,j}\), we have \(P_{i,j}\subseteq I(r_{i,j})\). Thus, \(\ell(P_{i,j})\leq\ell(I(r_{i,j}))\). Lemma 1 (ii) then proves the claim. The next claim analyzes the second summand of (8). **Claim 6**.: \(\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(R_{i,j})\|_{\infty}\cdot\ell(P_{ i,j})\leq(\mu+1)\cdot d\cdot\mathsf{OPT}(\mathcal{R})\)_._ Proof.: Let \(\hat{R}=\cup_{i=2}^{m}\cup_{j=1}^{n_{i}}R_{i,j}\) be the set of all items belonging to bins considered as blocking bins by items in \(\{R_{i}^{\prime}\}_{i\geq 2}\). We have: \[\begin{split}&\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\|\mathbf{s}(R_{i,j })\|_{\infty}\cdot\ell(P_{i,j})\leq\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\sum_{r\in R _{i,j}}\|\mathbf{s}(r)\|_{\infty}\cdot\ell(P_{i,j})\\ &=\sum_{r\in\hat{R}}\|\mathbf{s}(r)\|_{\infty}\cdot\bigg{(}\sum_{(i,j):r\in R_{i,j}}\ell(P_{i,j})\bigg{)},\end{split} \tag{9}\] where the last inequality follows by changing the order of summation. Now for a fixed \(r\in\hat{R}\) which is packed in some bin \(B\), consider two distinct items \(r_{i,j}\) and \(r_{i^{\prime},j^{\prime}}\) s.t. \(r\in R_{i,j}\cap R_{i^{\prime},j^{\prime}}\). We will show that \(P_{i,j}\cap P_{i^{\prime},j^{\prime}}=\emptyset\). * For \(i=i^{\prime}\), this follows from the fact that \(\{P_{i,j}\}_{j=1}^{n_{i}}\) partitions \(P_{i}\). * For \(i\neq i^{\prime}\), let \(i<i^{\prime}\) w.l.o.g. Then since \(r_{i,j}\) and \(r_{i^{\prime},j^{\prime}}\) have the same blocking bin \(B\), it must be the case that when \(r_{i^{\prime},j^{\prime}}\) arrives, \(B_{i}\) must be closed, otherwise \(B_{i}\) would be the blocking bin for \(r_{i^{\prime},j^{\prime}}\). Thus, \(r_{i,j}\) must have departed when \(r_{i^{\prime},j^{\prime}}\) arrives, implying that \(P_{i,j}\cap P_{i^{\prime},j^{\prime}}=\emptyset\). Thus for a given \(r\in\hat{R}\), the set of intervals \(P_{i,j}\) s.t. \(r\in R_{i,j}\) are pairwise disjoint. Hence we can observe that for each \(r\in\hat{R}\): \[\sum_{(i,j):r\in R_{i,j}}\ell(P_{i,j})\leq\max_{(i,j):r\in R_{i,j}}e(r_{i,j})- \min_{(i,j):r\in R_{i,j}}a(r_{i,j}). \tag{10}\] Note that since each \(r\in\hat{R}\) is active at the arrival time of an item \(r_{i,j}\) s.t. \(r\in R_{i,j}\), we have \(a(r)\leq a(r_{i,j})\leq e(r)\). Thus, \(e(r_{i,j})\leq\mu+a(r_{i,j})\leq\mu+e(r)\). 
Putting these in (10), we obtain: \[\sum_{(i,j):r\in R_{i,j}}\ell(P_{i,j})\leq\mu+e(r)-a(r)\leq\mu+\ell(I(r))\leq(\mu+1)\cdot\ell(I(r)).\] Using the above in (9) with Lemma 1 (ii), we see that \[\sum_{i=2}^{m}\sum_{j=1}^{n_{i}}\lVert\mathbf{s}(R_{i,j})\rVert_{\infty}\cdot\ell(P_{i,j})\leq\sum_{r\in\hat{R}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\bigg{(}\sum_{(i,j):r\in R_{i,j}}\ell(P_{i,j})\bigg{)}\] \[\leq(\mu+1)\cdot\sum_{r\in\hat{R}}\lVert\mathbf{s}(r)\rVert_{\infty}\cdot\ell(I(r))\leq(\mu+1)\cdot d\cdot\mathsf{OPT}(\mathcal{R}),\] thus proving the claim.

Claims 4, 5 and 6 together with equations (7) and (8) imply: \[\mathsf{cost}(FF,\mathcal{R})\leq((\mu+2)d+1)\cdot\mathsf{OPT}(\mathcal{R}),\] thus proving Theorem 3.

## 5 Upper Bound on the Competitive Ratio of Next Fit

In this section, we prove an upper bound on the CR of Next Fit.

**Theorem 4**.: _The competitive ratio of Next Fit for the MinUsageTime Dynamic Vector Bin packing problem is at most \(2\mu d+1\)._

Let \(B_{1},\ldots,B_{m}\) be the bins used by Next Fit (NF) to pack an item sequence \(\mathcal{R}\). As before, for \(i\in[m]\), let \(R_{i}\subseteq\mathcal{R}\) be the items packed in bin \(B_{i}\), and let \(I_{i}\) denote the active interval of \(B_{i}\). We have \(\mathsf{cost}(NF,\mathcal{R})=\sum_{i=1}^{m}\ell(I_{i})\). Recall that Next Fit maintains one current bin at a time into which it tries to pack incoming items. Following [18], we decompose the usage period \(I_{i}\) of a bin \(B_{i}\) into two intervals \(P_{i}\) and \(Q_{i}\) based on when Next Fit considered \(B_{i}\) as the current bin. We decompose the interval \(I_{i}=[I_{i}^{-},I_{i}^{+})\) as \(I_{i}=P_{i}\cup Q_{i}\), where \(P_{i}=[I_{i}^{-},t_{i})\) and \(Q_{i}=[t_{i},I_{i}^{+})\), with \(t_{i}\in I_{i}\) denoting the time at which \(B_{i}\) was released. Thus, \(P_{i}\) is the time period when \(B_{i}\) was considered the current bin and \(Q_{i}\) is the time period after \(B_{i}\) ceased to be the current bin. Using the above interval-decomposition, we can write the cost as \(\mathsf{cost}(NF,\mathcal{R})=\sum_{i=1}^{m}\ell(P_{i})+\sum_{i=1}^{m}\ell(Q_{i})\). Note that at each time \(t\), exactly one bin is current, hence \(P_{i}\cap P_{i^{\prime}}=\emptyset\) for all \(i\neq i^{\prime}\). Further, at each time some bin is current, hence we conclude that the intervals \(\{P_{i}\}_{i\in[m]}\) partition the interval \([0,\mathsf{span}(\mathcal{R}))\). Together with Lemma 1 (iii), this gives: \[\sum_{i=1}^{m}\ell(P_{i})=\mathsf{span}(\mathcal{R})\leq\mathsf{OPT}(\mathcal{R}). \tag{11}\] Next, observe that a bin \(B_{i}\) was released at time \(t_{i}\) because an item \(r_{i}\) could not be packed into \(B_{i}\). This means: \[\|\mathbf{s}(R_{i}^{\prime})+\mathbf{s}(r_{i})\|_{\infty}>1, \tag{12}\] where \(R_{i}^{\prime}\subseteq R_{i}\) denotes the items packed in \(B_{i}\) which are active at \(t_{i}\). Moreover, since a bin \(B_{i}\) is released at \(t_{i}\), it does not receive any new item in the period \(Q_{i}\). Thus \(\ell(Q_{i})\leq\mu\), for each \(i\in[m]\). We use these observations to prove the following.
\[\sum_{i=1}^{m}\ell(Q_{i})<\sum_{i=1}^{m}\|\mathbf{s}(R_{i}^{\prime})+\mathbf{s}(r_{i})\|_{\infty}\cdot\ell(Q_{i})\qquad\text{(using (12))}\] \[\leq\mu\cdot\sum_{i=1}^{m}\big{(}\|\mathbf{s}(R_{i}^{\prime})\|_{\infty}+\|\mathbf{s}(r_{i})\|_{\infty}\big{)}\qquad\text{(using Prop. 1 and $\ell(Q_{i})\leq\mu$)}\] \[\leq\mu\cdot\sum_{i=1}^{m}\sum_{r\in R_{i}^{\prime}}\|\mathbf{s}(r)\|_{\infty}\cdot\ell(I(r))+\mu\cdot\sum_{i=1}^{m}\|\mathbf{s}(r_{i})\|_{\infty}\cdot\ell(I(r_{i}))\qquad\text{(using Prop. 1 and $\ell(I(r))\geq 1$)}\] \[\leq 2\mu\cdot\sum_{r\in\mathcal{R}}\|\mathbf{s}(r)\|_{\infty}\cdot\ell(I(r))\leq 2\mu\cdot d\cdot\mathsf{OPT}(\mathcal{R}),\qquad\text{(using Lem. 1 (ii))}\] where the second-to-last inequality uses the facts that each item belongs to at most one \(R_{i}^{\prime}\) and that the items \(r_{i}\) are distinct. Together with (11), this gives \(\mathsf{cost}(NF,\mathcal{R})=\sum_{i=1}^{m}\ell(P_{i})+\sum_{i=1}^{m}\ell(Q_{i})\leq(2\mu d+1)\cdot\mathsf{OPT}(\mathcal{R})\), thus proving Theorem 4.

## 6 Lower Bounds

In this section we prove lower bounds on the competitive ratio of Any Fit packing algorithms, starting with a bound that holds for every algorithm in this family.

**Theorem 5**.: _The competitive ratio of any Any Fit packing algorithm for the MinUsageTime Dynamic Vector Bin Packing problem is at least \((\mu+1)d\)._

Proof.: We construct an adversarial instance parameterized by an integer \(k\geq 1\). Let \(\varepsilon,\varepsilon^{\prime}\in(0,1)\) be such that \(\varepsilon>\varepsilon^{\prime}\), \(d\varepsilon>2\varepsilon^{\prime}\), \((d+1)\varepsilon\leq 1\) and \(d^{2}\varepsilon k<1\). We first construct a sequence \(\mathcal{R}_{0}\) of \(n=2\cdot d\cdot k\) items labelled as \(\{1,2,\ldots,2dk\}\), partitioned into groups \(G_{0},G_{1},\ldots,G_{d}\), where \(G_{0}=\{j\in[2dk]:j\text{ is even}\}\) is the set of even-indexed items. The size of an item \(j\in G_{0}\) is \((d\varepsilon-\varepsilon^{\prime})\cdot\mathbf{1}^{d}\), i.e., \[\mathbf{s}(j)=\begin{bmatrix}d\varepsilon-\varepsilon^{\prime}&d\varepsilon-\varepsilon^{\prime}&\cdots&d\varepsilon-\varepsilon^{\prime}\end{bmatrix}.\] For \(i\in[d]\), the group \(G_{i}\) is the set \(\{2m-1:(i-1)\cdot k+1\leq m\leq i\cdot k\}\) of \(k\) items, i.e., odd-indexed items in the range \([2(i-1)k+1,2ik]\). The size of an item \(j\in G_{i}\) is the vector with \((1-d\varepsilon)\) in the \(i^{th}\) dimension and \(\varepsilon\) everywhere else, i.e., \[\mathbf{s}(j)=\begin{bmatrix}\varepsilon&\cdots&(1-d\varepsilon)&\cdots&\varepsilon\end{bmatrix}.\] The items \(\mathcal{R}_{0}=\{1,2,\ldots,2dk\}\) arrive in that order at time \(0\), and their active interval is \([0,1)\). Consider the execution of an Any Fit packing algorithm \(\mathcal{A}\). Items 1 and 2 are initially placed into a single bin \(B_{1}\), after which the bin is loaded at \((1-d\varepsilon+d\varepsilon-\varepsilon^{\prime})=1-\varepsilon^{\prime}\) in dimension \(1\). Now no item \(j\) for \(j\geq 3\) can be packed into \(B_{1}\), since the load in dimension \(1\) would exceed the capacity: we have \(1-\varepsilon^{\prime}+\varepsilon>1\) and \(1-\varepsilon^{\prime}+(d\varepsilon-\varepsilon^{\prime})>1\). Hence another bin \(B_{2}\) is opened. Continuing in this manner, one can observe that at least \(dk\) bins \(B_{1},B_{2},\ldots,B_{dk}\) are created, with bins \(B_{(i-1)k+1},\ldots,B_{ik}\) being loaded at level \((1-\varepsilon^{\prime})\) in dimension \(i\) and at \((\varepsilon+d\varepsilon-\varepsilon^{\prime})\) in other dimensions, for \(i\in[d]\). Thus, \(\mathsf{cost}(\mathcal{A},\mathcal{R}_{0})\geq dk\). On the other hand, \(\mathsf{OPT}(\mathcal{R}_{0})\leq k+1\). This is because all the even-indexed items of \(G_{0}\) can be packed into one bin \(B_{0}\), since \((d\varepsilon-\varepsilon^{\prime})\cdot(dk)<1\). The remaining items can be packed into \(k\) bins, each of which contains exactly one item from \(G_{i}\), for each \(i\in[d]\). This is a feasible packing since the load on the \(j^{th}\) dimension of any such bin is \((1-d\varepsilon)+(d-1)\cdot\varepsilon=1-\varepsilon<1\). We now introduce a sequence \(\mathcal{R}_{1}\) of \(dk\) identical items, each of which is loaded at \(\varepsilon^{\prime}\) in each dimension. These items arrive just before any items of \(\mathcal{R}_{0}\) depart and their active interval is \([1,\mu+1)\). Consider the execution of \(\mathcal{A}\) on \(\mathcal{R}_{0}\cup\mathcal{R}_{1}\). As argued earlier, \(\mathcal{A}\) opens at least \(dk\) bins which are loaded at at most \((1-\varepsilon^{\prime})\) in each dimension (since \((d+1)\varepsilon\leq 1\)), and at exactly \((1-\varepsilon^{\prime})\) in one dimension. Thus, each item of \(\mathcal{R}_{1}\) will be packed in a separate bin by \(\mathcal{A}\). This is because \(\mathcal{A}\) is an Any Fit packing algorithm and will not open a new bin, since the \(dk\) items of \(\mathcal{R}_{1}\) can be packed in the \(dk\) bins created by \(\mathcal{A}\) while packing \(\mathcal{R}_{0}\); moreover, once a bin receives an item of \(\mathcal{R}_{1}\), it is loaded at exactly \(1\) in one dimension and cannot accommodate another item of \(\mathcal{R}_{1}\). Subsequently, items in \(\mathcal{R}_{0}\) depart, and each of the \(dk\) bins contains one item from \(\mathcal{R}_{1}\) in the period \([1,\mu+1)\). Thus we have \(\mathsf{cost}(\mathcal{A},\mathcal{R}_{0}\cup\mathcal{R}_{1})\geq dk(1+\mu)\).
On the other hand, the optimal algorithm can pack all items of \(\mathcal{R}_{1}\) into the bin \(B_{0}\) which held all even-indexed items, since \((d\varepsilon-\varepsilon^{\prime})dk+dk\cdot\varepsilon^{\prime}=d^{2}\varepsilon k<1\). Thus only bin \(B_{0}\) has a usage period of length \(\mu+1\), while the remaining \(k\) bins have a usage period of length \(1\) since they only contain items from \(\mathcal{R}_{0}\). Thus, \(\mathsf{OPT}(\mathcal{R}_{0}\cup\mathcal{R}_{1})\leq k+1+\mu\). Thus the competitive ratio of \(\mathcal{A}\) is: \[CR(\mathcal{A})\geq\frac{\mathsf{cost}(\mathcal{A},\mathcal{R}_{0}\cup\mathcal{R}_{1})}{\mathsf{OPT}(\mathcal{R}_{0}\cup\mathcal{R}_{1})}\geq\frac{dk(\mu+1)}{k+\mu+1}=\frac{(\mu+1)d}{1+(\mu+1)/k}.\] Since \(k\) is an arbitrary parameter, in the limit \(k\to\infty\), we have \(CR(\mathcal{A})\geq(\mu+1)d\) for any Any Fit packing algorithm \(\mathcal{A}\), proving the claimed lower bound. The execution of any Any Fit packing algorithm \(\mathcal{A}\) on the item list \(\mathcal{R}_{0}\cup\mathcal{R}_{1}\) is illustrated in Figure 3.

Figure 3: Illustrates the load on bins used by any Any Fit packing algorithm \(\mathcal{A}\) on the item list \(\mathcal{R}_{0}\cup\mathcal{R}_{1}\). Part (a) shows \(\mathcal{A}\) opens \(dk\) bins in the time period \([0,1)\) where bins \(B_{(i-1)k+1},\ldots,B_{ik}\) have load \(1-\varepsilon^{\prime}\) in dimension \(i\). Part (b) shows that \(\mathcal{A}\) packs \(dk\) items of \(\mathcal{R}_{1}\) at time \(1\), fully loading each bin \(B_{(i-1)k+1},\ldots,B_{ik}\) in dimension \(i\). Part (c) shows the time period \([1,\mu+1)\) when items of \(\mathcal{R}_{0}\) have departed and each bin contains one item from \(\mathcal{R}_{1}\).

We now prove a stronger lower bound against Next Fit using a different construction.

**Theorem 6**.: _The competitive ratio of Next Fit for the MinUsageTime Dynamic Vector Bin Packing problem is at least \(2\mu d\)._

Proof.: We present the following worst-case adversarial example against which Next Fit has a competitive ratio of at least \(2\mu d\). Let \(k\geq 2\) be an even integer. Let \(\varepsilon,\varepsilon^{\prime}\in(0,1)\) be such that \(\varepsilon^{\prime}>2d\varepsilon\) and \(\varepsilon^{\prime}dk<1\). We construct a sequence of \(n=2\cdot d\cdot k\) items labelled as \(\mathcal{R}=\{1,2,\ldots,2dk\}\). As before, we partition the items into groups \(G_{0},G_{1},\ldots,G_{d}\), where \(G_{0}=\{j\in[2dk]:j\text{ is even}\}\) is the set of even-indexed items. The size of an item \(j\in G_{0}\) is \(\varepsilon^{\prime}\cdot\mathbf{1}^{d}\), i.e., \[\mathbf{s}(j)=\begin{bmatrix}\varepsilon^{\prime}&\varepsilon^{\prime}&\cdots&\varepsilon^{\prime}\end{bmatrix}.\] For \(i\in[d]\), the group \(G_{i}\) is the set \(\{2m-1:(i-1)\cdot k+1\leq m\leq i\cdot k\}\) of \(k\) items, i.e., odd-indexed items in the range \([2(i-1)k+1,2ik]\). The size of an item \(j\in G_{i}\) is the vector with \((\tfrac{1}{2}-d\varepsilon)\) in the \(i^{th}\) dimension and \(\varepsilon\) everywhere else, i.e., \[\mathbf{s}(j)=\begin{bmatrix}\varepsilon&\cdots&(\tfrac{1}{2}-d\varepsilon)&\cdots&\varepsilon\end{bmatrix}.\] Items \(\mathcal{R}=\{1,2,\ldots,2dk\}\) arrive in that order at time \(0\). The active interval of items in \(G_{0}\) is \([0,\mu)\) and of those in \(\cup_{i=1}^{d}G_{i}\) is \([0,1)\). Consider the execution of Next Fit (NF) on \(\mathcal{R}\). Items 1 and 2 are placed in one bin \(B_{1}\), which is then loaded at \(1/2-d\varepsilon+\varepsilon^{\prime}\) in dimension \(1\). Item 3 does not fit in \(B_{1}\) since the load on dimension \(1\) would then be \(2(1/2-d\varepsilon)+\varepsilon^{\prime}>1\). Thus NF releases \(B_{1}\) and opens a bin \(B_{2}\) to accommodate item 3, after which item 4 is also placed in \(B_{2}\). Continuing this way, one observes that NF creates \(k\) bins to pack the items \(\{1,2,\ldots,2k\}\), with the last bin \(B_{k}\) being loaded at \((1/2-d\varepsilon+\varepsilon^{\prime})\) in dimension \(1\) and \((\varepsilon+\varepsilon^{\prime})\) in other dimensions.
Subsequently, item \((2k+1)\), which is loaded at \((1/2-d\varepsilon)\) in dimension \(2\), can be placed in \(B_{k}\), and item \((2k+2)\) can also be placed in \(B_{k}\), since \((\varepsilon+\varepsilon^{\prime}+1/2-d\varepsilon+\varepsilon^{\prime})<1\). This loads the bin \(B_{k}\) at \((1/2-d\varepsilon+\varepsilon+2\varepsilon^{\prime})\) in dimension \(2\). Consequently, item \((2k+3)\) cannot be placed in this bin, since it has size \((1/2-d\varepsilon)\) in dimension \(2\) and the resulting load would exceed \(1\). Thus NF releases \(B_{k}\) and opens another bin. It can be seen that for the rest of the items in \(\{2k+3,\ldots,4k\}\), NF opens \((k-1)\) bins, similar to the execution on items \(\{1,\ldots,2k\}\). Continuing this way for \((d-1)\) more phases, with phase \(i\) corresponding to the packing of the items in \(\{2(i-1)k+1,\ldots,2ik\}\) for \(2\leq i\leq d\), one can observe that NF opens \(k+(k-1)+\cdots+(k-1)=1+(k-1)d\) bins. Since each bin contains at least one even-indexed item, each bin is active for a duration of \(\mu\). Thus, \(\mathsf{cost}(NF,\mathcal{R})\geq(1+(k-1)d)\mu\). On the other hand, \(\mathsf{OPT}(\mathcal{R})\leq\mu+k/2\). This is because the optimal algorithm can pack all items of \(G_{0}\) into a single bin, since \(\varepsilon^{\prime}\cdot dk<1\). This bin will be active in the interval \([0,\mu)\), leading to a cost of \(\mu\). The remaining items can be packed into \(k/2\) bins, each of which contains exactly two items from \(G_{i}\), for each \(i\in[d]\). This is a feasible packing since the load on the \(j^{th}\) dimension of any such bin is \(2\cdot(1/2-d\varepsilon)+(d-1)\cdot 2\cdot\varepsilon=1-2\varepsilon<1\). Each such bin is active for the interval \([0,1)\), contributing a cost of \(1\). Thus the competitive ratio of Next Fit satisfies: \[CR(NF)\geq\frac{\mathsf{cost}(NF,\mathcal{R})}{\mathsf{OPT}(\mathcal{R})}\geq\frac{(1+(k-1)d)\mu}{\mu+k/2}=\frac{2\mu(d+\frac{1}{k-1})}{\frac{k}{k-1}+\frac{2\mu}{k-1}}.\] Since \(k\) is arbitrary, in the limit \(k\to\infty\), we have \(CR(NF)\geq 2\mu d\), proving the claimed lower bound.

Note that our lower bounds (Theorems 5 and 6) match the lower bound of \((\mu+1)\) for Any Fit packing algorithms [22, 28] and \(2\mu\) for Next Fit [32] for the one-dimensional case. However, the one-dimensional examples of [22, 32] do not generalize to multiple dimensions, and hence the new constructions of Theorems 5 and 6 are needed. We also record that the competitive ratio of Best Fit can be unbounded, even for \(d=1\), as was shown in [22].

**Theorem 7** ([22]).: _The competitive ratio of Best Fit for the MinUsageTime Dynamic Vector Bin Packing problem is unbounded._
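For concreteness, the adversarial family from the proof of Theorem 5 can be generated programmatically. The sketch below reuses the illustrative Item class from the earlier sketch; the choices of epsilon and epsilon-prime are one admissible setting of the stated constraints, and the tiny offset on the \(\mathcal{R}_{1}\) arrivals encodes the convention that these items arrive just before the items of \(\mathcal{R}_{0}\) depart.

```python
def theorem5_instance(d: int, k: int, mu: float):
    """Items R0 + R1 from the proof of Theorem 5, with one admissible choice of
    eps and eps' (eps > eps', d*eps > 2*eps', (d+1)*eps <= 1 and d^2*eps*k < 1)."""
    eps = 1.0 / (2 * d * d * k)        # then d^2 * eps * k = 1/2 < 1
    epsp = eps / 4.0                   # eps' < eps and 2*eps' = eps/2 < d*eps
    items = []
    # R0: 2dk items arriving at time 0 in index order, active on [0, 1)
    for j in range(1, 2 * d * k + 1):
        if j % 2 == 0:                 # even-indexed items form group G_0
            size = tuple(d * eps - epsp for _ in range(d))
        else:                          # odd item j = 2m - 1 belongs to group G_i
            m = (j + 1) // 2
            i = (m - 1) // k + 1
            size = tuple((1.0 - d * eps) if dim == i - 1 else eps for dim in range(d))
        items.append(Item(0.0, 1.0, size))
    # R1: dk items of size eps' in every dimension, active on [1, mu + 1).
    # The arrivals are shifted by a tiny delta so that, under the strict half-open
    # check of the earlier sketch, they are processed while R0 is still active
    # ("arrive just before any items of R0 depart").
    for _ in range(d * k):
        items.append(Item(1.0 - 1e-9, mu + 1.0, tuple(epsp for _ in range(d))))
    return items

# e.g.: packing_cost(any_fit(theorem5_instance(2, 50, 10.0), first_fit_choice, 2))
```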
Lastly, we examine the competitive ratio of Move To Front.

**Theorem 8**.: _The competitive ratio of the Move To Front algorithm for the MinUsageTime Dynamic Vector Bin Packing problem is at least \(\max\{2\mu,(\mu+1)d\}\)._

Proof.: Note that the lower bound of \((\mu+1)d\) follows from Theorem 5 since Move To Front is an Any Fit packing algorithm. For the lower bound of \(2\mu\), consider the following one-dimensional example. For a parameter \(n\geq 1\), let \(\mathcal{R}=\{1,2,\ldots,4n\}\) be a sequence of \(4n\) items which arrive in that order at time \(0\). The odd-indexed items have size \(1/2\) and active interval \([0,1)\). The even-indexed items have size \(1/(2n)\) and active interval \([0,\mu)\). Consider the execution of Move To Front on \(\mathcal{R}\). Items 1 and 2 are placed into a single bin \(B_{1}\), which is then loaded at \(1/2+1/(2n)\). Therefore item 3 does not fit into bin \(B_{1}\), and a new bin \(B_{2}\) is opened to accommodate item 3. Since \(B_{2}\) is now the most-recently used bin, it is moved ahead of \(B_{1}\), and therefore receives the next item 4, leading to a load of \(1/2+1/(2n)\). Now item 5 cannot be placed in either \(B_{1}\) or \(B_{2}\), causing another bin to be opened. Continuing this way, we can observe that Move To Front (MF) will create \(2n\) bins. Since each bin contains an even-indexed item, each bin will be active for a duration of \(\mu\). Thus \(\mathsf{cost}(MF,\mathcal{R})=2n\cdot\mu\). On the other hand, the optimum algorithm can pack all the even-indexed items into one bin since there are \(2n\) such items of size \(1/(2n)\). This bin is active in \([0,\mu)\). The remaining \(2n\) odd-indexed items of size \(1/2\) each can be paired up and placed in \(n\) bins, each of which is active in \([0,1)\). This implies \(\mathsf{OPT}(\mathcal{R})\leq\mu+n\). Thus the competitive ratio of Move To Front satisfies: \[CR(MF)\geq\frac{\mathsf{cost}(MF,\mathcal{R})}{\mathsf{OPT}(\mathcal{R})}\geq\frac{2n\mu}{\mu+n}=\frac{2\mu}{1+\frac{\mu}{n}}.\] Since \(n\) is arbitrary, in the limit \(n\to\infty\), we have \(CR(MF)\geq 2\mu\), proving the claimed lower bound. We note that the same example was used by [28, 32] to establish a lower bound of \(2\mu\) on the competitive ratio of Next Fit.

**Remark 1**.: The above theorem implies that the lower bound is \(2\mu\) for \(d=1\), and \((\mu+1)d\) for \(d\geq 2\). We leave improving the lower bound of Move To Front for \(d\geq 2\) as an interesting open question.

## 7 Experimental Evaluation

In this section, we perform an experimental study evaluating the average-case performance of several Any Fit packing algorithms. Experimental Setup.In addition to Move To Front, First Fit and Next Fit, we study the following Any Fit algorithms:

1. Best Fit, where the load of a bin containing a set \(R\) of items is defined as \(w(R)=\|\mathbf{s}(R)\|_{\infty}\).
2. Worst Fit, which tries to place an item in the least loaded bin.
3. Last Fit, which, in contrast with First Fit, tries to place an item in the bin with the latest opening time.
4. Random Fit, which tries to place an item in a bin selected uniformly at random from the list of open bins.

We evaluate the performance of these algorithms on randomly-generated input sequences. Our experimental setup closely follows the setup of [18] for the 1-D case. In \(d\) dimensions, we assume that each bin has size \(B\cdot\mathbf{1}^{d}\), for integers \(d,B\geq 1\). Each item is assumed to have a size in \(\{1,2,\ldots,B\}^{d}\).
For an integral value \(T\) of the span, we assume each item arrives at an integral time step in \([0,T-\mu]\) and has an integral duration in \([1,\mu]\), for integral \(\mu\geq 1\). Each instance in our experiment is a sequence of \(n\) items, where the size and duration of each item is selected randomly from their ranges, assuming a uniform distribution. For different settings of the parameters as described in Table 2, we generate \(m=1000\) input instances. Since the computation of the optimal packing is NP-hard, we evaluate the performance of an algorithm by comparing its packing cost to the lower bound on OPT from Lemma 1 (i). Our experimental results are shown in Figure 4. For each combination of parameters \(d\in\{1,2,5\}\) and \(\mu\in\{1,2,5,10,100,200\}\), we plot the average performance of our algorithms in consideration, with error bars measuring the standard deviation. General Observations.We observe that Move To Front has the best average-case performance among all algorithms, even in multiple dimensions. Close in performance are First Fit and Best Fit, which have nearly identical performance (the blue and brown curves are nearly superimposed), with First Fit having generally lower variance. Following these are Next Fit, Last Fit and Random Fit with the performance of Next Fit degrading with higher values of \(\mu\), i.e., longer jobs on average. As expected, Worst Fit has the worst performance since it packs items inefficiently. Random Fit and Worst Fit are also seen to have a higher variance indicating a high variability in performance. In contrast, Move To Front, First Fit and Best Fit have low variance in performance. \begin{table} \begin{tabular}{|c|c|c|} \hline Parameter & Description & Value \\ \hline \hline \(d\) & Num. dimensions & \(\{1,2,5\}\) \\ \(n\) & Sequence length & \(n=1000\) \\ \(\mu\) & Max. item length & \(\{1,2,5,10,100,200\}\) \\ \(T\) & Sequence span & \(T=1000\) \\ \(B\) & Bin size & \(B=100\) \\ \hline \end{tabular} \end{table} Table 2: Summary of experimental parameters Figure 4: Average-case performance of Any Fit packing algorithms for different values of \(\mu\) and \(d\). Error bars measure std. deviation. Packing and Alignment.We attempt to intuitively explain our experimental results. As [18] discuss, the quality of a solution is influenced by _packing_ and _alignment_. Packing refers to how tightly items are packed together and how much space is wasted, while alignment refers to how well-aligned items are in terms of their durations. Better packing leads to improved performance since lower number of bins need to be opened. Better alignment saves on cost because if items arriving at almost the same time are packed together then in expectation they all depart together, preventing solutions in which multiple bins are active, each containing a small number of long-duration jobs. The performance of Best Fit (resp. Worst Fit) can therefore be explained due to its excellent (resp. poor) packing. Next Fit generally should lead to well-aligned solutions since it tries to fit items into one bin, however does not factor packing into consideration as it only maintains one open bin. On the other hand, Move To Front does relatively well on both fronts: by using the most recently-used bin it leads to well-aligned solutions, and since it keeps all bins open it does not open many new bins like Next Fit. 
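The experimental setup just described can be reproduced with a small harness on top of the earlier sketches (Item, any_fit, first_fit_choice, next_fit and packing_cost are the illustrative helpers introduced above, not code released with the paper). The sketch below draws one uniform instance in the spirit of Table 2 and normalizes each algorithm's cost by the Lemma 1 (i) lower bound, which is the comparison reported in Figure 4.

```python
import math
import random

def random_instance(n=1000, d=2, mu=10, T=1000, B=100, seed=0):
    """One uniform instance in the spirit of Table 2: integral sizes in {1,...,B}^d
    (normalized by the bin size B), integral arrivals in [0, T - mu] and
    integral durations in [1, mu]."""
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        a = rng.randint(0, T - mu)
        dur = rng.randint(1, mu)
        size = tuple(rng.randint(1, B) / B for _ in range(d))
        items.append(Item(float(a), float(a + dur), size))
    return items

def lemma1_lower_bound(items, d):
    """Lemma 1 (i): integral over time of ceil(max-dimension total load); with
    integral arrivals and departures the load is constant on unit intervals."""
    horizon = int(max(r.departure for r in items))
    lb = 0.0
    for t in range(horizon):
        load = [0.0] * d
        for r in items:
            if r.arrival <= t < r.departure:
                for j in range(d):
                    load[j] += r.size[j]
        lb += math.ceil(max(load) - 1e-9)  # small slack guards against FP round-up
    return lb

# Compare a few Any Fit algorithms against the Lemma 1 (i) bound on one instance.
d = 2
items = random_instance(d=d, mu=10)
lb = lemma1_lower_bound(items, d)
for name, bins in [("Move To Front", any_fit(items, first_fit_choice, d, mru=True)),
                   ("First Fit",     any_fit(items, first_fit_choice, d)),
                   ("Next Fit",      next_fit(items, d))]:
    print(f"{name:>13}: cost / LB = {packing_cost(bins) / lb:.3f}")
```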
The packing-and-alignment perspective also explains the observation that the performance of Next Fit worsens with large \(\mu\): for larger \(\mu\), bins are likely to remain open for a longer duration, so it pays to avoid opening many new bins, and Next Fit, which opens a fresh bin whenever an item does not fit in the current bin, does exactly the opposite.

Theory vs Practice.Our work invites an interesting discussion contrasting theory and practice. While the competitive ratio (CR) of Best Fit is theoretically unbounded, it has good average-case performance in practice. In contrast, although the CR of Next Fit is theoretically bounded, it does not do as well in practice. Finally, although the upper bound we prove on the CR of First Fit is lower than the upper bound we prove on the CR of Move To Front, on average Move To Front has better performance than First Fit. These observations suggest theoretically studying the average-case performance of these algorithms against input sequences arising from specific distributions (such as the uniform distribution considered in our experiments) as an interesting direction for future research.

## 8 Concluding Remarks

In this paper, we studied the MinUsageTime Dynamic Vector Bin Packing problem, where the size of an item is a \(d\)-dimensional vector, modelling multi-dimensional resources like CPU/GPU, memory, bandwidth, etc. We proved almost-tight lower and upper bounds on the competitive ratio (CR) of Any Fit packing algorithms such as Move To Front, First Fit and Next Fit. Notably, we showed that Move To Front has a CR of at most \((2\mu+1)d+1\), thus significantly improving the previously known upper bound of \(6\mu+7\) for the 1-D case. Our experiments show that Move To Front has superior average-case performance to the other Any Fit packing algorithms. We discuss some interesting directions for future work. The first is to close the gap between the upper and lower bounds presented in this paper. Concretely, investigating if the lower bound of \(\max\{2\mu,(\mu+1)d\}\) on the CR of Move To Front can be improved to \(2\mu d\) is a natural first step. Another direction is to design algorithms with improved CR for a small number of dimensions, such as \(d=2\). Lastly, studying the problem when given additional information about the input, perhaps obtained using machine learning algorithms, is another direction for future work. For instance, studying the clairvoyant DVBP problem, where the duration of an item is accurately known at the time of its arrival, is an interesting question.
2307.09006
OxfordVGG Submission to the EGO4D AV Transcription Challenge
This report presents the technical details of our submission on the EGO4D Audio-Visual (AV) Automatic Speech Recognition Challenge 2023 from the OxfordVGG team. We present WhisperX, a system for efficient speech transcription of long-form audio with word-level time alignment, along with two text normalisers which are publicly available. Our final submission obtained 56.0% of the Word Error Rate (WER) on the challenge test set, ranked 1st on the leaderboard. All baseline codes and models are available on https://github.com/m-bain/whisperX.
Jaesung Huh, Max Bain, Andrew Zisserman
2023-07-18T06:48:39Z
http://arxiv.org/abs/2307.09006v1
# Oxfordyog submission to the ego4d av transcription challenge ###### Abstract This report presents the technical details of our submission on the EGO4D Audio-Visual (AV) Automatic Speech Recognition Challenge 2023 from the OxfordVGG team. We present _WhisperX_, a system for efficient speech transcription of long-form audio with word-level time alignment, along with two text normalisers which are publicly available. Our final submission obtained 56.0% of the Word Error Rate (WER) on the challenge test set, ranked 1st on the leaderboard. All baseline codes and models are available on [https://github.com/m-bain/whisperX](https://github.com/m-bain/whisperX). Jaesung Huh, Max Bain, Andrew Zisserman Visual Geometry Group, University of Oxford, UK ## 1 Introduction Speech recognition has been a fundamental challenge in the field of audio processing, aiming to convert speech waveforms into textual representations. In recent years, Deep Neural Networks (DNNs) have significantly advanced the field by improving the performance of speech recognition systems. These improvements have been achieved either through the hybrid DNN-HMM architecture [11, 12] or by leveraging end-to-end models [3, 7, 8]. The availability of web-scale datasets and advancements in semi- or unsupervised learning techniques have further propelled the performance of speech recognisers to new heights. Notably, Whisper [14] shows that even a simple encoder-decoder architecture could generalise well training with 680,000 hours of data. However, it is worth noting that Whisper's input window is limited to audio segments of only 30 seconds. As a result, it faces challenges when transcribing longer audio files. Additionally, due to its sequential decoding approach, Whisper is susceptible to issues such as hallucinations or repetitive outputs. These challenges are similar to those encountered in auto-regressive language generation tasks. _WhisperX_[4] proposes a method to improve both the accuracy and efficiency of Whisper when transcribing long audio. It uses a voice activity detection model to pre-segment the input audio to run Whisper with a cut&merge scheme, allowing long-form audio to be transcribed in parallel with batched transcription of the pre-segmented audio chunks. It also conducts forced phoneme alignment using an off-the-shelf model such as Wav2Vec2 [3] to generate word-level timestamps required by the EGO4D transcription challenge. This report investigates how _WhisperX_ performs on the EGO4D speech transcription dataset. EGO4D challenge dataset presents unique difficulties on two fronts. Firstly, unlike other widely used speech datasets [6, 13], it comprises audio recordings captured in real-world scenarios with diverse types of background noise. Secondly, the audio files in the EGO4D dataset are recorded using a microphone positioned on the wearer's head-mounted camera. The frequent movements of the wearer introduce variations in the audio amplitude, making the transcription process more difficult. Our model achieved a Word Error Rate (WER) of 56.0% on the challenge test set, showing a substantial improvement over the baseline results. Additionally, we show the significance of text normalisation in achieving favourable WER outcomes. ## 2 Method Here, we present our model _WhisperX_ and the text normalisation methods we've used for submission. Please refer to the original paper for more details [4]. ### WhisperX WhisperX is a time-accurate speech recognition system enabling within-audio parallelised transcription. 
It employs several pre-processing steps. The input audio is first segmented with Voice Activity Detection (VAD) model, divided into a set of voice regions. These regions are then cut & merged into approximately 30-second input chunks. The paper shows that this VAD Cut & Merge preprocessing reduces hallucination and repetition and enables within-audio batched transcription. The resulting chunks are then: (i) transcribed in parallel with Whisper, and (ii) aligned with a phoneme recognition model to generate precise timestamps for each word. In our submission, we use pyannote [5] VAD model and Wav2Vec2 [3] fine-tuned with 960 hours of Librispeech [13] for phoneme alignment. Unless specified, we follow the default settings outlined in the original paper [4]. ### Text normaliser Although the challenge evaluation script normalises the submission results using the English global mapping file from the Kaldi repository [1], additional post-processing steps are essential to improve our performance. For example, WhisperX normally outputs the number as a form of integer or float, but the ground truth in the validation set transcribes the number into spoken forms. (e.g. 1 to 'one'). For this reason, we use two text normalisers, Whisper text normaliser [14] and NeMo text normaliser [15]. After obtaining the initial transcript from _WhisperX_, we utilise the text normaliser provided in Whisper's original repository [9], except keeping the interjections such as 'hmm' or 'oh'. These interjections are not ignored in the validation set, whereas the original Whisper normaliser does overlook them. We also observe that Whisper normaliser converts numbers from words to digits, whereas the numbers in validation set are represented as words. Therefore, we employ the NeMo text normaliser to process the output of the Whisper normaliser, converting the numbers into their corresponding word forms. This significantly improves the WER in both the validation and the test set, shown in Table 1 and 2. ## 3 Result Table 1 shows the result on the validation set with different versions of Whisper and WhisperX. We can see that using the text normaliser results in a significant improvement on the WER. WhisperX also performs better than Whisper in general due to reduced hallucinations and repetitions. Interestingly, medium.en performs best on the validation set. The medium.en model assumes that the language spoken in the utterance is purely English, whereas the large-v2 model checks the first 30 seconds of the audio to determine the language spoken in the whole conversation, and assumes this when making a transcript, which may lead to non-English results. In fact, we notice that out of 50 WhisperX outputs in the val set, 13 are transcribed with non-English. However, all the audio files in the test set are detected as English, so we decide to use large-v2 for its greater capacity and generalisation. Table 2 shows the result on the challenge test set. We could see using normaliser results in significant improvement in WER (73.3 to 56.0). ### Qualitative Examples Figure 1 shows some of the qualitative examples of our method along with the ground truth. We show the project with the best WER (0d4efcc9-a336-46f9-a1db-0623b5be2869) and the worst WER (f0cb79ef-c081-4049-85ef-2623e02c9589) in the validation set. In the worst example, the speakers speak a non-English language in the middle of the conversation (NON-ENGLISH in Figure 1), which is not transcribed in the ground truth. 
However, our model assumes that English is being spoken, resulting in a hallucination that causes a huge insertion error.

\begin{table} \begin{tabular}{l l c c} \hline \hline Model & Version & Before normaliser & After normaliser \\ \hline \multirow{3}{*}{Whisper} & base.en & 74.4 & 68.2 \\ & medium.en & 72.0 & 65.6 \\ & large-v2 & 75.3 & 73.3 \\ \hline \multirow{3}{*}{WhisperX} & base.en & 73.8 & 71.0 \\ & medium.en & 67.0 & 62.0 \\ & large-v2 & 73.7 & 73.3 \\ \hline \hline \end{tabular} \end{table} Table 1: Word error rate (%) on validation set. The lower is better.

\begin{table} \begin{tabular}{l c c} \hline \hline Version & Normaliser & WER \\ \hline \multirow{2}{*}{large-v2} & ✗ & 73.3 \\ & ✓ & **56.0** \\ \hline \hline \end{tabular} \end{table} Table 2: Word error rate (%) on test set. The lower is better.

Figure 1: Qualitative examples. The figure shows the first few lines of results from the audio files which produce the best WER (top) and the worst WER (bottom). **GT**: ground truth, **PRED**: our model’s prediction.

## 4 Conclusion and Future Work

Here we report our results using WhisperX and text normalisation in the EGO4D AV transcription challenge. All the code and models we've used are publicly available at [https://github.com/m-bain/whisperX](https://github.com/m-bain/whisperX). Note that our method does not use visual streams, which has been shown to be helpful in recent works [2, 10]. Also, as shown in our qualitative example, automatic language detection + multilingual speech recognition could be helpful to improve speech recognition performance.
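As a rough illustration of the two-stage text normalisation described in Section 2.2, the snippet below chains the Whisper English normaliser with the NeMo text normaliser. The package and class names are the ones published with openai-whisper and nemo_text_processing, but constructor arguments can differ across versions, and the authors' modification of keeping interjections is not replicated here, so treat this as an assumption-laden sketch rather than the submission code.

```python
# Illustrative two-stage normalisation (Section 2.2): Whisper's English normaliser
# followed by NeMo's text normaliser, which spells numbers back out as words.
# Package/class names follow openai-whisper and nemo_text_processing; exact
# constructor arguments may vary between versions (treat them as assumptions).
from whisper.normalizers import EnglishTextNormalizer
from nemo_text_processing.text_normalization.normalize import Normalizer

whisper_norm = EnglishTextNormalizer()                 # note: unlike the paper's variant,
                                                       # the stock normaliser drops interjections
nemo_norm = Normalizer(input_case="cased", lang="en")  # converts digits to spoken forms

def normalise(text: str) -> str:
    text = whisper_norm(text)                          # casing, punctuation, digits, ...
    return nemo_norm.normalize(text, verbose=False)    # e.g. turns "3" into "three"

print(normalise("We counted 25 ducks at 3 pm."))
```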
2307.06335
Neural Free-Viewpoint Relighting for Glossy Indirect Illumination
Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics.
Nithin Raghavan, Yan Xiao, Kai-En Lin, Tiancheng Sun, Sai Bi, Zexiang Xu, Tzu-Mao Li, Ravi Ramamoorthi
2023-07-12T17:56:09Z
http://arxiv.org/abs/2307.06335v1
# Neural Free-Viewpoint Relighting for Glossy Indirect Illumination ###### Abstract Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics. + Footnote †: journal: Computer Graphics and John Wiley & Sons Ltd

## 1 Introduction

Interactive rendering of scenes with complex global illumination effects remains a long-standing challenge in computer graphics. Precomputed Radiance Transfer (PRT) [17], which enables interactive relighting by precomputing the light transport of a static scene, remains an attractive solution. However, the practical impact of PRT has largely been limited to low-frequency spherical harmonic methods. All-frequency methods using Haar wavelets were proposed to address this shortcoming, but required substantially larger data storage, and were therefore limited to fixed viewpoint [21], triple products for direct lighting only [21] or lower-frequency BRDF in-out factorizations [16, 22]. Obtaining true all-frequency relighting with changing view-dependent glossy global illumination effects requires precomputing, storing and rendering with a high-resolution 6D light transport tensor for spatial, light and view variation, which has remained intractable because of the exponential growth in data size with dimensionality. With the advent of deep learning and implicit neural representations, we have a new mathematical toolbox of function approximators that can be used to revisit this challenge. Indeed, work on neural radiance fields [23, 24] showed that high-dimensional spatio-angular radiance functions can be learned by a simple multi-layer perceptron (MLP), and these ideas have been applied to directly solve the rendering equation with neural function representations [15]. However, simply approximating the light transport matrix in a neural basis is insufficient for PRT, since one needs to compute light transport integrals in real-time as is done in spherical harmonics or wavelets.
In this paper, we leverage the seminal early PRT work, modern MLP-based function approximators and recent rendering advances to tackle these problems. We focus on indirect lighting, including glossy view-dependent global illumination. Several approaches to real-time direct lighting exist, including the original triple product formulation [21], and ReSTIR [2]. We leverage real-time raytracing in OptiX on modern RTX graphics cards [22, 23] with importance sampling of the environment map and Monte Carlo denoising [11] in OptiX [21]. However, such a direct path-tracing approach is still not real-time for complex light transport paths involving multi-bounce glossy indirect reflections or caustic patterns. Our major technical contributions and design decisions include:

_Haar Wavelet Representation:_ As in the original wavelet-based PRT algorithms, we seek to project the lighting and light transport into Haar wavelets, while keeping a small number (typically 64) of the most important lighting coefficients. This enables a real-time wavelet dot product and projection of the environment map as in previous work, and differs from recent neural PRT approaches [14, 15], which require separate neural layers to compute dot products within neural functions. While Rainer et al.'s method [14] is suitable for largely diffuse scenes, the quality of indirect view-dependent effects is often less accurate. Their neural approximation of the linear dot product can also lead to a tonal shift of the image. By working directly with wavelets, our approach better preserves high-frequency effects such as glossy reflections, and it has the theoretical benefit of remaining grounded in linear function spaces (see Fig. 1 and quantitative comparisons in Table 1).

_Light Transport Representation:_ The key challenge is the representation of the light transport coefficient for a given view, spatial location and wavelet index. For direct lighting, the 6D light transport can be factorized into a product of 4D view-independent visibility and 4D BRDF functions, with wavelet coefficients of the product computed using triple products [13]. However, it is not possible to extend this formulation to global illumination. We make two important modifications, enabled by modern MLP-based learning algorithms. First, instead of visibility, we learn a feature vector parameterized by spatial location and wavelet index. To enable compact storage and fast training and evaluation, we decompose the feature field with tensor factorization [12], where the 3D spatial component is represented using a multiresolution hash grid [15]. To our knowledge, this is the first method to combine the use of tensor factorization and multiresolution hash grids. Finally, we use a small MLP that takes as input the feature vector, reflection direction, normal, and BRDF parameters, and outputs the transport coefficients. The MLP and feature field tensors are all trained on images of the scene under different views and environment maps.

_Real-Time Rendering:_ We demonstrate real-time rendering with precomputed light transport, including glossy effects with changing lighting and view. The size of our final representation is compact (113 MB for the scene in Fig. 1), significantly smaller than an explicit 6D representation, and even smaller than early wavelet-based relighting methods without view-dependence [13]. We believe our work addresses a long unresolved challenge in PRT methods, to enable high-frequency lighting and viewpoint variation with global illumination (see Fig. 1, rendered at 24fps), while giving a new capability to real-time rendering.
Figure 1: We develop a precomputed radiance transfer (PRT) method, based on a hybrid neural-wavelet representation. Our method enables high-frequency relighting with changing view and glossy indirect illumination. _Left:_ Indirect illumination (which our method focuses on) rendered at 24 FPS on an RTX 4090 with our system, using 64 Haar wavelets for the environment map and our learned MLP light transport. _Middle:_ Comparison to Neural PRT [1] and ground truth. Neural PRT does not handle high-frequency view-dependent effects as well as our method (notice the missing glossy reflections pointed out by the arrows), and has a slight tonal shift on the stove and the pots. _Right:_ The rendering combined with direct lighting, and different lighting environments and views of the same scene rendered in real-time.

## 2 Related Work

PRT research has always relied on new mathematical representations beyond spherical harmonics and wavelets, such as zonal harmonics [16], clustered principal components (CPCA) [17], spherical Gaussians with tensor approximation [18], and von-Mises Fisher approximations of the transfer function [19]. Our work can be seen as a natural progression along this line involving MLP-based neural function approximators. We limit ourselves to the standard PRT setting of static scenes with distant environment map illumination, and do not consider near-field area sources [15], or dynamic objects [2]. We are distinct from direct-to-indirect transfer methods [1, 1], which cannot easily handle complex view-dependent global illumination. We do take inspiration from them in handling direct lighting separately. We refer readers to the comprehensive survey by Ramamoorthi [15], which points out the unsolved nature of all-frequency relighting with changing viewpoint and glossy objects. They also note that triple products [16] are limited by the inability to support spatial compression (CPCA), while BRDF factorization methods [17, 18] can require more than a hundred terms for high-frequency materials [19]. Ren et al. [19] first introduced the use of neural networks to regress global illumination and image-based relighting. In contrast, we focus on the classic PRT problem with environment maps, and introduce a novel light transport representation. Most recently, Rainer et al. [14] introduced a neural PRT solution with diffuse-specular separation. They do not directly use wavelets, unlike our method, but use a convolutional neural network to extract lighting features. In contrast, our method is a novel hybrid using neural networks to predict Haar wavelet transport coefficients, and we demonstrate better glossy effects in our results, and better quantitative metrics (see Table 1 in results). Xu et al. [18] introduce lightweight neural basis functions, and neural networks for double and triple product integrals. As with most neural bases, there is no guarantee of orthonormality and a separate network is needed for the dot products. In contrast, we leverage standard orthonormality and approximation with the most significant coefficients by performing dot products in wavelets, while using neural networks only to represent wavelet transport coefficients. Moreover, Xu et al. only demonstrate fixed view, and the inherent limitations of triple product integrals require a restriction to direct lighting.
Our work also relates to research on neural materials and layering [16] and recent efforts in acquisition of light transport from real scenes [20], but we have very different goals. We acknowledge the significant recent progress in real-time path tracing and denoising [21, 22] without the need for any precomputation. A comprehensive discussion of these methods is out of scope, and they are largely orthogonal to our PRT-based approach. We do note that they are usually still limited in capturing complex multi-bounce light transport like glossy reflections at the low sample counts required for real-time applications. We do leverage this research by denoising the direct lighting. Although our PRT indirect renderings are of high quality, and not affected by Monte Carlo noise in the traditional sense, we do observe a small benefit from denoising, see Table 1 and Fig. 8.

## 3 Overview

We now provide a brief overview of our method. The light transport equation is given by, \[B(\mathbf{x},\mathbf{\omega}_{o})=\int_{\Omega}T(\mathbf{x},\mathbf{\omega},\mathbf{\omega}_{o})L(\mathbf{\omega})\,d\mathbf{\omega}, \tag{1}\] where \(B\) is the outgoing radiance we seek, as a function of surface position \(\mathbf{x}\), and outgoing direction \(\mathbf{\omega}_{o}\). It is given by an integral of the environment map lighting \(L\), a function of incident direction \(\mathbf{\omega}\), multiplied by the light transport function \(T\), which is a function of spatial location \(\mathbf{x}\) and incident and outgoing angles \((\mathbf{\omega},\mathbf{\omega}_{o})\). For our purposes \(T\) will represent only the global illumination component, with direct lighting computed separately. In PRT, we precompute the light transport \(T\) and dynamically change the lighting \(L\) and view \(\mathbf{\omega}_{o}\). We follow previous PRT methods by projecting the lighting (at run-time) and transport (precomputed) into a suitable basis, Haar wavelets on cubemap faces, as in previous work [16] \[L(\mathbf{\omega}) = \sum_{j}L_{j}\Psi_{j}(\mathbf{\omega}) \tag{2}\] \[T(\mathbf{x},\mathbf{\omega},\mathbf{\omega}_{o}) = \sum_{k}T_{k}(\mathbf{x},\mathbf{\omega}_{o})\Psi_{k}(\mathbf{\omega}),\] where \(\Psi_{k}\) are the basis functions indexed by \(k\), and \(L_{k}\) and \(T_{k}\) are the lighting and transport coefficients. The basis expansion in \(T\) is only over the incident direction \(\mathbf{\omega}\) which is being integrated. We achieve real-time rendering simply by taking the dot-product, \[B(\mathbf{x},\mathbf{\omega}_{o}) = \sum_{j}\sum_{k}L_{j}T_{k}(\mathbf{x},\mathbf{\omega}_{o})\int_{\Omega}\Psi_{j}(\mathbf{\omega})\Psi_{k}(\mathbf{\omega})\,d\mathbf{\omega} \tag{3}\] \[= \sum_{k\in K}L_{k}T_{k}(\mathbf{x},\mathbf{\omega}_{o})\] \[= \mathbf{L}\cdot\mathbf{T}(\mathbf{x},\mathbf{\omega}_{o}),\] where \(\mathbf{L}\) and \(\mathbf{T}\) represent vectors of all coefficients \(k\), and the integral simplifies by the orthonormality of basis functions. This simplicity and the resulting practicality for real-time rendering are not possible when using a (non-orthonormal) neural basis as in earlier work [14, 18]. These works therefore require a separate, more complex network to perform approximate integration/dot products. Efficiency in the summation or dot-product is obtained by considering only a set \(K\) of the largest wavelet coefficients in the lighting (we typically use 64); this is indicated in the second line above.
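To make Equations 2 and 3 concrete, the sketch below performs a non-standard orthonormal Haar decomposition of a single cubemap face and evaluates the truncated dot product of Equation 3. This is our illustration (single face, single channel, random data) and omits the six-face bookkeeping and the solid-angle handling discussed later.

```python
import numpy as np

def haar_analysis_1d(v):
    """One orthonormal Haar analysis step along the last axis (even length assumed)."""
    a, b = v[..., 0::2], v[..., 1::2]
    return np.concatenate([a + b, a - b], axis=-1) / np.sqrt(2.0)

def nonstandard_haar_2d(face):
    """Non-standard (pyramid) 2D Haar decomposition of one square, power-of-two cubemap face."""
    coeffs = face.astype(np.float64).copy()
    size = coeffs.shape[0]
    while size > 1:
        block = coeffs[:size, :size]
        block = haar_analysis_1d(block)        # rows
        block = haar_analysis_1d(block.T).T    # columns
        coeffs[:size, :size] = block
        size //= 2
    return coeffs

# Parseval check: with an orthonormal basis the wavelet dot product equals the pixel-domain one.
L_img, T_img = np.random.rand(64, 64), np.random.rand(64, 64)
Lw, Tw = nonstandard_haar_2d(L_img), nonstandard_haar_2d(T_img)
assert np.allclose((Lw * Tw).sum(), (L_img * T_img).sum())

def relight(Lw, Tw, K=64):
    """Eq. 3: B ~ sum_{k in K} L_k T_k over the K largest-magnitude lighting coefficients."""
    L, T = Lw.ravel(), Tw.ravel()
    top = np.argsort(np.abs(L))[-K:]
    return float(L[top] @ T[top])

print(relight(Lw, Tw))   # truncated estimate of the full dot product checked above
```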
The entire transport \(T\) must still be precomputed, but only the coefficients in \(K\) will be used (this set can change at each frame with the lighting). It remains to compute and represent \(T\) and \(T_{k}\). As motivation, we first review the triple product approach used for direct lighting [16]. In that case, the transport is simply the point-wise product of (view-independent) visibility \(V\) and cosine-weighted BRDF \(\rho\), with wavelet coefficients computed using triple products, \[T^{d}(\mathbf{x},\mathbf{\omega},\mathbf{\omega}_{o}) = V(\mathbf{x},\mathbf{\omega})\rho(\mathbf{\omega},\mathbf{\omega}_{o})\] \[T^{d}_{k}(\mathbf{x},\mathbf{\omega}_{o}) = \sum_{i}\sum_{j}C_{ijk}V_{i}(\mathbf{x})\rho_{j}(\mathbf{\omega}_{o})\] \[B^{d}(\mathbf{x},\mathbf{\omega}_{o})=\sum_{k}L_{k}T^{d}_{k}(\mathbf{x},\mathbf{\omega}_{o}) = \sum_{i}\sum_{j}\sum_{k}C_{ijk}V_{i}(\mathbf{x})\rho_{j}(\mathbf{\omega}_{o})L_{k},\] where \(C_{ijk}\) are the tripling coefficients and we use the superscript \(d\) to specify this is for direct lighting only (these equations are not used in our system; they are for illustration and motivation only). Note that the original triple product method directly used the integration with the lighting (last line above) without explicitly forming the transport coefficient above, but this formulation is equivalent. For global illumination, no such simple form exists and we will instead represent \(T_{k}(\mathbf{x},\mathbf{\omega}_{o})\) by a neural network. However, we are inspired by the formulation above and modify it in two key ways. First, as there is no closed-form expression for the convolution of visibility terms for an arbitrary number of ray bounces, we replace the visibility in the above formulation with a view-independent general feature vector, which is a function of output wavelet coefficient \(k\) and spatial position \(\mathbf{x}\). This promotes a compact factorization of light transport that allows the network to learn these terms. Second, we replace the simple multiplication of visibility and BRDF (and related triple product wavelet formulation) by a small multi-layer perceptron (MLP) that takes as input the feature vector, surface normal, reflected direction and BRDF parameters (diffuse and specular coefficients, roughness) and outputs the transport coefficient \(T_{k}\). We provide the mathematical details in the next section.

## 4 Mathematical Framework

We now present a hybrid wavelet-neural framework, where transport is computed in the wavelet basis as in the classical works, but transport coefficients are determined by a neural network. Regression directly in the wavelet basis has several advantages. First, it is well-established that the discrete transport operators are sparse in the wavelet domain, as most of the frequencies are concentrated in relatively few entries. This makes the problem of memorizing the light transport for a particular scene tractable. Second, we can compute the rendering equation directly, avoiding the need for low-frequency approximations or using neural networks as renderers. This allows for both view and lighting variations, enabling full generalization for complex light transport effects.
_Representing Light Transport:_ Specifically, we represent the transport coefficients as, \[T_{k}(\mathbf{x},\mathbf{\omega}_{o})=f\left(\mathbf{h}_{k}(\mathbf{x}),\mathbf{\omega}_{r}(\mathbf{x}),\mathbf{n}(\mathbf{x}),\mathbf{\rho}(\mathbf{x});\mathbf{\Theta}\right), \tag{4}\] where \(\mathbf{h}_{k}\) is a feature vector as a function of spatial coordinate \(\mathbf{x}\) and wavelet index \(k\). The feature field \(\mathbf{h}\) in essence captures how a wavelet light \(k\) scatters with global illumination when encountering scene point \(\mathbf{x}\). \(f\) is a small multilayer perceptron (MLP) network that decodes the feature vector \(\mathbf{h}_{k}\) into the appropriate light transport wavelet coefficient \(T_{k}\). Additional inputs to the MLP are the reflected direction \(\mathbf{\omega}_{r}\), i.e., the reflection of the outgoing direction \(\mathbf{\omega}_{o}\) about the surface normal \(\mathbf{n}(\mathbf{x})\), all in global coordinates. It is well known that using \(\mathbf{\omega}_{r}\) instead of \(\mathbf{\omega}_{o}\) enables more coherent functions that are easier to optimize/regress for [17, 18]. We also pass in the BRDF parameters, which we denote as a vector \(\mathbf{\rho}\), which could be spatially-varying. We adopt a standard GGX/Trowbridge-Reitz reflection model [16, 15], with parameters \(\mathbf{\rho}\) including the diffuse and specular colors \(\mathbf{k}_{d}\) and \(\mathbf{k}_{s}\) and roughness \(\sigma\). \(\mathbf{\Theta}\) denotes the parameters of the MLP.

_Feature Field Representation:_ We have so far considered the feature vector \(\mathbf{h}_{k}(\mathbf{x})\) for a given wavelet index \(k\) and spatial point \(\mathbf{x}\). For compact representation, it is convenient to explicitly write the feature field \(\mathbf{h}\) as a tensor \(H\) with explicit parameters/indices (we use notation \([\,]\) for accessing feature grids and \((\,)\) for functions), \[\mathbf{h}\equiv H[\mathbf{x},\mathbf{k},l], \tag{5}\] where spatial location is designated by \(\mathbf{x}\) as before, a 3D vector. It is convenient for later representation to view the wavelet index as a 2D vector \(\mathbf{k}\), corresponding to position on a cubemap after non-standard Haar wavelet decomposition. Finally, \(l\) is the 1D index of the feature vector (typically we use a vector of length 64). Note that explicitly representing \(H\) can be prohibitive, given it is effectively a 6D tensor. Therefore, we develop a compact tensor factorization, inspired by previous tensor approximations in the PRT and NeRF literature [14, 15]. This approach also has similarities to PCA matrix decompositions, although we use a multiresolution hash grid [14] for further compression rather than clustering as in previous PRT works [13, 15, 16]. Specifically, we use a CP tensor decomposition along the three modes (spatial \(\mathbf{x}\), wavelet \(\mathbf{k}\), feature \(l\)) with \(M\) terms to write, \[H[\mathbf{x},\mathbf{k},l]\approx\sum_{m=1}^{M}S_{m}[\mathbf{x}]\,W_{m}[\mathbf{k}]\,U_{m}[l]. \tag{6}\] In the equation above, \(S_{m}[\mathbf{x}]\) is itself a 3D spatial feature grid depending on spatial coordinate \(\mathbf{x}\), with trilinear interpolation to obtain the value at any \(\mathbf{x}\). We represent \(S_{m}\) as a three-dimensional multiresolution hash encoding [14] for ease of training and evaluation. This differs from most previous works that store information on scene vertices.
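A schematic PyTorch version of this factorization together with the decoder MLP \(f\) of Equation 4 might look as follows. This is our sketch, not the authors' Tiny-CUDA-NN implementation: the multiresolution hash encoding of \(S_m\) is replaced by a plain MLP, the spherical-harmonic encoding of \(\mathbf{\omega}_r\) is omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class FactorizedTransportField(nn.Module):
    """h_k(x) = sum_m S_m[x] * W_m[k] * U_m (Eq. 6), followed by the decoder MLP f (Eq. 4)."""
    def __init__(self, M=64, feat_dim=64, cube_res=64, hidden=128):
        super().__init__()
        # Stand-in for the multiresolution hash encoding of S_m[x] (a plain MLP here).
        self.spatial = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, M))
        # W_m[k]: an M-channel "texture" with one entry per cubemap wavelet slot (6 faces).
        self.wavelet = nn.Embedding(6 * cube_res * cube_res, M)
        self.U = nn.Linear(M, feat_dim, bias=False)           # learnable matrix U (square if M == feat_dim)
        # MLP f: feature + reflected dir + normal + BRDF params (k_d, k_s, roughness) -> RGB T_k
        self.f = nn.Sequential(
            nn.Linear(feat_dim + 3 + 3 + 7, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, x, k_index, omega_r, n, brdf):
        s = self.spatial(x)                                   # S_m[x], shape (B, M)
        w = self.wavelet(k_index)                             # W_m[k], shape (B, M)
        h = self.U(s * w)                                     # feature vector h_k(x)
        return self.f(torch.cat([h, omega_r, n, brdf], dim=-1))   # transport coefficient T_k

# Example call with a batch of 8 hypothetical shading points and wavelet indices.
model = FactorizedTransportField()
T_k = model(torch.rand(8, 3), torch.randint(0, 6 * 64 * 64, (8,)),
            torch.rand(8, 3), torch.rand(8, 3), torch.rand(8, 7))
```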
In our experiments, we found that such a volumetric representation results in fewer view-dependent artifacts than a scene vertex representation (see Table 6) or a learned neural texture (single-resolution hash grid), and is easier to implement and compress, since parameterization of geometry remains a difficult problem. Note that the rendering costs of volumetric methods are independent of the level of detail of the scene; this has been exploited in previous works involving neural scene-to-volume computation [13]. \(W_{m}[\mathbf{k}]\) is a two-dimensional grid that stores a feature vector for each wavelet. Since the environment map is represented with a cubemap, wavelets and \(W_{m}\) can also be represented as a cubemap. Finally, \(U_{m}[l]\) represents the "feature" dimension, which is a 1D vector for each \(m\), where \(\mathbf{U}\) itself is simply a learnable matrix. Given the tensor decomposition of the feature field, we can evaluate the feature vector in Equation 4 at runtime as follows, \[\mathbf{h}_{k}(\mathbf{x})=\sum_{m=1}^{M}S_{m}[\mathbf{x}]\,W_{m}[\mathbf{k}]\,\mathbf{U}_{m}, \tag{7}\] where \(\mathbf{U}_{m}\) denotes the vector corresponding to all \(l\) in \(U_{m}[l]\).

_High-Level Rendering Algorithm:_ Algorithm 1 shows the pseudocode of our global illumination rendering algorithm. We first decompose the environment map into wavelets (Equation 3, line 2) and pick the set of largest wavelet coefficients \(K\) (line 3). The feature field \(\mathbf{h}\equiv\{S_{m},W_{m},\mathbf{U}_{m}\}\) is stored and learned compactly using a tensor decomposition and multiresolution hash grid, as discussed above. For a given pixel, we use rasterization or raytracing to find the primary hit at a pixel, with spatial location \(\mathbf{x}\), normal \(\mathbf{n}\), outgoing/reflected directions \(\mathbf{\omega}_{o}\) and \(\mathbf{\omega}_{r}\) and BRDF parameters \(\mathbf{\rho}\) (line 5). Now, for each wavelet index \(k\in K\), we determine the feature vector \(\mathbf{h}_{k}(\mathbf{x})\) (see Equation 7, line 7). We now evaluate the MLP \(f(\cdot)\) in Equation 4 (line 8) to obtain the transport coefficient \(T_{k}\). Once the vector of all transport coefficients \(T_{k}\) with \(k\in K\) is obtained, we determine the final color by performing the dot product with lighting in Equation 3 (line 10). We also add in the denoised direct lighting, computed separately (line 11).
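In code form, the per-frame procedure just described amounts to the following sketch. It uses simplified Python types rather than the authors' CUDA renderer: `transport_fn` stands in for the learned feature field plus the MLP of Equation 4, and `direct` holds the separately computed, denoised direct lighting.

```python
import numpy as np

def render_indirect_frame(L_coeffs, gbuffer, transport_fn, direct, K=64):
    """One frame of indirect rendering. L_coeffs: flattened Haar coefficients of the
    environment map; gbuffer: per-pixel dicts with position, reflected direction, normal
    and BRDF parameters; transport_fn(x, k, omega_r, n, brdf) -> RGB transport coefficient."""
    top_k = np.argsort(np.abs(L_coeffs))[-K:]                   # keep the K largest lighting wavelets
    out = np.zeros((len(gbuffer), 3))
    for i, px in enumerate(gbuffer):
        T = np.stack([transport_fn(px["x"], k, px["omega_r"], px["n"], px["brdf"])
                      for k in top_k])                          # T_k(x, omega_o) for all k in K
        out[i] = L_coeffs[top_k] @ T + direct[i]                # Eq. 3 dot product plus direct term
    return out
```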
## 5 Implementation and Algorithm

We now proceed to discuss the implementation and algorithm, based on the mathematical framework in the previous section.

_Precomputation: Rendering._ As with all PRT algorithms, there is a precomputation step. In our case, this involves not only offline rendering of the scene of interest, but also learning the relevant light transport parameters. We use a GPU-accelerated path tracer in OptiX with denoising to produce \(512\times 512\), 1024 samples per pixel ground truth images for training. Each image takes 1-3 seconds to render and is not interactive, underscoring the need for our real-time PRT algorithm. The image resolution for real-time rendering can be changed arbitrarily at run-time, and we use higher-resolution \(800\times 600\) renders in some of our results. For a given scene, we render approximately 4000 images under different environment maps and viewing conditions. We use 1000 indoor cubemaps and rotate each by 120 and 240 degrees to obtain the 3000 training lighting conditions. We only select indoor ones instead of outdoor ones since nonlinear wavelet selection on those tends to result in a larger quantity of meaningful wavelet coefficients [15]. We generate 2000 camera locations using trajectories placed in the scene, and for each camera, we randomly select 2 environment maps from our training pool. We use one-sample-per-pixel raycasting to obtain the geometry parameters, reflection direction and BRDF parameters for these training views. This precomputation step takes about 1-3 hours. Note that the number of images is almost an order of magnitude less than the number needed in early wavelet methods, even for fixed view [15]. We found that for highly specular areas, the algorithm requires multiple samples of view-dependent effects under different lighting conditions. For simple scenes (Four Animals) where the camera can see almost every object at a given time, we place the cameras on predetermined spherical trajectories. For scenes that have many occluded areas (Kitchen and Armadillo) we add an additional helical trajectory.

_Precomputation: Learning Light Transport._ The trainable parameters in our formulation are the feature grids \(\{S_{m}\}\), \(\{W_{m}\}\) and \(\{\mathbf{U}_{m}\}\) as well as the parameters for the MLP \(f\), which we denote as \(\mathbf{\Theta}\). In particular, \(\{S_{m}\}\) is represented as a multiresolution hash grid, which concatenates features obtained by trilinear interpolation across resolutions. Though past PRT methods have generally stored the feature vectors representing exitant radiance densely along the vertices of a mesh, we found that using such a volumetric representation significantly improves performance (see Table 4). \(\{W_{m}\}\) is represented as a neural texture at the same resolution as the cubemap, \(6\cdot 64\cdot 64\). We set the number of terms \(M\) for both feature grids to be 64, which we found gives the best tradeoff between accuracy and speed, and we also set the feature dimension of the hash-grid to be 64 (so \(\mathbf{U}\) becomes a square matrix) as we found reducing this value does not meaningfully reduce the computation time. For ease of implementation, the learnable matrix \(\mathbf{U}\) is represented as a single-layer fully-fused MLP with no bias. \(f\) is implemented as a two-layer fully-fused MLP with width 128. The total size of our precomputed data is about 113 MB, the bulk of which stores the 3D multiresolution hashgrid representing \(\{S_{m}\}\). This is substantially less than previous methods [15, 15] even though we are considering full 6D indirect light transport. Our goal is to optimize \[\{S_{m}\},\{W_{m}\},\{\mathbf{U}_{m}\},\mathbf{\Theta}=\arg\min\,\mathcal{L}\left(I\left(S_{m},W_{m},\mathbf{U}_{m},M,\mathbf{\Theta}\right),I_{0}\right), \tag{8}\] where \(I\) is the image rendered using the procedure in Algorithm 1 and \(I_{0}\) is the ground truth rendering discussed above. At each training step, we randomly select an environment map from the training data, perform the non-standard wavelet decomposition over cubemap faces as in [15] and select 300 wavelets. The choice of 300 is motivated by past findings [15, 15] noting that over 99% \(L^{2}\) accuracy can be obtained by choosing less than 1% of the total wavelets.
We importance sample half of these wavelets via unweighted selection from the environment map, and as the largest entries of the ground-truth wavelet transport matrix are uncorrelated with such a purely top-\(k\) selection, we uniformly sample wavelet indices for the other half to form \(K\) and \(L_{k}\). We found that performing the wavelet transform without premultiplying the environment map entries by their solid angle factors (in effect, allowing the network to learn these) tends to produce better results. We then sample 2048 pixels from the subset of our training data corresponding to this environment map and pass them through our algorithm to obtain the wavelet coefficients \(T_{k}\) corresponding to the indices \(k\), which we multiply with \(L_{k}\) to obtain our final rendering. The network tends to converge much more slowly on the highly view-dependent areas of the scene, so we adopt a specialized importance sampling strategy on these pixels (see Fig. 2). In addition to the geometry buffer, we compute the empirical variances of all the hit points of the scene (stored in half-precision) and the high-frequency regions (obtained by subtracting a low-pass filtered version of the ground-truth indirect illumination from the original image). To deal with moderate-frequency regions we also importance sample based on the product of the specular coefficient times the roughness complement \(k_{s}\cdot(1-\sigma)\), and deal with all other regions via standard uniform sampling. We treat the output of these strategies as a probability distribution and sample 512 pixels from each accordingly. We opt for such an image-based strategy as it is faster than supervision using the full ground-truth light transport \(T\). The latter, which would entail generating a 6D tensor at resolutions of \(6\times 64\times 64\) for lighting and view (as used for cubemaps in [15, 15]), would require over \((6\times 64\times 64)^{2}\approx 6\times 10^{8}\) images. Additionally, experiments showed that even if this tensor were subsampled at multiple views, the resulting convergence of the network was inadequate for getting good results on novel-view specularities. In the future, a more adaptive active exploration approach may be helpful to increase the training time spent on hard-to-learn examples and prevent overfitting on the diffuse parts of the scene [1]. We now compute the loss. Past works have demonstrated that error from applying an \(L_{2}\) loss directly on output HDR images tends to be disproportionately affected by bright regions, so we apply a tonemap to our prediction and the ground-truth rendering before we take the loss. We use the \(\mu\)-law tonemapping [1], \(\text{TM}(x)=\text{sgn}(x)\,\frac{\log(\mu|x|+\epsilon)}{\log(\mu+\epsilon)}\), with \(\mu=10\) and \(\epsilon=1\), and define the loss as \(\mathcal{L}(I,I_{0})=||\text{TM}(I)-\text{TM}(I_{0})||_{2}^{2}\). The extension to negative numbers is required as our network operates directly in the wavelet domain, so our initial network predictions may result in negative colors after multiplication.
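A compact sketch of one plausible reading of the wavelet-index selection and of the tonemapped loss described above is given here (ours, not the training code; n = 300, mu = 10 and epsilon = 1 follow the text).

```python
import torch

def select_training_wavelets(L_coeffs, n=300):
    """Half of the indices come from the largest-magnitude lighting wavelets; the other
    half is sampled uniformly so supervision also covers small coefficients."""
    k_top = torch.topk(L_coeffs.abs(), n // 2).indices
    k_uniform = torch.randint(0, L_coeffs.numel(), (n - n // 2,), device=L_coeffs.device)
    return torch.cat([k_top, k_uniform])

def mu_law(x, mu=10.0, eps=1.0):
    """Signed mu-law tonemap; defined for negative inputs because wavelet-domain
    predictions can produce negative colors before convergence."""
    return torch.sign(x) * torch.log(mu * x.abs() + eps) / torch.log(torch.tensor(mu + eps))

def tonemapped_l2(pred_hdr, gt_hdr):
    return ((mu_law(pred_hdr) - mu_law(gt_hdr)) ** 2).mean()
```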
We discuss the resolutions and encoding of the feature grids and MLPs. For the three-dimensional multiresolution hash grid \(\{S_{m}\}\), we use 32 levels, 2 features per level, base resolution 16, a hash table size of \(2^{19}\) and a per-level scale of 1.3. This takes up the bulk of the total size of our method at 107 MB. While choosing a smaller hash table would result in a smaller model size, we found that it corresponded to a significant decrease in performance; see Table 3 for ablations on different hash table sizes. For the two-dimensional grid \(\{W_{m}\}\), we store \(6\cdot 64\cdot 64\cdot 64\) values as full-precision floats, which takes up 6.1 MB. We represent the learnable matrix \(\mathbf{U}\) as a single-layer neural network with no bias or nonlinearity, taking up 36 KB. The final MLP takes as input the normal, reflection direction, and BRDF parameters and encodes only the reflection direction with spherical harmonics of maximum degree 4. The input roughness \(\sigma\) is additionally mapped to \(\frac{\log(25\sigma+1)}{\log(25+1)}\) for better resolution in areas with low roughness. An evaluation of our encoding scheme can be found in Table 4. This final MLP has 128 neurons and 2 hidden layers with ReLU activations, resulting in a size of 124 KB. Further significant compression is possible using dithering and aggressive quantization of values (we use floats). The total training time is typically around 16 hours on an NVIDIA RTX 3090Ti.

_Real-Time Rendering._ Our renderer is implemented in C++ with OptiX, CUDA, and Tiny-CUDA-NN [1]. As noted earlier, direct lighting is computed separately by importance sampling the environment map and denoising (other approaches could also be used). We compute indirect lighting per Algorithm 1. Tiny-CUDA-NN is used for our neural rendering step to obtain the coefficients \(T_{k}\) by evaluating the MLP \(f\), and we have a final CUDA kernel to compute the dot product of the transport coefficients and the environment map wavelet coefficients. In practice, we have found a modest benefit from denoising the indirect lighting as well to avoid wavelet noise, and we therefore apply denoising to the combined direct and indirect (this takes only about 1.5 ms and adds minimal overhead). However, our results remain of high quality even without denoising (see Fig. 8).

## 6 Results

We show results from our system, including screen-captures of the scenes we demonstrate (see video for real-time captures), comparisons to previous work, and an evaluation of some of our parameters. We compare to the best performing model described in Neural PRT [1], which uses diffuse-specular separation.

### Evaluation Methods

We created our evaluation dataset of lighting and views for numerical comparisons using held-out environments and views. We selected 70 unseen indoor and outdoor environment maps. Our evaluation views include 5 hand-picked locations that cover most objects in the scene and roughly 500 locations generated using evaluation trajectories, which consist of helices, circular sweeps, or figure eights. We include videos generated using some of these evaluation trajectories in the supplementary material, and visual results show representative lighting environments and views. Note that PSNR numbers in Figs. 1 and 3 refer to the scene with the specific lighting/viewing configuration shown, and differ slightly from the averages over all configurations reported in Table 1. For all the performance metrics (PSNR, SSIM, and LPIPS) reported in the tables, we show full direct+indirect / indirect only. Figure 2: Sampling strategy used for precomputation/learning of indirect light transport. The left image shows the total sample point distribution.
The points are made up of uniform samples over the image, and concentrations of samples in regions of high view-variance, high-frequency regions, and specular materials. Figure 3: Visual comparisons of our method against Neural PRT [RBRD22]. Our method is able to reconstruct complex indirect illumination effects. For example, see the glossy reflections on the table and the floor in the Kitchen scene (top row), the caustics in the Four Animals scene (middle row), and the color bleeding in the Armadillo scene (bottom row).

### Visual Results

In Figs. 1 and 3, we show example images from three scenes (all these results use denoising on both our method and the Neural PRT comparisons). These are all rendered at 24fps for \(512\times 512\) images and 13fps for \(800\times 600\) images on an NVIDIA RTX 4090. We see a range of glossy indirect reflection effects, including view-dependent caustics (see the Four Animals scene). Capturing these high-frequency global illumination effects has been very challenging in previous PRT algorithms, since full high-resolution 6D lighting and view variation is required. Our video clearly shows smooth motions for changing view and relighting. In Fig. 1, we see the Kitchen scene with glossy indirect reflections on the wall and the table. Our method produces accurate indirect illumination and overall global illumination rendering. In contrast, Neural PRT [12] works well for largely diffuse interreflections, but cannot perform well for high-frequency indirect highlights. Additionally, the top row of Fig. 3 shows a different view of the Kitchen scene. Neural PRT produces an overall browner tone compared to the reference, with a color shift on the stove. It also misses glossy reflections on the table and the floor, making chair legs look flat and unnatural. Note that we retrained Neural PRT on each scene, using the same data as our method. We do not include comparisons with other methods, as traditional non-learning PRT approaches [25, 25] are limited to fixed view or diffuse for double-products and direct lighting only for triple-products [13]. The approaches of Liu et al. [12] and Wang et al. [25] use the in-out BRDF factorization, which is limited to lower-frequency reflections, while Hasan et al.'s method [1] cannot render caustic paths and is similarly restricted to fixed view. The middle row of Fig. 3 shows the Four Animals scene with challenging light transport with glossy reflection and a ring casting view-dependent caustics on a ground plane. These all-frequency view-dependent indirect reflections have historically been difficult for PRT methods, but our method produces fairly accurate results, where Neural PRT produces high-frequency artifacts for the ring caustics and incorrect glossy reflections. Additionally, from the video of the Four Animals scene, we show that our method is also more temporally stable when it comes to rotation of high-frequency effects, while NPRT tends to be smoother with more incorrect shading. Finally, the bottom row of Fig. 3 shows the Armadillo scene. Even in the largely diffuse regions, we still perform better than Neural PRT, since Neural PRT suffers from a color shift on the wall in the left inset and incorrect interreflections as indicated in the right inset. Note the missing edge of the cube and the lack of indirect reflections on the ground from the claws.
The color shift of Neural PRT in the results above is likely due to the fact that Neural PRT does not use an orthonormal basis and has to approximate the linear dot product with a non-linear neural network. To further investigate the behavior on diffuse reflections, we also include an almost entirely Diffuse Kitchen scene (all roughnesses set to 1), shown in Fig. 4. We see that our performance is substantially better, because we do not suffer from color shifts or other network artifacts in computing wavelet dot-products. Finally, Fig. 5 makes a direct comparison of real-time low-sample-count (44 samples per pixel) path tracing with denoising in OptiX to our PRT rendering at equal time. Note that this is a highly favorable case for OptiX since the scene is geometrically simple, enabling a moderate brute-force path tracing sample count. This sample count would be substantially lower in more complex production and game scenes. Nevertheless, indirect reflections from the path tracer miss detail near the toes, head and wings of the animals, which are captured by our PRT rendering. We have also observed greater temporal flickering in real-time path-traced renderings.

### Quantitative Results

Table 1 shows quantitative comparisons of Neural PRT and our method, both with and without denoising. We show results for both the full image and indirect lighting only on the metrics of PSNR, SSIM and LPIPS. The main results on the left of the table are on novel lights and trajectories _far_ from the training data, corresponding to the result figures, and showing the generalization ability. The right of Table 1 shows additional statistics on held-out (_near_) views in training trajectories for both NPRT and our method. For all scenes and metrics, our method has significantly better accuracy than Neural PRT. This is true both in scenes with strong glossy indirect reflections and complex caustics like Kitchen and Four Animals, and even when much of the global illumination is largely diffuse (Armadillo and Diffuse Kitchen). Both NPRT and our method have a small metric drop from training trajectories (near views) to novel ones (far views), and we perform better on all metrics in both cases. This indicates our novel training strategy can generalize to unseen views and lighting conditions. Our indirect PRT method does not have standard Monte Carlo noise, and denoising only provides a small (but important) boost. Even without denoising, we are better than Neural PRT with denoising on the PSNR metric, and comparable on SSIM in most scenes. Neural PRT does show a smaller bump in metrics after denoising than does our method; this is due to the fact that we predict coefficients directly in an orthonormal basis, which can result in noisier predictions in higher frequency regions. However, our method without denoising is also better than Neural PRT without denoising on almost all metrics. Table 2 shows the runtime performance of our method on an RTX 4090. Note that the performance is essentially the same for all scenes, independent of the scene's geometric complexity, because we use a volumetric hash-grid. The rendering time for one frame scales approximately linearly with resolution (\(512\times 512\) is about twice as fast as \(800\times 600\)) and the number of wavelets used (we used 64 wavelets for results in this paper, comparable to the number used in previous work [25, 25], but 32 wavelets may be suitable in lower-frequency scenes, with 128 wavelets needed in very challenging scenes for higher fidelity).
Rendering time remains interactive in all cases with real-time performance of 24fps or higher achieved for lower resolutions or number of wavelets. ### Evaluation We now evaluate various components of our algorithm. Figure 2 shows our adaptive sampling approach to training during the pre-computation phase. We first uniformly select samples on the scene based on the indirect light. We then allocate additional samples in regions of high variance with respect to view, high-frequency regions, and those with high specular coefficients. The total sample distribution is shown in the leftmost image. Table 3 shows an evaluation of different hashmap resolutions on the Four Animals scene. We find empirically that a hash table size of \(2^{19}\) produces the best results, and also has a reasonable storage size. In Table 4, we evaluated different encodings for the MLP. They all performed similarly, but best PSNR was achieved with a spherical harmonic encoding only for \(\omega_{r}\), with no impact on frame rate. Table 5 analyzes the encoding on just \(\omega_{r}\) further on the more specular Four Animals by considering learned feature vectors to en \begin{table} \begin{tabular}{c c c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Metric} & \multirow{2}{*}{Methods} & \multirow{2}{*}{Denoising} & \multicolumn{4}{c||}{**Far Views**} & \multicolumn{4}{c}{**Near Views**} \\ & & & Four Animals & Kitchen & Armadillo & Diffuse Kitchen & Four Animals & Kitchen & Armadillo & Diffuse Kitchen \\ \hline \multirow{5}{*}{PSNR(\(\uparrow\))} & NPRT & ✗ & 28.08/ 24.35 & 33.80/ 31.43 & 34.53/ 30.92 & 43.20/ 39.65 & 28.23/ 24.44 & 35.05/ 32.08 & 34.30/ 30.89 & 43.63/ 40.10 \\ & NPRT & ✗ & 28.53/ 25.09 & 34.56/ 32.45 & 34.74/ 31.53 & 3.45/ 40.45 & 28.73/ 25.19 & 35.85/ 32.88 & 34.52/ 31.53 & 48.33/ 41.16 \\ & Ours & ✗ & 28.53/ 25.16 & 38.53/ 34.23 & 36.18/ 32.45 & 45.82/ 42.24 & 28.77/ 25.38 & 36.71/ 34.93 & 36.59/ 32.68 & 46.18/ 42.68 \\ & Ours & ✓ & **29.62/ 26.21** & **36.59/ 34.31** & **36.48/ **33.17** & **46.27**/ **43.69** & **29.96**/ **26.45** & **37.64**/ **35.59** & **36.94**/ **33.44** & **46.44**/ **43.97** \\ \hline \multirow{5}{*}{SSIM(\(\uparrow\))} & NPRT & ✗ & 0.923/ 0.8408 & 0.957/ 0.8957 & 0.973/ 0.9435 & 0.910/ 0.9540 & 0.9295/ 0.8850 & 0.9478/ 0.9128 & 0.9758/ 0.9483 & 0.9923/ 0.9607 \\ & NPRT & ✓ & 0.9328/ 0.8527 & 0.968/ 0.9165 & 0.9783/ 0.9511 & 0.9913/ 0.9620 & 0.9394/ 0.8656 & 0.9683/ 0.9271 & 0.9799/ 0.9551 & 0.9572/ 0.9685 \\ & Ours & ✗ & 0.9064 / 0.82182 & 0.967/ 0.9170 & 0.9750/ 0.9460 & 0.9945/ 0.9783 & 0.9143/ 0.8374 & 0.9681/ 0.9211 & 0.9773/ 0.9512 & 0.9943/ 0.9765 \\ & **Ours** & ✓ & **0.9390/ 0.8642** & **0.9767/ 0.9414** & **0.9822** **0.9596** & **0.9947** **0.9862** & **0.9441** **0.8751** & **0.9797** **0.9343** & **0.9840**/ **0.9637** & **0.9980**/ **0.9834** \\ \hline \multirow{5}{*}{LPIPS(\(\downarrow\))} & NPRT & ✗ & 0.083/ 0.2094 & 0.0641/ 0.1792 & 0.0296/ 0.0851 & 0.0014/ 0.1009 & 0.0803/ 0.1948 & 0.0051/ 0.1665 & 0.0287/ 0.0844 & 0.0105/ 0.0921 \\ & NPRT & ✓ & 0.0547/ 0.1788 & 0.0280/ 0.0970 & 0.0185/ 0.0559 & 0.0073/ 0.09593 & 0.0496/ 0.1620 & 0.0275/ 0.1021 & 0.0187/ 0.0503 & 0.0071/ 0.0480 \\ \cline{1-1} & Ours & ✗ & 0.1315/ 0.1882 & 0.0498/ 0.1399 & 0.0293/ 0.05617 & 0.0059 / 0.06926 & 0.1202/ 0.1774 & 0.0549/ 0.1408 & 0.0257/ 0.0609 & 0.0063/ 0.0621 \\ \cline{1-1} & **Ours** & ✓ & **0.0489**/ **0.1609** & **0.0161**/ **0.0611** & **0.0121**/ **0.0420** & **0.029**/ **0.0226** & **0.0483**/ **0.1456** & **0.0211**/ **0.0787** & **0.0112**/ **0.0364** & **0.0038**/ 
**0.0277** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparison of our method and Neural PRT (NPRT), both with and without denoising, for full direct+indirect / indirect only. These metrics are evaluated on views and lighting conditions both near and far from the training set. Our method is better on all metrics on all scenes.

code \(\omega_{r}\) on the Four Animals scene. For these learned feature vector experiments, we converted \(\omega_{r}\) to UV coordinates within a differentiable cubemap and learned a hash grid for each face, which we called DCE. To compare this with the SH embedding, we considered a high- and low-frequency hash grid configuration to explore the impact of potentially overfitting to training views. To further explore the contribution of encoding \(\omega_{r}\), we also conducted experiments where the encoding is multiplied with the position and wavelet encodings in the initial CP decomposition phase to ablate its overall impact on the algorithm. In general, these approaches do not perform better than the simple spherical harmonic encoding, which we use for all of our results. A more thorough study on band-limiting the angular component of these neural algorithms may be interesting as future work.

Figure 4: Comparison of our method with NPRT on Diffuse Kitchen evaluated on a red environment map dissimilar to lighting conditions seen during training. In addition to an overall tone shift, the shading on certain objects (such as the pots on the stove) is inaccurate for NPRT, indicating our method is better able to generalize to unseen lighting conditions even for purely diffuse objects.

Figure 5: Comparison of our PRT method against OptiX path-traced renderings with the same rendering time (44 samples per pixel) under indirect lighting. Note the lack of high-frequency details around the head and the toes of the animal figurine in the OptiX render.

Figure 6 visualizes the wavelet statistics of a particular view. While the shape of the distribution we learn is different from the ground truth, most of the energy of these wavelets is nonetheless contained within relatively few entries. Note that this is for the full transport matrix; as in previous work [20], for rendering an image we can make use of the most important wavelet coefficients in the lighting, dramatically reducing the number of wavelets needed to 64 in our case.

Figure 6: We fix a particular view of Kitchen from our far-views dataset and render out the ground truth wavelet transport matrix as in [20]. We compare it to the equivalent transport matrix output via our method by visualizing the histogram of the top 512 wavelets over all the pixels in the final image.

In Table 6, we compare the use of hashgrids for \(S_{m}\) versus storing this information on vertices, using both learned feature vectors and spherical harmonics (maximum degree 8 using the traditional approach [17]), showing the benefit of using the volumetric hashgrid. We do this on the Four Animals scene, as this best showcases the challenges of storing the feature vectors on mesh vertices. We perform barycentric interpolation on the mesh vertices closest to the hit point. In the learned method, we apply a nonlinearity (softplus) to increase the representative capacity of the model. Spherical harmonics perform worse than our method, lacking sharp reflection details and showing ringing and other artifacts on high-frequency light transport effects like caustics. This underscores that spherical harmonics are primarily a low-frequency representation. The learned feature vectors stored on the vertices perform better, but are noisier and still demonstrate an inability to reconstruct caustics. Additionally, they require a well-subdivided mesh, so they would not be invariant to scaling scene complexity. Figure 7 shows a visual comparison.

Figure 7: Ablation study on storing the learned feature vectors on mesh vertices (learned and SH) vs a hashgrid. Spherical harmonics with a maximum degree of 8 stored on mesh vertices [17] are unable to properly represent caustics and the sharp reflections on the horse as expected, and also show some ringing and other artifacts. The learned mesh vertex features, while much better than SH, still cannot properly represent caustics, show a lot more noise in the reflections around the griffin and horse, and require a well-subdivided mesh. The use of the hashgrid solves these issues while being invariant to scaling of the scene.

Figure 8 shows results before (middle) and after (right) denoising, indicating that our initial results are already high quality, but a small amount of wavelet noise can be removed by the standard OptiX denoiser. The left image is a comparison to an equal-time path-traced image without denoising, which is substantially worse. Note that denoising brute-force path tracing does not resolve complex interreflections, as shown in Fig. 5.

Figure 8: The effects of the denoiser on our proposed method. Standard OptiX path-tracing in equal time (left) is very noisy, and denoising does not address all issues (see Fig. 5). In contrast, our PRT rendering (middle) is already high quality without Monte Carlo noise, but does have a small amount of wavelet noise, and applying a denoiser helps improve the results (right) while adding almost no overhead.

### Limitations

Most limitations are inherited from previous PRT algorithms. The results, while significantly higher quality than previous work, are not perfect, since we use only 64 wavelets, and also approximate the transport coefficients. Very high-frequency effects like mirrors are not perfectly reproduced (nor handled in previous techniques), and this can be seen in Figure 9 where we evaluate our method on the PBRT Bathroom scene with the mirror set to 0.05 roughness. For reference, the full table of metrics is listed in Table 7, evaluated on far views only. Some flicker can occasionally be seen in relighting as the selected wavelet coefficients change between environments (we minimize this by using area-weighted selection, which minimizes visual error by quickly resolving the diffuse colors as in [14]). Our optimization/training time for each scene can involve several hours, which is significantly higher than earlier non-learning approaches. Finally, our volumetric hashgrid, while significantly improving quality, does use more space than would a pure MLP or CNN approach, or in some cases a vertex-based method. Neural PRT does have a smaller model size/faster evaluation due to its weights being constrained within a single neural network (it uses only MLPs/CNNs rather than a feature grid). Our contribution is to provide substantially higher quality compared to Neural PRT, while using data sizes significantly lower than previous wavelet-based PRT methods; our hashgrid is substantially more efficient than explicit transport matrix storage in early PRT work, often requiring at least an order of magnitude less storage.
An analogy can be made with NeRF-like models, where the tradeoff is that feature fields can provide higher accuracy at the cost of higher required storage space (still much less than explicitly tabulated representations). An interesting future direction is to quantify the tradeoff between explicit feature fields and implicit methods in the PRT space.

\begin{table} \begin{tabular}{l c c c} \hline \hline Encoding & PSNR (\(\uparrow\)) & SSIM (\(\uparrow\)) & LPIPS (\(\downarrow\)) \\ \hline No encoding & 21.84 / 17.84 & 0.8667 / 0.1294 & 0.0871 / 0.7739 \\ OB(\(n\)), OB(\(\omega_{n}\)), OB(\(\omega_{t}\)) & 35.75 / 33.35 & **0.9647 / 0.9179** & **0.0495 / 0.1365** \\ Just OB(\(\omega_{n}\)) & 35.74 / 33.38 & 0.9641 / 0.9166 & 0.0498 / 0.1402 \\ Just SH(\(\omega_{n}\)) & **35.83** / **33.42** & **0.9647** / 0.9170 & 0.0498 / 0.1399 \\ \hline \hline \end{tabular} \end{table} Table 4: **Ablation study of different encodings on the Kitchen scene.** OB refers to the one-blob encoding [12]; SH refers to maximum degree-4 spherical harmonics. While encoding everything with one-blob performs slightly better than spherical harmonics in some categories, we chose to use spherical harmonics as it was superior to encoding everything in PSNR (for best results after denoising) while being extremely close in the other metrics.

\begin{table} \begin{tabular}{l c c c} \hline \hline Resolution & \#Wavelets & Framerate (FPS) & Frame time (ms) \\ \hline \multirow{3}{*}{\(512\times 512\)} & 32 & 42 & 24 \\ & 64 & 24 & 42 \\ & 128 & 12 & 87 \\ \hline \multirow{3}{*}{\(800\times 600\)} & 32 & 25 & 40 \\ & 64 & 13 & 78 \\ & 128 & 6 & 180 \\ \hline \hline \end{tabular} \end{table} Table 2: **Runtime Performance of our method with different resolutions, and # of wavelets.** We achieve interactivity in all cases.

Table 3: Evaluation of different hash table sizes (HM size; columns: total size, PSNR, SSIM, LPIPS) on the Four Animals scene.

## 7 Conclusions and Future Work

All-frequency relighting for indirect glossy reflections with changing illumination and view has been one of the long-standing challenges for precomputed radiance transfer, and real-time rendering in general. In this paper, we have taken an important step toward this goal, showing that a new approach leveraging modern MLP, hashgrid, and novel factorization techniques can address the challenge of glossy global illumination, obtaining the best of both traditional orthogonal Haar wavelet decomposition and neural light transport approximation. In future work, we wish to consider alternative factorizations and feature grids that may be more accurate and compact, and alternatives to the hash-grid that can be computed directly on the object/scene surface. More broadly, this paper has introduced a neural representation of 6D light transport that may be applicable in many other areas, including acquisition of the appearance of real scenes and modeling of neural materials.

## 8 Acknowledgements

We thank Peter-Pike Sloan, Alexandr Kuznetsov, Lingqi Yan, Ari Silvennoinen, Michal Iwanicki, Yash Belhe, Mohammad Shafiei, Pratul Srinivasan, Zhengqin Li and Alexander Mai for comments and discussions.
We additionally thank Alexander Mai, Falko Kuester, Mustafa Yaldz and Xiaoshuai Zhang for generously allowing us to use their compute resources, and we thank Gilles Rainer for answering questions. This work was funded in part from NSF grants 2212085, 2100237 and 2120019, the Ronald L. Graham Chair and the UC San Diego Center for Visual Computing. We also acknowledge gifts from Google, Adobe, Qualcomm, Meta and a Sony Research Award.
2307.08318
Airway Label Prediction in Video Bronchoscopy: Capturing Temporal Dependencies Utilizing Anatomical Knowledge
Purpose: Navigation guidance is a key requirement for a multitude of lung interventions using video bronchoscopy. State-of-the-art solutions focus on lung biopsies using electromagnetic tracking and intraoperative image registration w.r.t. preoperative CT scans for guidance. The requirement of patient-specific CT scans hampers the utilisation of navigation guidance for other applications such as intensive care units. Methods: This paper addresses navigation guidance solely incorporating bronchoscopy video data. In contrast to state-of-the-art approaches, we entirely omit the use of electromagnetic tracking and patient-specific CT scans. Guidance is enabled by means of topological bronchoscope localization w.r.t. an interpatient airway model. Particularly, we take maximal advantage of anatomical constraints of airway trees being sequentially traversed. This is realized by incorporating sequences of CNN-based airway likelihoods into a Hidden Markov Model. Results: Our approach is evaluated based on multiple experiments inside a lung phantom model. With the consideration of temporal context and use of anatomical knowledge for regularization, we are able to improve the accuracy up to 0.98 compared to 0.81 (weighted F1: 0.98 compared to 0.81) for a classification based on individual frames. Conclusion: We combine CNN-based single image classification of airway segments with anatomical constraints and temporal HMM-based inference for the first time. Our approach renders vision-only guidance for bronchoscopy interventions in the absence of electromagnetic tracking and patient-specific CT scans possible.
Ron Keuth, Mattias Heinrich, Martin Eichenlaub, Marian Himstedt
2023-07-17T08:26:36Z
http://arxiv.org/abs/2307.08318v1
Airway Label Prediction in Video Bronchoscopy: Capturing Temporal Dependencies Utilizing Anatomical Knowledge ###### Abstract **Purpose:** Navigation guidance is a key requirement for a multitude of lung interventions using video bronchoscopy. State-of-the-art solutions focus on lung biopsies using electromagnetic tracking and intraoperative image registration w.r.t. preoperative CT scans for guidance. The requirement of patient-specific CT scans hampers the utilisation of navigation guidance for other applications such as intensive care units. **Methods:** This paper addresses navigation guidance solely incorporating bronchoscopy video data. In contrast to state-of-the-art approaches, we entirely omit the use of electromagnetic tracking and patient-specific CT scans. Guidance is enabled by means of topological bronchoscope localization w.r.t. an interpatient airway model. In particular, we take maximal advantage of the anatomical constraints of airway trees being sequentially traversed. This is realized by incorporating sequences of CNN-based airway likelihoods into a Hidden Markov Model. **Results:** Our approach is evaluated based on multiple experiments inside a lung phantom model. With the consideration of temporal context and use of anatomical knowledge for regularization, we are able to improve the accuracy up to 0.98 compared to 0.81 (weighted F1: 0.98 compared to 0.81) for a classification based on individual frames. **Conclusion:** We combine CNN-based single image classification of airway segments with anatomical constraints and temporal HMM-based inference for the first time. Our approach renders vision-only guidance for bronchoscopy interventions in the absence of electromagnetic tracking and patient-specific CT scans possible. Video Bronchoscopy; Image-guided navigation; Sequential inference; Classification ## 1 Introduction Video Bronchoscopy (VB) is frequently carried out in Intensive Care Units (ICUs) due to several diagnostic and therapeutic indications such as removal of foreign objects, secretion sampling and suction as well as clarification of ventilation problems. Biopsies conducted in cases of suspected lung cancer target a limited number of predefined locations. In contrast, VB in ICUs often requires inspecting large portions of bronchial trees, i.e. interventions are performed within a large spatial extent. Constantly keeping track of the bronchoscope's position within the airway tree poses a significant mental challenge to physicians, potentially resulting in longer treatment times, which entail increased risk for patients. This is emphasized by the fact that the majority of physicians in ICUs are not pneumologists and have less practical experience in bronchoscopy, resulting in a greater error rate, also in the identification of upper airways [2]. Today, navigation guidance is a default tool for lung biopsies; however, these systems are uncommon in ICUs. The additional hardware, i.e. Electromagnetic Tracking (EMT), implies substantial costs, particularly due to its single-use components as well as additional setup times and requirements for medical staff [3]. The lack of prior CT scans ultimately constitutes the decisive reason for omitting EMT in ICUs, as patients are likely to be inappropriate for radiology transfers due to their unstable conditions and potential infection risks for patients and medical staff. An easy-to-use tracking solution solely utilizing interventional images and interpatient airway knowledge would therefore be a beneficial tool for this application.
The lung's airways are particularly well-suited for learning a generic model as the branch variation is limited. Clinical studies [4] have demonstrated that \(>95\%\) of patients share a common airway model with only two anatomical variations up to the fourth branching generation (thereof 16%: one accessory sub-superior segment; 6.1%: absent right medial-basal segment), which can be adequately accounted for in the training process. Navigation beyond this level is rather uncommon for intensive care settings. The requirements on spatial accuracy are moderate as long as topological consistency is preserved, i.e. the correct prediction of airways and bifurcations w.r.t. an interpatient model. Robust tracking as well as preserving and highlighting traversed airways increase a physician's confidence while simultaneously reducing intervention times and making quality less reliant on individual skills. The overall reduced risks for patients and medical staff promise a substantial impact for ICUs. Omitting additional single-use parts attached to bronchoscopes simplifies regulatory processes and minimizes costs for health insurances and other third-party payers. This paper presents a novel approach to purely image-based navigation guidance in VB consisting of the following components: 1. A Convolutional Neural Network (CNN)-based single-image classification of 15 airway segments up to the fourth branching generation. 2. An inference model for processing sequential airway likelihood data predicted by the CNN based on Hidden Markov Models (HMMs). Our work includes a substantial calibration of the aforementioned components, ensuring optimal results, which are demonstrated in exhaustive phantom experiments. Figure 1: Interpatient model with multi-label segmentation of airways generated based on [1]. Bronchoscopy video frames are assigned labels according to this anatomical model. Our approach predicts the airway label of the current bronchoscope location in a topological manner (grey circle; dashed line). ## 2 Related work _Airway classification_ In the past decade, CNNs have proven their ability to learn the extraction of task-specific features, which also enables their use in the image-based localization of the endoscope in VB. For example, CNNs have been directly used to predict the visibility of airways as well as their position and angle relative to the endoscope in the current frame [5]. This approach was supplemented with a particle filter to capture the temporal context, and is thus similar to our proposed method with an HMM. However, our method uses a CNN that directly predicts the current location based on the frame and is then regularized within the temporal context, while Sganga's CNN predicts only the features for the particle filter. Navigation support by means of airway classification up to the first branching generation using CNNs is investigated by [2]. Although presenting promising results and demonstrating the CNN's semantic understanding (by means of Class Activation Map (CAM) visualizations), the benefit for navigation tasks is rather limited. _Electromagnetic Navigation Bronchoscopy_ EMT-based solutions are state-of-the-art for navigation inside the lung and have been integrated into multiple commercial products, e.g. SuperDimension(tm) (Medtronic Inc., Minneapolis, MN), SPiN System(r) Veran (Veran Medical Technology Inc., St.
Louis, MO) and Monarch(r) (Auris Health Inc., Redwood City, CA) utilize preoperative patient-specific CTs, EMT and video bronchoscopies for navigation guidance. Based on airway segmentations, virtual bronchoscopy images are rendered from prior CTs, which are subsequently registered to intraoperative real bronchoscopy images [6; 7; 8; 9]. This process is stabilized by incorporating EMT. The methodological background for this hybrid method is exhaustively investigated in [3; 10], with [11] giving particular insights into deformable registration of in-vivo and virtual bronchoscopy images. Deligianni et al. investigate statistical shape models aiming at airway motion compensation in registration [12]. The utilization of interpatient knowledge is rather uncommon but exhibits superior capabilities for both missing patient-specific CT scans and motion compensation, as exhaustively motivated in [4]. _Endobronchial pose estimation_ Solutions to endobronchial estimation of metric camera poses omitting EMT have been investigated for more than two decades. The foundation for this is given by airway structures segmented in CT scans. One group (a) of existing approaches generates virtual bronchoscopy sequences (2D images) that are subsequently matched to in-vivo images regarding image similarity. The particular challenge here is the domain gap arising from substantially different textures of tissue surfaces in in-vivo and rendered images, respectively. To address this problem, Sganga et al. employ domain randomization across the texture space, which can mitigate the impact of textural appearances [5]. Approaches solely examining image similarity are subject to ambiguity for global pose estimation and thus rather limited to pure (local) tracking tasks [3]. Another group (b) resolves the domain gap through an intermediate representation, where depth maps are estimated from RGB images using GANs [13; 14; 15]. Ground truth depth maps are generated based on paired CT and in-vivo datasets, which is partly accompanied by pre-training on large-scale CT datasets incorporating virtual bronchoscopies and depth maps rendered thereof. A third group (c) of approaches utilizes visual Simultaneous Localization And Mapping (SLAM) or Structure from Motion (SfM) for metric pose estimation. Wang et al. utilize SLAM based on ORB features to establish 2D-3D correspondences of intraoperative bronchoscopy frames and prior CT scans omitting EMT [16]. Similarly to [17], this approach relies on sufficient structural content to be tracked and constant camera motion. For an in-depth review of endobronchial pose estimation approaches, the reader is referred to [18]. _Summary_ The majority of existing approaches focusses on the prediction of camera poses, i.e. metric navigation. While traditional approaches utilize nonlinear 2D-3D or 2D-2D registration, it can be observed that 3D-3D registration methods incorporating learned depth maps, mainly acquired from virtual bronchoscopy, have drawn particular interest in recent research. Predicting endobronchial locations solely from bronchoscopy sequences and thus treating pose estimation as airway recognition, i.e. topological navigation, has been barely addressed by the research community, except for the limited solution presented in [2]. Most of the state-of-the-art approaches require at least prior patient-specific CT scans for navigation, which hampers their use for applications outside tissue biopsies (e.g. ICU).
## 3 Methods ### Dataset Due to the lack of publicly available datasets that cover multiple sequences of in-vivo bronchoscopies, we develop our method on a synthetic but publicly available dataset [17] generated on a simplified silicone phantom of the bronchial tree. The simplified phantom covers 17 bronchial branches (also called nodes) up to the fourth generation of the bronchial tree. Each of these nodes marks the destination of one of 16 bronchoscopies, each with the trachea as its start and end point, resulting in 39 000 RGB images in total. The utilized phantom dataset is accompanied by a 3D mesh model of the bronchial tree, as well as the exact global position of the endoscope for each frame. However, it lacks ground truth airway labels. Thus, we manually annotated the airway structures inside the mesh model, enabling a subsequent automatic assignment of airway labels to individual bronchoscopy frames based on their ground truth poses w.r.t. the mesh. Because we use the dataset to optimize our CNNs as well as our HMM, subdivision into disjoint splits is crucial. We decide to split the dataset along the sequences, making sure that the split \(F\) for the training of the CNNs contains all anatomical classes but as few sequences as possible, holding as many of those back for the optimization of the HMM. We implement a greedy algorithm that repeatedly picks the sequence covering the most still-uncovered anatomical classes until each class is covered, resulting in a split with six sequences and 15 961 images. For the final evaluation of the complete detection pipeline on unseen images and sequences, we choose the two longest sequences (5 175 images), which cover the most nodes of the left and right side of the bronchial tree, as the test split \(T\). This leaves eight sequences with 18 463 images in total for the sequence-based optimization of the HMM. The class distribution of each final split is shown in Fig. 2 and details are summarized in Tab. 1. Figure 2: Class distribution of each dataset split in percent. Please see Fig. 4 for the anatomical position of each label. ### Classification pipeline Fig. 3 shows our proposed pipeline for the image-based localization of the endoscope during a VB. A Lite Reduced Atrous Spatial Pyramid Pooling (Lite R-ASPP) [19] generates the semantic segmentation of the bronchial orifice for each frame as an abstract scene representation to overcome image artifacts like secretions or bubbles generated by a coughing patient during the VB. We trained the Lite R-ASPP with a generated segmentation ground truth due to the lack of readily usable annotations. Although this generated ground truth is just a weak supervision, we were able to show that the Lite R-ASPP gained a semantic understanding of bronchial orifices and is also fairly domain robust [20]. We then use the latent space of the Lite R-ASPP as the input for a shallow Residual Network (ResNet) [21] (800\(k\) parameters, 30 epochs, Adam with \(1e-3\) learning rate, same data augmentation as in [20]) to localize the endoscope within the bronchial tree by classifying the visible anatomical structure. Due to the high class imbalance, we undersample all classes to match the least frequent one as closely as possible by increasing the stride on the frames for each class individually and, in addition to that, weight the cross entropy loss for each class by its rooted inverse frequency.
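As a concrete illustration of this class-imbalance handling, the following minimal NumPy sketch derives a per-class sampling stride and the rooted inverse-frequency loss weights. The class names, counts, and the final normalization of the weights are illustrative assumptions, not the actual dataset statistics.

```python
import numpy as np

# illustrative per-class frame counts (not the real dataset statistics)
class_counts = {"Trachea": 6000, "RMB": 2500, "LMB": 2400, "RUL": 900, "LLB6": 350}
counts = np.array(list(class_counts.values()), dtype=float)

# undersampling: increase the frame stride per class so that every class
# roughly matches the least frequent one
strides = np.maximum(1, np.round(counts / counts.min()).astype(int))
kept = np.floor(counts / strides)

# cross-entropy weights: rooted inverse class frequency (normalization assumed)
freq = counts / counts.sum()
weights = 1.0 / np.sqrt(freq)
weights = weights / weights.sum() * len(counts)

for name, s, k, w in zip(class_counts, strides, kept, weights):
    print(f"{name:8s} stride={int(s):2d} kept={int(k):4d} weight={w:.2f}")
```

The resulting weight vector would then be passed to the cross-entropy loss of the classifier, while the strides control which frames of each class are sampled during training.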
Finally, an HMM predicts the anatomical region of the current frame based on the likelihood predictions of all frames by the CNN classifier. The HMM adds the missing awareness of the temporal context and enables the explicit use of anatomical a priori knowledge through its formulation within the regularization term. ### Calibration of CNN classifier Nowadays, CNNs tend to overestimate their prediction's confidence, resulting in a gap between the (softmax) probability associated with the predicted class and its actual likelihood, which is mainly due to the increased model capacity [22]. However, as the likelihood prediction forms the basis of our HMM, this miscalibration would bias the classification within the temporal context and is therefore addressed by temperature scaling [22], which introduces a variable \(T\in\mathbb{R}^{+}\) to globally scale the logits \(\mathbf{z}\) of the model. \[\hat{p}(\mathbf{m}|\omega_{i})=\sigma_{\mathrm{SM}}^{(i)}\left(\frac{\mathbf{z}}{T}\right) \tag{1}\] where \(\sigma_{\mathrm{SM}}^{(i)}\) describes the softmax score for the _i_-th class. \(T\) is optimized using the Negative Log Likelihood (NLL) to fit the model's prediction distribution to the ground truth distribution of the validation data, with initial \(T_{0}=1\). \begin{table} \begin{tabular}{c|l|c|l|l} split & processing level & sequences ID of dataset & \#images & used for \\ \hline \(F\) & frames & \(\{3,4,5,6,8,15\}\) & \(15\,961\) & training CNNs for classification \& segmentation \\ \(S\) & sequences & \(\{2,9,10,11,12,13,14,16\}\) & \(18\,463\) & optimizing \(\lambda_{\mathcal{R}}\) (see Sec. 3.4.5) \\ \(T\) & sequences & \(\{0,7\}\) & \(5175\) & test of the detection pipeline (see Fig. 3) \\ \end{tabular} \end{table} Table 1: Description of our defined dataset’s splits and their usage. All frames of \(F\) are processed independently of their sequences during the training of the CNNs. Please read Sec. 3.1 for further details and our motivation. Figure 3: Structure of our proposed pipeline for the image-based localization of the endoscope during a VB. \(f\) maps the current frame \(\mathbf{m}_{n\in[1,\dots,N]}\) to its corresponding semantic segmentation \(\hat{s}_{n}\). The classifier \(e\) predicts the likelihood \(\hat{p}(\hat{s}_{n}|\omega_{n})\) for each possible anatomical label \(\omega\) based on \(\hat{s}_{n}\). Finally, a Hidden Markov Model (HMM) captures the temporal context and predicts the posterior probability \(\hat{p}(\omega_{n}|\mathbf{m}_{1,\dots,N})\) for the current frame given the whole sequence of frames. \(f\) and \(e\) are implemented via two CNNs with their trainable parameters \(\mathbf{\theta}\). ### Dynamic programming for time domain #### 3.4.1 Hidden Markov Model A VB can be considered as a sequence of frames \(\{\mathbf{m}_{n}\}_{n=1}^{N}\) with their likelihood probability distribution \(p(\mathbf{m}|\omega)\) for each anatomical class \(\omega\in\Omega\), where the likelihood is predicted by the calibrated CNN classifier. The Hidden Markov Model (HMM) models the temporal context within this sequence, obtaining the most likely label sequence via a Maximum A Posteriori (MAP) estimation. \[\hat{\omega}_{1,\dots,N}=\operatorname*{arg\,max}_{\omega_{1,\dots,N}}p(\mathbf{m }_{1,\dots,N}|\omega_{1,\dots,N})p(\omega_{1,\dots,N}) \tag{2}\] \[\hat{\omega}_{1,\dots,N}=\operatorname*{arg\,max}_{\omega_{1,\dots,N}}\left[ \left(\prod_{n=1}^{N}p(\mathbf{m}_{n}|\omega_{n})\right)\left(\prod_{n=2}^{N}p( \omega_{n}|\omega_{n-1})\right)\right] \tag{3}\] where Eq. 3 holds due to the HMM modelling the prior of each timestamp as the transition probability between different classes in \(\Omega\).
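To make the factorization in Eq. 3 concrete, the following sketch scores a single candidate label sequence by summing negated log-likelihoods and log-transition probabilities; this negated-log view is exactly what the generalized energy formulation below builds on. All probabilities and the candidate sequence are synthetic illustrations.

```python
import numpy as np

def sequence_score(log_lik, log_trans, labels):
    """Negated log of the MAP objective in Eq. 3 for one candidate label sequence.

    log_lik:   (N, K) array of log p(m_n | omega) from the calibrated classifier
    log_trans: (K, K) array of log p(omega_n | omega_{n-1})
    labels:    length-N candidate sequence of class indices
    """
    score = -log_lik[0, labels[0]]
    for n in range(1, len(labels)):
        score += -log_lik[n, labels[n]] - log_trans[labels[n - 1], labels[n]]
    return score  # lower = more probable; the arg min over sequences is the MAP estimate

# toy example with 3 classes and 4 frames
rng = np.random.default_rng(0)
lik = rng.dirichlet(np.ones(3), size=4)
trans = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
print(sequence_score(np.log(lik), np.log(trans), [0, 0, 1, 1]))
```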
A more generalized formulation considers the likelihood as a unitary data term \(\mathcal{D}_{n}(\omega_{n})\) as well as the prior as a pairwise regularization term \(\mathcal{R}_{n}(\omega_{n},\omega_{n-1})\) \[\hat{\omega}_{1,\dots,N}=\operatorname*{arg\,min}_{\omega_{1,\dots,N}}\left[ \sum_{n=1}^{N}\mathcal{D}_{n}(\omega_{n})+\sum_{n=2}^{N}\mathcal{R}_{n}( \omega_{n},\omega_{n-1})\right] \tag{4}\] with all probabilities considered as negated and logarithmized (23, Sec. 11.1). #### 3.4.2 Data and regularization term The data term models the likelihood \(p(\mathbf{m}_{n}|\omega_{n})\) that an anatomical region \(\omega\) is visible in the current frame \(\mathbf{m}_{n}\). Our CNN classifier can be directly optimized with the NLL to predict this likelihood. After its calibration, the prediction likelihood \(\hat{p}(\mathbf{m}|\omega_{i})\) of Eq. 1 is finally reformulated to match the minimization style of Eq. 4: \[\mathcal{D}_{n}(\omega_{i})=\frac{1-\hat{p}(\mathbf{m}|\omega_{i})}{|\Omega|-1}. \tag{5}\] We initialize \(\mathcal{D}_{0}\) and \(\mathcal{D}_{N}\) with the one-hot vector for the class trachea because every sequence in our database will start and end there. The generalized formulation of the regularization term as an arbitrary cost term enables the explicit use of anatomical a priori knowledge. In the context of VB, the cost is represented as the distance between the bronchial branches to explicitly model the anatomical knowledge. Therefore, we model the bronchial tree as a tree graph with the trachea as its root, but consider it as an undirected graph apart from its definition (24, Sec. 2.4) (see Fig. 4). We precompute the distance matrix \(\mathbf{D}\in\mathbb{N}_{0}^{|\Omega|\times|\Omega|}\) with a simple depth-first search and normalize it by its maximum. \[\mathcal{R}(\omega_{i},\omega_{j})=\exp\left(\frac{\mathbf{D}_{ij}}{\max(\mathbf{D})}\right) \tag{6}\] This kind of formulation penalizes any rapid label changes to a distant region that are not plausible within the anatomy of the bronchial tree, depending on its distance. Figure 4: An undirected tree graph modelling the bronchial tree with labeled bronchial branches (nodes) covered by our phantom (see Fig. 1). For simplicity, the distance between adjacent bronchial branches is 1 regardless of their actual anatomical distance. #### 3.4.3 Viterbi algorithm Calculating every possible sequence to find the most likely one has an exponential runtime complexity \(\mathcal{O}(|\Omega|^{N})\) and is therefore not feasible for long sequences such as the number of frames of a VB in our use case. The Viterbi algorithm reduces the runtime complexity to a polynomial one \(\mathcal{O}(N|\Omega|^{2})\), implementing a dynamic programming approach. It breaks up the global minimization problem into multiple local minimization problems. This is achieved by passing the local solution of one sequence step to the following one via message passing. In this way, the solution of the next step is calculated recursively from the solution of the previous step: \[m_{n}(\omega_{n})=\mathcal{D}_{n}(\omega_{n})+\min_{\omega_{n-1}}[m_{n-1}( \omega_{n-1})+\lambda_{\mathcal{R}}\mathcal{R}(\omega_{n},\omega_{n-1})] \tag{7}\] with \(m_{0}(\omega_{n})=\mathcal{D}_{0}(\omega_{n})\) and where \(\lambda_{\mathcal{R}}\in\mathbb{R}_{0}^{+}\) describes the weighting of the data and regularization term as a hyperparameter (23, Sec. 11.2.1).
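A minimal NumPy sketch of the recursion in Eq. 7, combined with the data term of Eq. 5 and the graph-distance regularizer of Eq. 6, is given below. The tiny four-node tree and the class probabilities are illustrative stand-ins for the airway graph of Fig. 4, the backtracking step is the standard Viterbi decoding, and the trachea pinning of \(\mathcal{D}_{0}\), \(\mathcal{D}_{N}\) is omitted for brevity.

```python
import numpy as np
from collections import deque

def tree_distances(adjacency):
    """All-pairs hop distances on an undirected tree via BFS from every node."""
    n = len(adjacency)
    D = np.zeros((n, n), dtype=int)
    for s in range(n):
        dist, queue = {s: 0}, deque([s])
        while queue:
            u = queue.popleft()
            for v in adjacency[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for v, d in dist.items():
            D[s, v] = d
    return D

def viterbi(probs, R, lam):
    """Minimum-cost label sequence following Eq. 7.
    probs: (N, K) calibrated class probabilities, R: (K, K) regularizer, lam: weighting."""
    N, K = probs.shape
    data = (1.0 - probs) / (K - 1)            # data term, Eq. 5
    m = data[0].copy()                         # messages m_n(omega)
    back = np.zeros((N, K), dtype=int)
    for n in range(1, N):
        trans = m[:, None] + lam * R           # m_{n-1}(w') + lam * R(w, w')
        back[n] = trans.argmin(axis=0)
        m = data[n] + trans.min(axis=0)        # Eq. 7
    path = [int(m.argmin())]
    for n in range(N - 1, 0, -1):              # backtrack the most likely sequence
        path.append(int(back[n, path[-1]]))
    return path[::-1]

# toy example: trachea(0) -- {left main(1), right main(2)}, left main(1) -- 3
adjacency = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
R = np.exp(tree_distances(adjacency) / tree_distances(adjacency).max())  # Eq. 6
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.1, 0.2, 0.6, 0.1],        # noisy jump to the other side
                  [0.1, 0.6, 0.1, 0.2]])
print(viterbi(probs, R, lam=1.0))              # the implausible jump to node 2 is smoothed away
```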
#### 3.4.4 Approximation of the forward-backward algorithm The forward-backward algorithm (23, Sec. 11.4) calculates the sum of all paths through \(\omega_{n}\) to obtain not only the most likely class label for each time step, but also the probability distribution over all classes: \[p(\omega_{n}|\mathbf{m}_{1,\dots,N})\propto p(\mathbf{m}_{1,\dots,n}|\omega_{n})p( \omega_{n})p(\mathbf{m}_{n+1,\dots,N}|\omega_{n}) \tag{8}\] The right-hand side of this equation can be intuitively implemented via two Viterbi passes: one starts at the beginning of the sequence and calculates the most likely path up to \(\omega_{n}\), approximating \(p(\mathbf{m}_{1,\dots,n}|\omega_{n})\) with \(m_{n}^{f}(\omega_{n})\); the other approximates \(p(\mathbf{m}_{n+1,\dots,N}|\omega_{n})\), starting at the last time step of the sequence and going backwards up to \(\omega_{n}\). Fig. 5 visualizes the schema of this intuition. Because both Viterbi passes consider \(p(\omega_{n})\), it has to be subtracted once to obtain the correct marginal distributions proportional to the posterior distribution (see Fig. 5). However, due to the use of the minimum and not the sum as an aggregation function, the resulting marginal distribution can only be considered an approximation of the distribution obtained by the forward-backward algorithm. \[p(\omega_{n}|\mathbf{m}_{1,\dots,N})\propto m_{n}^{f}(\omega_{n})+m_{n}^{b}(\omega _{n})-\mathcal{D}_{n}(\omega_{n}) \tag{9}\] Figure 5: Intuition for the approximation of the forward-backward algorithm via two Viterbi passes, enabling the calculation of the marginal distribution over all classes, which is proportional to the posterior probabilities. The \(n\in N\) individual steps of the sequence with their possible labels \(\omega\in\Omega\) are shown from left to right. All paths considered by the forward-backward algorithm are drawn in red. The blue hull marks the incoming paths covered by the Viterbi \(m_{n}^{f}(\omega_{1})\) running forward through the sequence, and the orange hull the one running backwards and covering all outgoing paths \(m_{n}^{b}(\omega_{1})\). #### 3.4.5 Optimization of \(\mathbf{\lambda_{\mathcal{R}}}\) We normalize the data as well as the regularization term to a comparable value range so that the weighting \(\lambda_{\mathcal{R}}\) between those terms does not have to compensate for vastly different scales and can therefore remain a semantic one. We use gradient descent to optimize \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}\) over the eight sequences of the validation split. This is possible due to the provided marginal class distributions for each time step, which can be compared to the ground truth using the NLL. We initialize \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}=1\) to start with an equal weighting of the data and regularization term. To guarantee the value space of \(\mathbb{R}_{0}^{+}\), we activate \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}\) with a Rectified Linear Unit (ReLU). The optimization of only one parameter enables the use of a memory-expensive second-order optimizer like L-BFGS [25], resulting in a short training time with few iterations necessary. To evaluate the results, we also obtain a minimum by a brute-force search within a reasonable interval considering the NLL plot (see Fig. 6) with 240 samples.
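The two-pass approximation of Eq. 9 and the selection of \(\lambda_{\mathcal{R}}\) can be sketched as follows. The conversion of the combined costs into a distribution via a softmax over negative costs is an assumption on our part (the normalization step is not spelled out above), the backward pass is realized as a forward pass on the reversed sequence (valid since \(\mathcal{R}\) is symmetric), and the data are synthetic; the brute-force grid mirrors the 240-sample search of Sec. 3.4.5 rather than the L-BFGS fit.

```python
import numpy as np

def forward_messages(data, R, lam):
    """Viterbi messages m_n(omega) of Eq. 7, returned for every step."""
    N, K = data.shape
    m = np.zeros((N, K))
    m[0] = data[0]
    for n in range(1, N):
        m[n] = data[n] + (m[n - 1][:, None] + lam * R).min(axis=0)
    return m

def approx_marginals(data, R, lam):
    """Approximate posterior of Eq. 9 from a forward and a backward Viterbi pass."""
    mf = forward_messages(data, R, lam)
    mb = forward_messages(data[::-1], R, lam)[::-1]   # backward pass on the reversed sequence
    scores = mf + mb - data                            # Eq. 9 (costs, lower = more likely)
    scores = scores - scores.min(axis=1, keepdims=True)
    probs = np.exp(-scores)                            # assumed softmax-style normalization
    return probs / probs.sum(axis=1, keepdims=True)

def nll(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# synthetic validation data: 3 classes on a small chain graph, 6 frames
rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 1, 2])
noisy = np.eye(3)[labels] * 0.6 + rng.random((6, 3)) * 0.4
noisy /= noisy.sum(axis=1, keepdims=True)
data = (1.0 - noisy) / (3 - 1)                                  # data term, Eq. 5
R = np.exp(np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]]) / 2.0)   # Eq. 6 on a 3-node chain

grid = np.linspace(0.0, 60.0, 241)                              # step size 0.25
best = min(grid, key=lambda lam: nll(approx_marginals(data, R, lam), labels))
print(f"brute-force lambda_R: {best:.2f}")
```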
## 4 Results The Viterbi enables the optimization of the weighting of data and regularization term \(\lambda_{\mathcal{R}}\) on the eight validation sequences of the dataset. The normalization of the data and regularization term value range to a comparable one of \([0,1]\) is a crucial step to stabilize this optimization. However, the additional exponential amplification of the regularization term after the normalization is necessary to prevent spurious label changes between adjacent frames. Starting with \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}=1\), the optimization using L-BFGS and the NLL finds a minimum at \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}=22.43\). We validate this result with the minimum obtained by the brute-force search with \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{BF}}\in[0,60]\) and 240 samples, resulting in a step size of 0.25 and \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{BF}}=23.5\). Fig. 6 shows the NLL and accuracy of the eight individual sequences as well as their average for this search space. We evaluate the detection pipeline with \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}\) on the two unused sequences of test split \(T\). Those sequences are manually chosen and cover the lower lobes of the left and right bronchial tree. We compare the performance of the frame-based classification (only CNN classification) with the one from the Viterbi. With the consideration of temporal context and use of anatomical knowledge for regularization, the accuracy for the first sequence can be improved by 17 points from 0.8061 to 0.9775 as well as for the second by 5 points from 0.6159 to 0.6649. Tab. 2 shows a collection of common classification metrics along with the average distance calculated based on matrix \(\mathbf{D}\) (see Eq. 6) comprising the shortest paths between airways within our bronchial tree model (see Fig. 4). For a qualitative comparison, we visualize the cost for each airway label and frame (see Fig. 7) for the frame-based CNN classification as well as the Viterbi. We rearrange the labels according to their anatomical distance within the bronchial tree (collapsing the tree onto a line). This enables the interpretation of the cost over time as the path taken by the endoscope within the phantom of the bronchial tree. We highlight the predicted path as well as the ground truth. Figure 6: The search space covered by the brute-force method to find the optimal weighting \(\hat{\lambda}_{\mathcal{R}}\in[0,60]\) between data and regularization term. The upper figure shows the NLL and accuracy of the eight validation sequences, which are used for the optimization, the lower plot their average. The minimum found with gradient descent \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{GD}}=22.43\) sufficiently approximates the one found via brute-force search \(\hat{\lambda}_{\mathcal{R}}^{\mathrm{BF}}=23.5\) considering the gradient of the NLL in this area. ## 5 Discussion The multiple use of the dataset splits (see Tab. 1) could cause an unnoticed bias in the evaluation of our method. However, we have been limited due to the lack of other publicly available datasets. The approximation of the forward-backward algorithm by applying the Viterbi forwards and backwards on the data enables the visualization of the next likely path as well as the confidence of the algorithm. It additionally enables the direct optimization of \(\hat{\lambda}_{\mathcal{R}}\) on the dataset, which makes manual tweaking of this hyperparameter obsolete. The result \(\hat{\lambda}_{\mathcal{R}}^{\text{GD}}=22.43\) using gradient descent is comparable to the one found using a brute-force search \(\hat{\lambda}_{\mathcal{R}}^{\text{BF}}=23.5\), which explicitly holds considering the gradient of the NLL in this area (see Fig. 6).
Figure 7: Visualization of the cost for each airway label and frame. The left plots show the frame-based classification (calibrated CNN), which is also used as the data term for the Viterbi. The right column shows the classification with the temporal awareness and anatomical regularization implemented with the Viterbi. The predicted path and the ground truth are highlighted. Due to the rapid change of labels between adjacent frames, the predicted path is omitted for the frame-based classification to improve readability. However, the minimum-cost path can be extracted by the color coding of the plot. To evaluate the generalization power of this method, but especially of the estimated \(\hat{\lambda}_{\mathcal{R}}\), additional datasets including in-vivo VB are needed. One other interesting aspect that has to be evaluated is the sensitivity of \(\hat{\lambda}_{\mathcal{R}}\) to different frame rates. It has to be mentioned that our airway label generation could lead to a noisy ground truth, especially at the transition to another airway, explaining the ground truth "hiccups" in the second sequence around frame index 1700 (see Fig. 7(b)). The Viterbi overall improves the classification and all six metrics of the quantitative results shown in Tab. 2 by introducing the temporal context. The use of anatomical knowledge as a regularization prevents implausible jumps within a few frames. For example, without the Viterbi the prediction by the CNN would flicker from the left lower lobe of the bronchial tree to the right lower lobe (see Fig. 7(a) at frame indices 600 and 1600) and vice versa (see Fig. 7(b)), covering implausibly long anatomical distances, e.g. between "LLB6" and "RLL7". Thus, the prediction of the CNN gets successfully regularized to the most likely sequence that covers an anatomically plausible path within the bronchial tree. This is also emphasized by the increase of the top 3 accuracy (Acc@3) from an average of 0.82 to 1, and particularly by the decrease of the average distance and its standard deviation within the bronchial tree graph from \(0.87\pm 1.82\) to \(0.18\pm 0.31\). This demonstrates the benefits of regularizing the prediction w.r.t. anatomical constraints, i.e. favouring adjacent airways in the bronchial tree. A disadvantage of this method is the introduced inertia in changing to a new label, which is comparable to the group delay of a common low-pass filter. This results in the need for a long sequence of frames with a fairly high confidence for a specific label. An example of such a situation can be seen in Fig. 7(b) around frame index 1500: even if the CNN shows a high confidence for the correct label "TriRLL", the Viterbi's prediction stays at "RLL7" due to the only brief appearance of "TriRLL" and the associated additional and expensive changes in class. Even if this behavior is acceptable and a possible sequence for a VB, a more responsive label change would be preferable, especially considering the real-world application, where it is a common use case for the physician to quickly check an airway. ## 6 Conclusion In this work, we proposed a classification pipeline for the image-based localization of the endoscope during a VB. Our pipeline consists of multiple steps using a semantic bronchial orifice segmentation as an abstract scene representation and a CNN that classifies the anatomical region based on the segmentation.
We demonstrated that the use of a Viterbi can successfully capture the temporal context and enables the explicit use of anatomical knowledge to constrain the classification to sequences that are plausible within the bronchial tree. The approximation of the posterior distribution by applying the Viterbi forward and backward gives good insight into the confidence of the classification as well as the next likely alternative sequence. However, the Viterbi also introduces an inertia, requiring longer stretches of highly confident predictions before changing to a new class. Based on our outstanding results obtained from a phantom study, we will investigate the generalization capabilities of this approach to in-vivo datasets. Note that (to the best of our knowledge) this is the first approach to vision-only bronchoscopy guidance using generic anatomy-regularized topological airway navigation, omitting electromagnetic tracking and patient-specific CT scans. This is a first important step towards future bronchoscopy guidance enabling deployments also outside lung biopsies, e.g. at intensive care units. \begin{table} \begin{tabular}{c l|c c c c c c c} seq ID & classifier & Acc@1 & Acc@3 & Precision & Recall & F1 & AUC & \(\mathbf{D}(\mu\pm\sigma)\) \\ \hline \multirow{2}{*}{0} & frame-based & 0.81 & 0.88 & 0.81 & 0.81 & 0.81 & 0.95 & \(0.66\pm 1.62\) \\ & Viterbi & 0.98 & 1.00 & 0.98 & 0.98 & 0.98 & 1.00 & \(0.02\pm 0.15\) \\ \hline \multirow{2}{*}{1} & frame-based & 0.62 & 0.76 & 0.62 & 0.62 & 0.62 & 0.91 & \(1.09\pm 2.02\) \\ & Viterbi & 0.66 & 1.00 & 0.66 & 0.66 & 0.66 & 0.90 & \(0.34\pm 0.47\) \\ \hline \hline \multirow{2}{*}{average} & frame-based & 0.71 & 0.82 & 0.71 & 0.71 & 0.71 & 0.93 & \(0.87\pm 1.82\) \\ & Viterbi & 0.82 & 1.00 & 0.82 & 0.82 & 0.82 & 0.95 & \(0.18\pm 0.31\) \\ \end{tabular} \end{table} Table 2: Quantitative results on the two test sequences of our classifier without and with temporal awareness. All metrics use micro averaging to compensate for the high class imbalance and to emphasize the overall performance over all classes equally. \(\mathbf{D}\) describes the distance of the predicted and the ground truth airway label in our bronchial tree graph (please see Fig. 4 and Eq. 6 for further details). Acc@3 and especially \(\mathbf{D}\) emphasize the superiority of the Viterbi with its time awareness and anatomical regularization, showing that the correct airway label is at least among the three most probable predictions and in the immediate anatomical neighborhood of the current one.
2306.07302
Impact of Experiencing Misrecognition by Teachable Agents on Learning and Rapport
While speech-enabled teachable agents have some advantages over typing-based ones, they are vulnerable to errors stemming from misrecognition by automatic speech recognition (ASR). These errors may propagate, resulting in unexpected changes in the flow of conversation. We analyzed how such changes are linked with learning gains and learners' rapport with the agents. Our results show they are not related to learning gains or rapport, regardless of the types of responses the agents should have returned given the correct input from learners without ASR errors. We also discuss the implications for optimal error-recovery policies for teachable agents that can be drawn from these findings.
Yuya Asano, Diane Litman, Mingzhi Yu, Nikki Lobczowski, Timothy Nokes-Malach, Adriana Kovashka, Erin Walker
2023-06-11T21:49:42Z
http://arxiv.org/abs/2306.07302v1
# Impact of Experiencing Misrecognition by Teachable Agents on Learning and Rapport ###### Abstract While speech-enabled teachable agents have some advantages over typing-based ones, they are vulnerable to errors stemming from misrecognition by automatic speech recognition (ASR). These errors may propagate, resulting in unexpected changes in the flow of conversation. We analyzed how such changes are linked with learning gains and learners' rapport with the agents. Our results show they are not related to learning gains or rapport, regardless of the types of responses the agents should have returned given the correct input from learners without ASR errors. We also discuss the implications for optimal error-recovery policies for teachable agents that can be drawn from these findings. Keywords:Teachable agents Automatic speech recognition Rapport. ## 1 Introduction and Related Work Students benefit from teaching others more than being tutored [1]. This effect of learning by teaching also holds when they teach a virtual agent [7] or an embodied robot [9] (called a teachable agent or robot). Although interaction with teachable agents can be done through typing [6] or speech [2], the literature suggests speech-based interaction is powerful in tutorial dialogues in general because speech enables learners to complete tasks faster than typing due to ease of production [3]. However, speech-based teachable agents are susceptible to misrecognition made by automatic speech recognition (ASR) when converting speech input to text. It may change the flow of the dialogue between students and agents and thus affect students' learning and perception of the agents. Past work has explored how misrecognition in different stages of speech-enabled tutorial dialogue systems (Fig. 1 shows the structure of our system) is related to students' learning gain and evaluation of the systems. D'Mello et al. [3] have found word error rates of ASR were not associated with students' learning gain but were related to their satisfaction with systems. Litman and Forbes-Riley [8] have found no correlations between errors made by a natural language understanding (NLU) module and students' learning. However, errors in earlier stages may not propagate to the outputs of a system. For example, mistakes in singular versus plural forms or "a" versus "the" made by ASR have little effect on NLU. Moreover, even if NLU fails to map "massive" to "big" because "massive" is out of vocabulary, the output may not change if it does not care whether a user says "big". Thus, errors in ASR or NLU may not represent misrecognition felt by end users. In this paper, we focus on mistakes that are retained until the final outputs of systems. Also, Dzikovska et al. [4] have shown that the frequency of a system not understanding user inputs and thus replying with a neutral response is negatively correlated with user satisfaction but not with learning gain. We instead distinguish the case where it is fine for dialogue systems to return a neutral response because learners say something irrelevant from the case where the systems should respond with something more specific. We fill a literature gap on the effect of errors by dialogue systems on learners. First, we extend the work on tutoring systems that appear more knowledgeable than learners to teachable agents that are at most the same level. Second, we analyze errors observable by learners, instead of errors internal to systems. 
We define _dialogue misrecognition_ as the situation where an agent responds differently to raw ASR inputs and "true" inputs. Finally, we evaluate an agent with students' sense of rapport with it rather than user satisfaction. Rapport is a predictor of learning for both human and agent tutes [9] and leaves a positive impression that helps agents establish a long-term relationship with learners [5]. Thus, it is likely a more direct metric than satisfaction with the effectiveness of human-agent collaborations. We have found dialogue misrecognition is not linked with learning or rapport. Our results are in line with the literature but contribute to it by measuring only misrecognition that impacts the flow of conversation and by using an outcome more suitable for human-agent collaborations. ## 2 Method ### Dataset We used the dataset from an experiment where 40 undergraduate students (35 female, 5 male; 17 White, 13 Asian, 5 Black, 1 Latino, 4 unknown; mean age \(=19.64\), \(SD=1.25\)) in a US city taught ratio word problems to a robot named Figure 1: The structure of our dialogue system. When it receives audio input, it converts audio to text. Then, Artificial Intelligence Markup Language (AIML) finds a pattern in the input text (NLU), selects an output (Dialogue Manager), and remembers it as a context. Finally, the system converts the text to speech. Emma using spoken dialogue for 30 minutes. The study was over Zoom due to COVID-19. To talk to Emma, they pushed and held a button on a web application. She is designed to follow up on student explanations with questions or her own explanations. For example, she says _"So I multiply because I have three times as many people?"_ after a student tells _"You have to multiply cups of seltzer by three."_ She also guides students to a correct solution when they are unsure about the answer. For instance, if a student's utterance contains _"I don't know"_, she suggests the next step, saying _"Me either. Maybe I start by dividing?"_. We had two experimental conditions: students teaching in pairs (n=28) or alone (n=12). Our design was quasi-experimental; if only one of the students showed up out of a pair, we had them teach Emma alone. Students in pairs taught her collaboratively by alternating between discussing the problems with their partners and talking to her. We excluded one pair from our analysis because one of the students did not speak to Emma during the session. The students individually took a pre-test before the session and a post-test and a survey after that to assess learning and rapport with Emma. We prepared two versions of pre- and post-tests each of which had 13 ratio problems similar to what they saw while teaching her. We removed five isomorphic problems from each test that proved to have different difficulties between versions, leaving eight problems for analysis [10]. On average, students scored 5.58 (\(SD=1.90\)) in the pre-test and 6.82 (\(SD=1.35\)) in the post-test. Rapport was measured on a six-point Likert scale devised by [9] (mean = 4.49, \(SD=.679\)) and represented the average of items asking about mutual positivity, attention, and coordination between the student and the robot [9, 2]. ### Quantification of dialogue misrecognition We simulated Emma's responses to "true" inputs as follows. First, we manually transcribed the students' utterances directed to her. Let \(U_{t}^{H}\) be the human transcribed utterance of a student directed to Emma in turn \(t\) and \(U_{t}^{ASR}\) be the utterance transcribed by ASR. 
Next, we sent \(U_{t}^{H}\) to Emma to get a simulated "true" response \(R_{t}^{H}\). She selects \(R_{t}^{H}\) from a set of pre-authored responses written in Artificial Intelligence Markup Language (AIML), based on patterns in \(U_{t}^{H}\) and context \(C_{t}\). We set \(C_{1}\) to none and \(C_{t+1}\) to \(R_{t}^{ASR}\), the response to \(U_{t}^{ASR}\), because \(U_{t+1}^{H}\) is a reply to \(R_{t}^{ASR}\). If there is no matching pattern in \(U_{t}^{H}\) given \(C_{t}\), Emma returns an utterance randomly selected from a set of _generic_ responses \(G\) that can make sense in any context, such as _"I think I get it. What do I do next?"_. The size of \(G\) was 28. The responses in \(G\) do not change Emma's state, meaning that, unlike non-generic responses, they do not let students move to the next step of the problem. Finally, we calculated the proportions of the turns in which \(R_{t}^{H}\) differs from \(R_{t}^{ASR}\) to the total number of turns for each student (i.e. \(P(R_{t}^{H}\neq R_{t}^{ASR})\), the proportion of **Overall** dialogue misrecognition). We did not use raw error counts, in order to normalize for the differences in the number of turns across students. In the case of students in pairs, we did not use the turns where their partner spoke to calculate the proportions but did use those turns to define \(C_{t+1}\) and update Emma's dialogue manager. Note that we treated \(R_{t}^{H}\) and \(R_{t}^{ASR}\) as the same when both are from \(G\), even if their surface forms are different. An example of the calculation of the proportions is shown in Table 1. \begin{table} \begin{tabular}{l|l|l|l} Speaker & Label & Utterance & Results \\ \hline A & \(U_{1}^{ASR}\) & what is the ratio between the volume of paint and the surface area & AIML pattern: no match \\ \cline{2-4} & \(U_{1}^{H}\) & Emma, what is the ratio between the volume of paint and the surface area? & AIML pattern: no match \\ \hline Emma & \(R_{1}^{ASR}\) & _I’m still learning this. I don’t get it._ & \\ \cline{2-4} & \(R_{1}^{H}\) & _I’m not sure actually._ & \(R_{1}^{ASR}=R_{1}^{H}\) \\ \hline B & \(U_{2}^{ASR}\) & the ratio between surface area and volume is 622 & AIML pattern: no match \\ \cline{2-4} & \(U_{2}^{H}\) & The ratio between surface area and volume is 6 to 2. & AIML pattern: “ratio... 6... 2” \\ \hline Emma & \(R_{2}^{ASR}\) & _Can you explain a little more?_ & \(R_{2}^{ASR}\neq R_{2}^{H}\) \\ \cline{2-4} & \(R_{2}^{H}\) & 6 to 2 is the same ratio I used for step one. But the ratio of 1 to 3 seems an easier place to start? & \\ \end{tabular} \end{table} Table 1: Example interaction between students in a pair and Emma. Her responses in italics come from the set of generic responses \(G\). In this example, each student had one turn, and Emma misrecognized student B’s input. Therefore, \(P(R_{t}^{H}\neq R_{t}^{ASR})=0\) for student A and \(P(R_{t}^{H}\neq R_{t}^{ASR})=1\) for student B. We further categorized dialogue misrecognition into three cases: * **Prevented**: \(P(R_{t}^{H}\notin G\wedge R_{t}^{ASR}\in G)\). This means the student could not move to the next step because Emma misrecognized their input. * **Different**: \(P(R_{t}^{H}\neq R_{t}^{ASR}\wedge R_{t}^{H}\notin G\wedge R_{t}^{ASR}\notin G)\). This often implies Emma suggested a different way to solve a problem from what the student said. * **Proceeded**: \(P(R_{t}^{H}\in G\wedge R_{t}^{ASR}\notin G)\). This represents the case where Emma went to the next step by accident due to misrecognition. ## 3 Results and Discussion We examined how dialogue misrecognition is related to students' rapport with Emma and learning gain by running correlation analyses. We used Pearson's correlations for rapport and partial correlation for post-test scores controlled by pre-test scores because pre-test scores were positively correlated with post-test scores (\(r=.443\), \(p=.005\)). Table 2 summarizes our analysis of the 38 students from both conditions. **Overall** misrecognition is not correlated with either rapport (\(r=-.016\), \(p=.922\)) or learning (\(r=.246\), \(p=.142\)). Of the three types of dialogue misrecognition, **Different** was the highest. None of these types was significantly correlated with learning or rapport. However, **Prevented** is marginally negatively correlated with rapport (\(r=-.319\), \(p=.051\)). Our results provide another piece of evidence that misrecognition by teachable agents is not necessarily relevant to learning gains or learners' perception of agents. Furthermore, this implies that an optimal error-recovery strategy for teachable agents should prefer moving to the next step, assuming that learners give reasonable inputs, rather than expressing they do not understand the inputs.
This may sound counterintuitive because it may deprive learners of opportunities to realize their misunderstanding. Still, this disadvantage may be canceled out by exposure to correct solutions and more problems because the correlation between learning and the proportion that Emma returns a non-generic response when she is supposed to return a generic one is not significant. This policy may also aid inclusion because it can avoid generic responses stemming from ASR's poor performance in accented speech and minoritized dialects. One limitation of this study is that many participants were at the ceiling (6 students scored 100% on the pre-test) and thus did not learn as part of the study, reducing our ability to examine correlations between dialogue misrecognition and learning. Another limitation is that our dialogue system is not a state-of-the-art end-to-end neural model. We used the Web Speech API for speech recognition off the shelf, which yielded a .226 word error rate on average \((SD=.102)\)3, and performed only pattern matching to decide Emma's response. Yet, our dialogue misrecognition measures can be used for end-to-end models that do not have internal components such as NLU. Also, due to the small sample size (\(n=38\)), we lack statistical power and could not include demographic variables or the experimental conditions as covariates. This stopped us from analyzing the effect of witnessing dialogue misrecognition encountered by a partner. Footnote 3: Word error rates were not correlated with rapport (\(r=.196\), \(p=.239\)), learning (\(\rho=.246\), \(p=.142\)), or overall dialogue misrecognition (\(r=-.155\), \(p=.354\)). We proposed new measures of dialogue misrecognition to explore how changes in a conversation flow caused by errors that propagate through a dialogue system are related to rapport with teachable agents and learning gain. Our results indicate these changes are not linked to learning or rapport. This implies we do not need a sophisticated dialogue system with little misrecognition for teachable agents and that an optimal error-recovery policy can be as simple as presuming inputs from learners are reasonable. Future research can test how this policy affects rapport, learning, and other outcomes such as engagement.
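A minimal sketch of how the dialogue-misrecognition proportions defined in Sec. 2.2 could be computed from per-turn records follows; the example records and the generic-response flags are hypothetical and not data from the study.

```python
# each turn holds Emma's response to the ASR input and to the human transcript,
# plus whether each response came from the generic set G (hypothetical records)
turns = [
    {"r_asr": "I'm still learning this.", "r_h": "I'm not sure actually.", "asr_generic": True, "h_generic": True},
    {"r_asr": "Can you explain a little more?", "r_h": "6 to 2 is the same ratio ...", "asr_generic": True, "h_generic": False},
    {"r_asr": "So I multiply by three?", "r_h": "So I multiply by three?", "asr_generic": False, "h_generic": False},
]

def proportions(turns):
    n = len(turns)
    def differ(t):
        # two generic responses count as equal even if their surface forms differ
        if t["asr_generic"] and t["h_generic"]:
            return False
        return t["r_asr"] != t["r_h"]
    overall = sum(differ(t) for t in turns) / n
    prevented = sum((not t["h_generic"]) and t["asr_generic"] for t in turns) / n
    different = sum(differ(t) and not t["h_generic"] and not t["asr_generic"] for t in turns) / n
    proceeded = sum(t["h_generic"] and not t["asr_generic"] for t in turns) / n
    return overall, prevented, different, proceeded

print(proportions(turns))  # (0.33..., 0.33..., 0.0, 0.0) for this toy example
```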
\begin{table} \begin{tabular}{c|c|c|c} Misrecognition types & Mean (SD) & Rapport (p-value) & Learning (p-value) \\ \hline **Overall** &.146 (.080) & -.016 (.922) &.246 (.142) \\ \hline **Prevented** &.026 (.031) & -.319 (.051) &.188 (.488) \\ \hline **Different** &.103 (.070) &.141 (.399) &.271 (.104) \\ \hline **Proceeded** &.017 (.025) & -.056 (.737) & -.131 (.440) \\ \end{tabular} \end{table} Table 2: Descriptive statistics of the proportions of dialogue misrecognition and its correlations with rapport with Emma (Pearson’s r, \(df=36\)) and post-test scores (partial correlation controlled by pre-test scores, \(df=35\)). **Different** type was the most likely. No correlation was significant. ## 4 Acknowledgments We would like to thank anonymous reviewers for their thoughtful comments on this paper. This work was supported by Grant No. 2024645 from the National Science Foundation, Grant No. 220020483 from the James S. McDonnell Foundation, and a University of Pittsburgh Learning Research and Development Center internal award.
2307.10557
Fast Current Regulation and Persistent Current Maintenance of High-Temperature Superconducting Magnets with Contact Power Supply and Flux Pump
Due to the properties of high temperature superconducting (HTS) materials, current attenuation is inevitable during the closed-loop operation of HTS magnets. When a contact DC power supply is used to supplement this attenuation, it inevitably creates a huge thermal burden on the cryogenic system. The flux pump is a revolutionary new power source that can charge a closed-loop HTS magnet wirelessly. However, for HTS magnets with a large inductance, such as particle accelerator magnets and magnetic confinement magnets in Tokamak devices, the flux pump cannot rapidly adjust the DC current of the magnet, due to its small DC output voltage. Here, we present a method to rapidly regulate the current in a closed-loop HTS magnet using a contact DC power supply and persistent current switch (PCS). After current regulation, the HTS magnet is operated in the persistent current mode (PCM) with a flux pump. By applying the "four-quadrant" control theory of the flux pump, the current in the HTS magnet is controlled with high stability. This study provides a power strategy for the fast current regulation and maintenance of persistent current in the HTS magnet, enabling the industrial applications of flux pumps for HTS magnets with large inductance.
Chenghuai Wu, Wei Wang, Run Long, Hong Li, Li Zhou, Peng Liu
2023-07-20T03:49:58Z
http://arxiv.org/abs/2307.10557v1
Fast Current Regulation and Persistent Current Maintenance of High-Temperature Superconducting Magnets with Contact Power Supply and Flux Pump ###### Abstract Due to the properties of high temperature superconducting (HTS) materials, current attenuation is inevitable during the closed-loop operation of HTS magnets. When a contact DC power supply is used to supplement this attenuation, it inevitably creates a huge thermal burden on the cryogenic system. The flux pump is a revolutionary new power source that can charge a closed-loop HTS magnet wirelessly. However, for HTS magnets with a large inductance, such as particle accelerator magnets and magnetic confinement magnets in Tokamak devices, the flux pump cannot rapidly adjust the DC current of the magnet, due to its small DC output voltage. Here, we present a method to rapidly regulate the current in a closed-loop HTS magnet using a contact DC power supply and persistent current switch (PCS). After current regulation, the HTS magnet is operated in the persistent current mode (PCM) with a flux pump. By applying the "four-quadrant" control theory of the flux pump, the current in the HTS magnet is controlled with high stability. This study provides a power strategy for the fast current regulation and maintenance of persistent current in the HTS magnet, enabling the industrial applications of flux pumps for HTS magnets with large inductance. Flux pump, YBCO wire, persistent current mode (PCM), superconducting magnet, wireless power transfer. ## I Introduction Second-generation (2G) high temperature superconducting (HTS) wires exhibit outstanding performance in strong magnetic fields and have excellent mechanical strength and flexibility [2, 3]. In recent years, HTS magnets wound from HTS wires have made significant advancements in high-field magnets [4, 5], nuclear magnetic resonance imaging magnets [6-8], magnetic levitation [9, 10], and superconducting motors [11-13], etc. However, HTS magnets cannot operate in a closed loop, or in the PCM, like low temperature superconducting (LTS) magnets, due to the inevitable soldering resistance [14] and the low \(n\) value in their superconducting \(E\)-\(J\) power-law relationship [15, 16]. If a traditional contact DC power supply is used to maintain the operating current, the presence of current leads causes huge heat leakage into the cryogenic system, and their resistance also generates extra joule heat, resulting in extremely high energy consumption. One potential solution to this problem is to use an HTS flux pump to wirelessly inject a large DC current into the closed-loop HTS magnet and operate in the PCM, thus eliminating the need for current leads and a contact power supply and decreasing the energy consumption by several orders of magnitude compared with a conventional contact DC power supply. Over the last decade, a variety of HTS flux pumps have been developed to wirelessly charge HTS magnets, including HTS dynamos [17-19], linear-motor type flux pumps [20, 21], linear-pulse field flux pumps [22, 23], and transformer-rectifier flux pumps [24-28]. Of these, the linear-motor type flux pump, the HTS dynamo and linear-pulse field flux pumps are categorized as travelling wave flux pumps [29]. Despite the prevalence of travelling wave flux pumps, the origin of their DC output has remained a theoretical challenge since their discovery. This is because it cannot be clearly explained by the induction law.
Inspired by Giaever's "DC Transformer" experiment [30, 31], Wang [32] proposed macroscopic magnetic flux coupling theory to explain the source of the DC electromotive force of the travelling wave flux pump. The theory explains that the moving magnetic pole generated by the travelling wave flux pump can couple a large number of vortices on the superconducting films, and drag the vortices to move in a predetermined direction, thereby generating a DC electromotive force, given as: \[\vec{E}=\vec{B}\times\vec{v}_{f} \tag{1}\] where \(\vec{B}\) is the flux density of coupled vortices, and \(\vec{v}_{f}\) is the velocity of the travelling magnetic pole. Relying on Eqn.(1), Wang [1] then introduced a "four-quadrant" control method to accurately control the DC output of HTS travelling wave flux pumps. By controlling the direction of the travelling magnetic wave or the DC bias magnetic field, accurate control of the pumped current in the magnet is achieved, making the method promising for applications where high current accuracy is crucial. For example, in a reported 14 T MRI, 1400 A of current need to be passed to a superconducting magnet with a 300 H inductance, and the current ripple cannot exceed 1 ppm [33, 34]. In this study, we controlled the switch of the DC bias coil of the flux pump via feedback, allowing to maintain PCM at any value in both directions below the maximum output current value of the flux pump. The current ripple is less than 5%, making it suitable for most applications involving superconducting magnets. These results validates the viability of the "four-quadrant" control theory for achieving high-precision control of current ripple in travelling wave flux pumps. On the other hand, compared with conventional contact DC power supplies, the travelling wave flux pumps have relatively low output voltage [35], which have relatively slow current regulation rate for the HTS magnet with large inductance. However, in some application scenarios, fast excitation [36] and demagnetization [37, 38] or even fast current regulation [39-41] is necessary. For instance, in the cyclotron magnet, the superconducting magnet is required to be excited smoothly to 4-6 T within 1-10 s, and in the Tokamak device, superconducting magnets with caliber of 10-20 m are required to be excited to the maximum field intensity of 3-10 T within 1-20 s. To cope with this problem, here we propose a method to fast adjust the current in an HTS magnet operated in the PCM. During the current regulation stage, the contact DC power supply and PCS are used for fast current regulations. During the closed-loop operation phase, the flux pump is used to supplement the current attenuation caused by the soldering resistance. This method combines the advantages of contact DC power supplies and flux pumps, which can realize both the fast current regulation and PCM operation for HTS magnets. The research results of this work can provide a novel power strategy to enable the use of flux pumps for HTS magnets with large inductances, thus decrease the energy consumption by several orders of magnitude, which may accelerate the broad application of HTS magnets. ## II Working Principle ### Process and circuit principles for fast current regulation and maintain PCM operation The power strategy proposed in this work enables the fast switching between two operation modes: fast current regulation and the PCM operation. The equivalent circuit is shown in Fig. 
1, which contains two circuit loops: the first is the current regulation loop composed of a contact DC power supply \(I_{s}\) and the HTS coil \(L_{DPC}\); the second is the PCM operation loop comprised of the flux pump, the HTS bridge (stator) and the HTS coil \(L_{DPC}\). The switching between the two operation modes is controlled by two switches \(S_{1}\) and \(S_{2}\), where \(S_{2}\) is the PCS controlled by a heater, i.e., when the heater is switched on, the HTS bridge becomes normal state and the PCS is "open". \(R_{j1}\) and \(R_{j2}\) are soldering resistances, \(R_{c}\) is the resistance of the current leads, and \(L_{DPC}\) is the inductance of the HTS DPC. \(I_{L}\) is the current through the DPC and \(I_{2}\) is the current through the bridge when switch \(S_{2}\) is closed. \(V_{dc}\) and \(R_{d}\) [42] are the DC _e.m.f._ and the internal resistance generated by the flux pump, and \(S_{3}\) and \(S_{4}\) are the switches of the DC bias coil and the AC coil of the flux pump, respectively. Based on Eqn. (2) and (3), switching on the DC bias coil and the AC coil of the flux pump, which is equivalent to opening \(S_{3}\) and \(S_{4}\), switches on the DC _e.m.f._ \(V_{dc}\), and current is pumped into the closed loop. It is worth noting that the flux pump generates a DC _e.m.f._ in the closed loop only when the DC bias coil and the AC coil are opened at the same time; opening only the DC bias coil will not generate a DC _e.m.f._ Fig. 1: Equivalent circuit diagram of the fast current regulation and PCM operation of an HTS magnet. **1) Fast excitation and maintaining PCM operation.** Fig. 2 shows the operation procedure and circuit principle of fast excitation and maintaining PCM operation. First, turn on the heater (open the switch \(S_{2}\)), turn on the contact power supply (close the switch \(S_{1}\)), and adjust the current of the contact DC power supply \(I_{s}\) so that \(I_{L}\) reaches its target value. The purpose of this step is to use the contact power supply for fast excitation of the DPC. Second, after the fast excitation is completed, it is necessary to switch to the closed-loop operation mode. Turn off the heater (close the switch \(S_{2}\)) and slowly reduce the contact DC power supply current \(I_{s}\) to 0 A. In this process, \(I_{s}+I_{2}=I_{L}\), where \(I_{L}\) remains unchanged due to the inductance of the DPC; as \(I_{s}\) decreases to 0 A, \(I_{2}\) becomes equal to \(I_{L}\), and the DPC enters closed-loop operation. Finally, disconnect the contact power supply (open the switch \(S_{1}\)) and turn on the flux pump power supply (open the switches \(S_{3}\) and \(S_{4}\)) to maintain PCM operation. Fig. 2: The operation procedure and circuit principle of fast excitation and maintaining PCM operation. **2) Fast demagnetization.** Fig. 3 shows the operation procedure and circuit principle of fast demagnetization. First of all, ensure that the closed-loop magnet is in the natural decay state; if the flux pump is turned on, it needs to be turned off first (keep the switches \(S_{3}\) and \(S_{4}\) closed). Secondly, turn on the contact power supply (close the switch \(S_{1}\)) and adjust its output until the current \(I_{2}\) is 0 A. In this process, \(I_{s}+I_{2}=I_{L}\), where \(I_{L}\) remains unchanged due to the inductance of the DPC; as \(I_{2}\) decreases to 0 A, \(I_{s}\) becomes equal to \(I_{L}\), and the DPC enters open-loop operation. Finally, turn on the heater (open the switch \(S_{2}\)) and reduce the current of the contact power supply to 0 A.
As a result, \(I_{L}=I_{s}=0\) A, and the fast demagnetization is complete. Fig. 3: The operation procedure and circuit principle of fast demagnetization. **3) Fast current adjustment and maintaining PCM operation.** Fig. 4 shows the operation procedure and circuit principle of fast current adjustment and maintaining PCM operation. First of all, ensure that the closed-loop magnet is in the natural decay state; if the flux pump is turned on, it needs to be turned off first (keep the switches \(S_{3}\) and \(S_{4}\) closed). Secondly, turn on the contact power supply (close the switch \(S_{1}\)) and adjust its output until the current \(I_{2}\) is 0 A. In this process, \(I_{s}+I_{2}=I_{L}\), where \(I_{L}\) remains unchanged due to the inductance of the DPC; as \(I_{2}\) decreases to 0 A, \(I_{s}\) becomes equal to \(I_{L}\), and the DPC enters open-loop operation. Third, turn on the heater (open the switch \(S_{2}\)) and adjust the current of the contact power supply to the desired value. In this process, \(I_{L}=I_{s}\), completing the fast regulation of the current in the DPC. Fourth, turn off the heater (close the switch \(S_{2}\)) and slowly reduce the contact DC power supply current \(I_{s}\) to 0 A. In this process, \(I_{s}+I_{2}=I_{L}\), where \(I_{L}\) remains unchanged due to the inductance of the DPC; as \(I_{s}\) decreases to 0 A, \(I_{2}\) becomes equal to \(I_{L}\), and the DPC enters closed-loop operation. Finally, disconnect the contact power supply (open the switch \(S_{1}\)) and turn on the flux pump power supply (open the switches \(S_{3}\) and \(S_{4}\)) to maintain PCM operation. Fig. 4: The operation procedure and circuit principle of fast current adjustment and maintaining PCM operation. ### _Working principle of linear-motor type flux pump_ The linear-motor type flux pump is mainly composed of AC coils and a DC bias coil, which generate a DC-biased AC travelling magnetic wave in the air gap, as shown in Fig. 5. This wave can be expressed by a one-dimensional wave equation [43] as: \[B_{y}(x,t)=B_{ac}\sin(kx+\omega t)+B_{dc} \tag{2}\] where \(B_{ac}\) is the amplitude of the AC travelling magnetic wave, \(B_{dc}\) is the value of the DC bias field, \(k=\frac{2\pi}{\lambda}\) is the wave number, \(\lambda\) is the wavelength, \(\omega=2\pi f\) is the angular frequency, and \(f\) is the frequency. The DC-biased AC travelling magnetic wave generated by the travelling wave flux pump can couple a large number of superconducting vortices on the superconducting stator and drag them in a predetermined direction. According to Eq. (1), the motion of the vortices in a predetermined direction results in an output DC _e.m.f._, expressed as: \[V_{dc}=l_{eff}B_{y}\times\nu_{x} \tag{3}\] where \(V_{dc}\) is the averaged DC output _e.m.f._, \(l_{eff}\) is the effective length along the YBCO stator, \(B_{y}\) is the average flux density of the coupled vortices, and \(\nu_{x}\) is the velocity of the travelling magnetic pole. ### _"Four-quadrant" control theory accurately controls DC output_ Relying on Eqn. (2) and Eqn. (3), a "four-quadrant" method was introduced by Wang [1] to accurately control the DC output of HTS travelling wave flux pumps, as demonstrated in Fig. 6: by reversing either the direction of the DC bias field \(B_{dc}\) or the travelling direction \(\nu_{x}\), the DC output _e.m.f._ \(V_{dc}\) of the HTS travelling wave flux pump can be reversed. Based on the "four-quadrant" control method, we designed a feedback control algorithm to control the on/off state of the DC bias coils of the flux pump.
We first run the flux pump at its maximum output capacity, i.e., ensuring \(|B_{dc}|=B_{ac}\) [43]. Then a feedback control program is used to control the DC bias coils of the flux pump to ensure that the pumped current is kept at the preset current \(I_{preset}\) within an allowed fluctuation \(\Delta I\), so that the current can take any preset value and be stabilized there, as shown in Fig. 7. To accomplish this, the feedback program tracks the difference between the output current and the preset current in real time: when the output current is less than the preset value, the DC bias coils are turned on for charging, and when it is greater than the preset value, the DC bias coils are turned off so that the current decays. Fig. 5: The applied magnetic field \(B_{y}\) is perpendicular to the superconducting stator and the pumped current is perpendicular to the 2D plane. Fig. 6: Schematic diagram of the "four-quadrant" control. This control method was proposed by Wang [1] to accurately control the DC output of HTS travelling wave flux pumps, based on the theory of the macroscopic magnetic coupling effect. Fig. 7: Block diagram of the feedback control. By comparing the difference between the pumped current and the preset current, we perform real-time feedback control of the pumped current. ## III Experimental ### Experimental setup In the experiments, the entire closed-loop HTS coil shown in Fig. 8 is submerged in liquid nitrogen at 77 K. A contact DC power supply is connected to the terminals of the HTS coil by two current leads, as shown in Fig. 8, to rapidly excite or demagnetize the HTS coil. After fast current regulation, the flux pump is used to maintain PCM operation of the superconducting magnet, and the heater is used to switch the PCS on and off. A 12 mm wide YBCO wire made by _Shanghai Superconductor Technology Company_ [44] is used as the stator in the air gap of the flux pump. An HTS closed loop is formed by connecting the DPC and the superconducting stator by soldering, and the measured soldering resistance is 30 \(n\Omega\). In addition, two Hall sensors are installed in the middle of the DPC and of the C-shaped iron ring to measure the magnetic field. From the measured magnetic field, the current in both the DPC and the stator can be obtained. Detailed parameters of the HTS DPC, the stator and the heater are shown in Table I. ### _Power Supplies, Measurement and Control System_ The AC coils of the linear-motor type flux pump are powered by a three-phase inverter. The DC bias coils of the linear-motor type flux pump are powered by a programmable DC power supply. The current leads are connected to a contact DC power supply (Keysight 6680A), which can output a maximum current of 875 A. The two Hall sensors, installed in the centers of the DPC and the C-ring to measure current, are powered by a precise DC current supply. The data acquisition system is mainly composed of a data acquisition instrument (Agilent 34972A) and National Instruments PCI-4070 cards, which read the voltages from the two Hall sensors and convert them to the value of the pumped current. To accurately control the closed-loop current, the measured current data are transmitted to a LabVIEW feedback control program. Based on the algorithm shown in Fig. 7, the LabVIEW program controls the on/off state of the DC power supply connected to the DC bias coil of the flux pump.
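As an illustration of this bang-bang logic, the following Python sketch simulates the feedback loop of Fig. 7 against a minimal lumped-circuit model of the closed loop. The inductance and soldering resistance are values quoted in this paper, while the flux-pump _e.m.f._ and internal resistance are illustrative assumptions (the _e.m.f._ is chosen to be roughly consistent with the charging rate reported in Section IV); this is a sketch of the control idea, not a model of the actual apparatus.

```python
# Minimal sketch of the bang-bang feedback of Fig. 7 on a lumped-circuit model:
#   L dI/dt = V_dc - I*(R_d + R_j)  when the DC bias coil is on (flux pump pumping),
#   L dI/dt = -I*R_j                when it is off (natural decay through the joint).
L = 2.52e-3        # coil inductance [H], value of the test DPC quoted in Section IV
R_j = 30e-9        # measured soldering resistance [Ohm]
R_d = 100e-9       # assumed internal resistance of the flux pump [Ohm]
V_dc = 0.11e-3     # assumed flux-pump DC e.m.f. [V], ~consistent with 69 A in 1614 s
I_preset, dI = 30.0, 0.15   # preset current [A] and allowed fluctuation [A]

I, dt, bias_on = 0.0, 0.1, True
history = []
for _ in range(int(2000 / dt)):            # simulate 2000 s
    if I < I_preset - dI:
        bias_on = True                     # below the band: turn the DC bias coil on
    elif I > I_preset + dI:
        bias_on = False                    # above the band: turn it off, let I decay
    emf, R_loop = (V_dc, R_d + R_j) if bias_on else (0.0, R_j)
    I += dt * (emf - I * R_loop) / L
    history.append(I)
print(f"final current: {history[-1]:.2f} A (preset {I_preset} +/- {dI} A)")
```

Because the decay through the 30 nΩ joint is far slower than the charging rate, the simulated current rises to the preset value and then oscillates inside the allowed band, mirroring the behaviour targeted by the LabVIEW program.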
## IV Results and Discussion ### _Fast excitation and maintaining PCM operation_ The conventional contact DC power supply has the advantage of a fast excitation speed, while the flux pump has the advantage of maintaining PCM operation. We combine the advantages of the two power supplies and propose a strategy of "fast regulation with the contact DC power supply and PCM maintenance with the flux pump". In order to visually demonstrate the effect of the flux pump in maintaining PCM operation, we used the contact DC power supply to rapidly excite the HTS magnet, and then either let the current attenuate in the closed loop or used the flux pump to compensate the current losses; the latter enables PCM operation of the closed-loop HTS magnet. The experimental operation and circuit principle of fast excitation and maintaining PCM operation are described in Section II-A. The experimental results are shown in Fig. 9. In the experiment where only the contact power supply was used, the current in the magnet attenuated rapidly after switching to closed-loop operation, due to the soldering resistances in the loop. However, in the experiment using the contact power supply and the flux pump, fast excitation was realized with the help of the contact power supply, and after switching to the flux pump to maintain PCM operation, the closed-loop current remained constant, with only a small change during the switching process. Fig. 8: The connection of the experimental apparatus. The current leads are used to rapidly adjust the current in the DPC, the heater is used as the PCS, and the flux pump is used to maintain the HTS magnet's PCM operation. Fig. 9: Experimental results of the fast excitation and PCM maintenance experiment. When the flux pump was used to maintain PCM operation, no current attenuation was observed; the current decayed rapidly in the experiment without a flux pump. The conventional contact power supply can be used to excite an HTS magnet rapidly, but the current leads must then be withdrawn because of the huge energy consumption caused by using the contact power supply to maintain the current in the HTS magnet. However, due to the soldering resistance, stable closed-loop operation of the HTS magnet cannot then be maintained without compensation. The results of this comparison experiment show that using the contact power supply for fast excitation and then the flux pump to maintain PCM operation is an ideal solution for HTS magnets with large inductances that require both fast excitation and PCM operation. ### _Comparison of excitation and demagnetization speeds_ According to the "four-quadrant" control theory, as shown in Fig. 6, reversing the direction of the DC bias field \(B_{dc}\) reverses the DC output _e.m.f._ \(V_{dc}\) of the HTS travelling wave flux pump. Therefore, when the HTS magnet needs to be demagnetized, the current in the magnet can be reverse-charged to 0 A by reversing the direction of the DC bias field \(B_{dc}\). We then designed experiments to compare the excitation and demagnetization speeds of the proposed power strategy, which combines a conventional contact power supply and a flux pump, with the strategy of using only a flux pump. The experimental operation and circuit principles of fast excitation, PCM maintenance and fast demagnetization are described in Section II-A. Fig. 10 shows the experimental results. It takes 1614 s for the flux pump to pump 69 A into the closed-loop HTS magnet.
For comparison, it takes only 98 s to excite the HTS magnet to 69 A with the contact DC power supply. Similarly, we compared the demagnetization speeds of the two strategies: the flux pump takes 299 s to demagnetize the current from 69 A to 0 A, while the contact DC power supply takes only 68 s. The excitation speed of the contact DC power supply is more than 16 times that of the flux pump, and the demagnetization speed is more than 4 times. In addition, the excitation and demagnetization speed of the contact DC power supply can be further improved by outputting a higher voltage. Therefore, with the help of the contact DC power supply, the excitation and demagnetization speed of HTS magnets can be greatly improved. The inductance of the HTS magnet used in this experiment is only 2.52 mH, yet the excitation with the flux pump is already 16 times slower than with the contact power supply. If the flux pump is used to excite an HTS magnet with a large inductance, its excitation and demagnetization speeds can hardly meet the industrial requirements. For instance, the superconducting magnet of a cyclotron is required to be excited smoothly to 4-6 T within 1-10 s, and fast demagnetization is required for ITER magnets [36, 37]. The proposed combined power strategy may have the capability to solve this problem. ### _Fast bipolar excitation and maintaining PCM operation_ The above experiments demonstrate that the combined use of a contact DC power supply and a flux pump can achieve fast excitation/demagnetization while maintaining PCM operation of the HTS magnet after current regulation. In applications, the load current in the HTS magnet needs to be controlled to a preset value with a certain accuracy, in which case feedback control of the flux pump's output current is required. For instance, in nuclear magnetic resonance imaging, the flux pump should not only compensate the magnetic field attenuation but also minimize the ripple of the magnetic field to meet the required magnetic field stability. Fig. 10: Comparison of the excitation and demagnetization speeds of the two power supplies. The excitation speed of the contact power supply is 16 times faster than that of the flux pump, and the demagnetization speed is 4 times faster. Fig. 11: Experimental results of fast excitation to arbitrary bipolar values and PCM maintenance. Based on the "four-quadrant" control theory of the travelling wave flux pump [1], controlling the direction of the DC bias field \(B_{dc}\) and the travelling direction \(v\) of the AC travelling wave controls the direction of the output _e.m.f._ of the flux pump, as shown in Fig. 6. In particular, the output current can be controlled through the on/off states of the DC bias coil or the AC coils, and the control flowchart is shown in Fig. 7. In this section, the feedback current control of the flux pump based on the "four-quadrant" control theory is utilized to maintain PCM operation after fast current regulation. The experimental results for fast charging and PCM maintenance are shown in Fig. 11. In the experiments, we preset the target current in the feedback control program, such as \(+10\) A, \(+30\) A, \(+50\) A and \(+69\) A, respectively, then use the contact power supply to rapidly excite the magnet to the target value, and then switch to the flux pump with the feedback control program enabled to maintain the persistent current at the target value.
For reverse charging, the operation process is the same as above; the only difference is that the direction of the DC bias field \(B_{dc}\) is reversed. The procedure and circuit principle of fast excitation and PCM maintenance are described in Section II-A. The experimental results show that the HTS magnet can be rapidly excited to any preset value in either direction by using the contact power supply, and its persistent current can then be maintained by the flux pump based on the feedback control algorithm. In addition, we checked the current fluctuation when the current was maintained at 30 A and found that the fluctuation was only 0.15 A and the current stability was within 5%, which is consistent with the preset accuracy. This experiment verifies that, with the feedback current control based on the "four-quadrant" control theory, the travelling wave flux pump can work as a bipolar power supply to maintain PCM operation with high current precision. ### _Fast current regulation and maintaining PCM operation_ In several applications, such as the excitation coils of superconducting machines and cyclotron magnets, fast regulation of the operating current is necessary. For instance, in proton and heavy ion therapy, the current in the cyclotron magnet needs to be adjusted rapidly in order to kill cancer cells at different depths [41]. In this section, we demonstrate switching from the flux pump to the contact power supply to enable fast current regulation, so that the closed-loop current in the HTS magnet can be rapidly changed or even reversed. As shown in Fig. 12, a contact power supply is used to rapidly excite the magnet, and the circuit is then switched to the flux pump to maintain PCM operation. After maintaining the persistent current for a period of time, the circuit is switched back to the contact power supply to rapidly adjust the current to any desired value. In the experiments, we demonstrate fast adjustment to \(+50\) A, \(+30\) A and \(+10\) A, demagnetization to 0 A, and reverse adjustment to -10 A, -30 A, -50 A, and -69 A. After the contact power supply rapidly adjusts the current, the circuit is switched to the flux pump with feedback control to maintain the persistent current in the PCM, with a current ripple of less than 0.15 A. See Section II-A for the operating procedure and circuit principle of fast excitation, PCM maintenance and fast current adjustment. The above experiments prove that the persistent current maintained by a flux pump in a closed-loop HTS magnet can be rapidly regulated with a contact power supply, and that the PCM operation mode and the fast current regulation mode can be switched rapidly. This power strategy incorporates the advantages of both the conventional contact power supply and the flux pump. Although the conventional power supply has a very high energy consumption, the current regulation only lasts for a short period of time, after which the circuit is switched back to the ultra-low-energy-consumption PCM operation maintained by the flux pump, where the persistent current shows no attenuation and has high accuracy. ### _Discussion_ HTS magnets are used in many fields, such as accelerator physics, Tokamak devices, proton and heavy ion therapy, nuclear magnetic resonance imaging, and maglev trains. The operating currents in these magnets range from hundreds of amperes to tens of thousands of amperes. If conventional contact power supplies are used to maintain the currents in the HTS magnets, the energy consumption is extremely high, and the current leads bring a heavy thermal load to the cryogenic system. Fig.
13 demonstrates the two different operation modes, i.e., with the contact power supply and with the flux pump, respectively. As shown in Fig. 13(a), the PCM operation maintained by the contact power supply involves the welding resistance \(R_{j}\) and the current lead resistance \(R_{c}\). The energy consumption required by the contact power supply to maintain PCM operation is \(W_{c}=I^{2}(R_{c1}+R_{c2}+R_{j})\). As shown in Fig. 13(b), the PCM operation maintained by the flux pump involves the welding resistance \(R_{j}\) and the internal resistance \(R_{d}\) of the flux pump, which appears in the superconducting loop due to the action of the travelling magnetic wave field on the superconducting wire [42], so the energy consumption of the flux pump during operation is \(W_{fp}=I^{2}(R_{d}+R_{j})\). Fig. 12: Experimental results of fast current adjustment to arbitrary bipolar values and PCM maintenance. When a current of a thousand or ten thousand amperes is required, the resistance of the current leads is generally 4-6 orders of magnitude larger than the internal resistance of the flux pump. As a result, the energy consumption of the two power sources in maintaining PCM operation differs by 4-6 orders of magnitude, and this does not yet take into account the extra heat that the current leads introduce. In practice, the current leads also bring an additional heat burden to the refrigeration equipment as they pass between the low and high temperature environments, as shown in Fig. 13(a). The thermal conductivity of copper at 293 K is 397 W/(m·K), and the conducted heat power also depends on the temperature difference between the warm and cold ends and on the contact area. Superconducting magnets in Tokamak devices often require tens of thousands of amperes of current, so the current leads are very large and the heat leakage they cause cannot be ignored. This highlights the advantage of using a flux pump to maintain PCM operation: it can maintain the closed-loop operation of a high-current magnet without attenuation and with very low energy consumption. In addition, based on the "four-quadrant" control theory, we controlled the on/off state of the DC bias coil of the flux pump by feedback and showed that the flux pump can maintain PCM operation at any positive or negative value, with a ripple of 0.15 A and a current stability of 5%, as shown in Fig. 11. When the flux pump is applied to a medical magnetic resonance system, the research focus is on minimizing the magnetic field ripple while compensating the magnetic field attenuation, so as to meet the magnetic resonance system's requirements for magnetic field stability [33]. At present, because the output current is controlled only by switching the DC bias coil of the flux pump on and off, the AC coil is always on, which causes the superconducting wire to develop an internal resistance under the action of the alternating magnetic field, leading to a relatively large current ripple and decay of the persistent current. Next, we plan to achieve accurate current control by controlling the on/off state of the AC travelling magnetic wave field, which will further improve the current stability and further reduce the energy consumption of the flux pump. This method has great application prospects in fields that require high current stability, such as nuclear magnetic resonance.
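To put rough numbers on this comparison, the short Python sketch below evaluates \(W_{c}\) and \(W_{fp}\) for one set of assumed resistances. Only the soldering resistance is the measured value from Section III; the lead and internal resistances are illustrative order-of-magnitude guesses, and the conductive heat leak through the leads is not included, so the real gap can be larger than the printed ratio.

```python
# Order-of-magnitude comparison of the Joule losses in the two PCM-maintenance modes
# of Fig. 13, using W_c = I^2 (R_c1 + R_c2 + R_j) and W_fp = I^2 (R_d + R_j).
# Only R_j is a measured value from this paper; the other resistances are assumed.
I = 10_000           # operating current [A]
R_c1 = R_c2 = 5e-5   # assumed resistance of each copper current lead [Ohm]
R_j = 30e-9          # measured soldering resistance [Ohm]
R_d = 1e-8           # assumed internal (dynamic) resistance of the flux pump [Ohm]

W_c = I**2 * (R_c1 + R_c2 + R_j)   # contact supply: leads stay in the circuit
W_fp = I**2 * (R_d + R_j)          # flux pump: only R_d and R_j dissipate
print(f"W_c  = {W_c:,.1f} W")
print(f"W_fp = {W_fp:,.3f} W   (ratio ~ {W_c / W_fp:,.0f}x, heat conduction not included)")
```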
Although the flux pump combined with feedback control is able to output current at positive or negative values and maintain PCM operation with high accuracy and stability, the low _e.m.f._ generated by the flux pump means that charging industrial magnets with large inductances by the flux pump alone would be very slow. As shown in Fig. 10, for the 2.52 mH magnet used in this test, the excitation and demagnetization speeds of the flux pump are only 1/16 and 1/4 of those of the contact power supply, which cannot meet the industrial requirement of exciting the magnet to the target magnetic field in a short period of time. In order to solve this problem, this paper proposes the use of the contact power supply in the excitation, demagnetization and current regulation phases, and the use of the flux pump in the PCM maintenance phase. When the contact power supply is used for current regulation, the regulation speed is proportional to the voltage that the power supply can provide. After the target current is reached, the flux pump is used to maintain the persistent current, which significantly reduces the energy consumption compared with continuing to use the contact power supply to sustain the load current. Pluggable current leads can also be used to further decrease the heat leakage into the cryogenic system during PCM operation. As shown in Fig. 12, the current in the magnet can be rapidly adjusted by switching between the closed-loop and open-loop operation modes of the magnet, which is very promising in many application scenarios. For instance, in high gradient magnetic separation devices, the excitation speed generally reaches 2 T/min, the maximum working field intensity is 5 T, and a stable magnetic field needs to be maintained for a long time at different intensities [40]. If this regulation method can be used there, it will be of great significance for reducing operating costs and improving working efficiency. The successful demonstration of this operation mode combines the advantages of both the contact power supply and the flux pump: it can not only rapidly adjust the current of superconducting magnets, but also maintain PCM operation with ultra-low power consumption. It can meet the requirements of fast excitation, demagnetization, current regulation and PCM maintenance of industrial HTS magnets with large inductances. Fig. 13: Two ways to maintain PCM operation. (a) Using a contact power supply to maintain PCM operation, with the associated problems of heat leakage and large energy consumption in the current lead resistance. (b) PCM operation maintained by a flux pump, with only the internal resistance and the welding resistance causing energy consumption. ## V Conclusion This paper proposes the use of a flux pump to maintain PCM operation in HTS magnets, which are used in fields such as accelerator physics, Tokamak devices, proton and heavy ion therapy, nuclear magnetic resonance imaging, and maglev trains. The use of a flux pump greatly reduces the energy consumption of PCM operation, because the resistance and heat conduction of the current leads of a conventional contact DC power supply bring a huge heat burden to the cryogenic system. The on/off state of the DC bias coil of the flux pump is controlled by feedback, so that the flux pump can maintain the PCM of the magnet at a preset positive or negative value with high stability. The current stability is 5% and the current ripple is as low as 0.15 A, which can meet the industrial application requirements
of most HTS magnets. In addition, for industrial magnets with large inductance, such as particle accelerator magnets and the magnetic confinement magnets in Tokamak devices, the current regulation speed of the flux pump is slow, so it is proposed to use the contact DC power supply for fast excitation, fast demagnetization and fast current regulation, while the flux pump is used to maintain PCM operation after the target current is reached. This new operating mode combines the advantages of the contact DC power supply and the flux pump, and can meet the requirements of fast current regulation and PCM maintenance of industrial HTS magnets with large inductance. These results provide a reference for fast current regulation and PCM maintenance of HTS magnets and may accelerate the wide application of HTS magnets and travelling wave flux pumps.
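As a back-of-the-envelope check of the speed argument, the sketch below infers the average driving voltage in each mode from the reported charging of the 2.52 mH test coil to 69 A (using \(V\approx L\,\Delta I/\Delta t\)), and then shows how long the flux pump alone would need for a hypothetical large magnet; the 10 H / 1 kA magnet is an assumed example used only to illustrate the scaling, not a device from this paper.

```python
# For an ideal voltage source, dI/dt = V / L, so the average driving voltage in each
# mode can be inferred from the reported charging of the 2.52 mH test coil to 69 A.
# The 10 H / 1 kA magnet below is an assumed example used only to illustrate scaling.
L_test, dI_test = 2.52e-3, 69.0                         # test coil inductance [H], current swing [A]
times = {"contact supply": 98.0, "flux pump": 1614.0}   # reported charging times [s]
V_avg = {name: L_test * dI_test / t for name, t in times.items()}
for name, v in V_avg.items():
    print(f"{name:>14}: average driving voltage ~ {v * 1e3:.2f} mV")

L_big, I_big = 10.0, 1000.0                             # hypothetical large magnet
t_fp = L_big * I_big / V_avg["flux pump"]               # flux pump alone, same e.m.f.
print(f"flux pump alone: ~{t_fp / 86400:.0f} days to reach {I_big:.0f} A at {L_big:.0f} H,")
print("whereas a contact supply can simply raise its output voltage.")
```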
2304.09556
Geometric Properties of the 2-D Peskin Problem
The 2-D Peskin problem describes a 1-D closed elastic string immersed and moving in a 2-D Stokes flow that is induced by its own elastic force. The geometric shape of the string and its internal stretching configuration evolve in a coupled way, and they combined govern the dynamics of the system. In this paper, we show that certain geometric quantities of the moving string satisfy extremum principles and decay estimates. As a result, we can prove that the 2-D Peskin problem admits a unique global solution when the initial data satisfies a medium-size geometric condition on the string shape, while no assumption on the size of stretching is needed.
Jiajun Tong, Dongyi Wei
2023-04-19T10:52:25Z
http://arxiv.org/abs/2304.09556v2
# Geometric properties of the 2-D Peskin problem ###### Abstract. The 2-D Peskin problem describes a 1-D closed elastic string immersed and moving in a 2-D Stokes flow that is induced by its own elastic force. The geometric shape of the string and its internal stretching configuration evolve in a coupled way, and they combined govern the dynamics of the system. In this paper, we show that certain geometric quantities of the moving string satisfy extremum principles and decay estimates. As a result, we can prove that the 2-D Peskin problem admits a unique global solution when the initial data satisfies a medium-size geometric condition on the string shape, while no assumption on the size of stretching is needed. ## 1. Introduction ### The 2-D Peskin problem In this paper, we study the 2-D Peskin problem, also known as the Stokes immersed boundary problem in 2-D. It describes a 1-D closed elastic string immersed and moving in a 2-D Stokes flow induced by itself. With \(s\in\mathbb{T}=\mathbb{R}/(2\pi\mathbb{Z})\) being the Lagrangian coordinate, we use \(X=X(s,t)\in\mathbb{R}^{2}\) to represent the position of the string point with the Lagrangian label \(s\) at time \(t\). Then the 2-D Peskin problem reads that \[-\Delta u+\nabla p=\int_{\mathbb{T}}F_{X}(s,t)\delta(x-X(s,t))\,ds, \tag{1.1}\] \[\operatorname{div}u=0,\quad|u|,|p|\to 0\text{ as }|x|\to\infty, \tag{1.2}\] \[\frac{\partial X}{\partial t}(s,t)=u(X(s,t),t),\quad X(s,0)=X_{0}(s). \tag{1.3}\] Here (1.1) and (1.2) are the stationary Stokes equation. We use \(u=u(x,t)\) and \(p=p(x,t)\) to denote the flow velocity field and the pressure in \(\mathbb{R}^{2}\), respectively. The right-hand side of (1.1) gives the force applied to the fluid that is singularly supported along the curve \(X(\mathbb{T},t):=\{X(s,t):\,s\in\mathbb{T}\}\). \(F_{X}\) is the elastic force density in the Lagrangian coordinate. In general [1], \[F_{X}(s,t)=\partial_{s}\left(\mathcal{T}(|X^{\prime}(s,t)|,s,t)\frac{X^{\prime}(s,t)}{|X^{\prime}(s,t)|}\right). \tag{1.4}\] For brevity, here and in what follows, we shall use \(X^{\prime}(s,t)\) and \(X^{\prime\prime}(s,t)\) to denote \(\partial_{s}X(s,t)\) and \(\partial_{ss}X(s,t)\), respectively. In (1.4), \(\mathcal{T}\) represents the tension along the string, which is determined by the elastic property of the string material. In the simple case of Hookean elasticity, \(\mathcal{T}(|X^{\prime}(s,t)|,s,t)=k_{0}|X^{\prime}(s,t)|\) with \(k_{0}>0\) being Hooke's constant, so \(F_{X}(s,t)=k_{0}X^{\prime\prime}(s,t)\). (1.3) specifies that each string point should move with the flow. In the early 1970s, aiming at studying blood flows around heart valves, Peskin [1, 2] introduced the general immersed boundary problem as well as the well-known numerical immersed boundary method. More recently, the 2-D Peskin problem has been studied analytically, and its well-posedness was proved with general nonlinear elastic laws. Readers are also directed to [11] for a study on the case where the elastic string has both stretching and bending energy. To better understand the nonlinear structure of the Peskin problem as well as its possible blow-up mechanisms, the first author of this paper recently derived a new scalar model called the tangential Peskin problem [12]. It considers the 2-D Peskin problem in the case of an infinitely long and straight 1-D elastic string deforming only tangentially. Global solutions were constructed for initial data in the energy class \(H^{1}\), and their properties were characterized.
Let us highlight that, despite its very different geometric setup, this problem turns out to have a surprising connection with the original 2-D Peskin problem with a closed circular string; see Theorem 1.2 or Theorem 3.1 below. Let us also mention that the first author of this paper introduced the regularized Peskin problem in [13], and proved its well-posedness and convergence to the original Peskin problem as the regularization diminishes. The Peskin problem has noticeable mathematical similarities with several other evolution free boundary problems, especially the Muskat problem (see [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] and the references therein). Nevertheless, a distinct feature of the Peskin problem is that the elastic string is not just a time-varying curve \(X(\mathbb{T},t):=\{X(s,t)\in\mathbb{R}^{2}:\,s\in\mathbb{T}\}\), but it has internal elastic structure. More precisely, the planar curve \(X(\mathbb{T},t)\) only captures _geometric information_ of the string, with the function \((s,t)\mapsto X^{\prime}(s,t)/|X^{\prime}(s,t)|\) representing its unit tangent vector, whereas the function \((s,t)\mapsto|X^{\prime}(s,t)|\) encodes the _stretching configuration_. Strings with identical shape may have quite different internal stretching configurations, leading to different dynamics. Thus, we need to track both the geometric and stretching information of the string at the same time, while they evolve in a strongly coupled way. ### Main results In this paper, we point out that certain _geometric_ quantities of the curve \(X(\mathbb{T},t)\) enjoy extremum principles and decay estimates. This holds regardless of the stretching configuration of the string. As a result, we can prove that, if the initial data satisfies some medium-size _geometric_ condition in addition to those classic assumptions, the 2-D Peskin problem admits a unique global solution. No additional restriction is needed on size of the stretching \(|X^{\prime}(s,t)|\). We can also show that the global solution converges exponentially to a final equilibrium state as \(t\to+\infty\). In the rest of the paper, we shall treat \(X=X(s,t)\) as a complex-valued function instead of a vector-valued function, i.e., \(X=X_{1}+iX_{2}\) with \(X_{1},X_{2}\in\mathbb{R}\). Then (1.7) can be recast as \[\begin{split}\partial_{t}X(s)&=\frac{1}{4\pi}\text {p.v.}\int_{\mathbb{T}}\text{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{ \prime})-X(s))^{2}}\right]\left(X(s^{\prime})-X(s)\right)ds^{\prime}\\ &=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\frac{X^{\prime}(s^{ \prime})^{2}}{X(s^{\prime})-X(s)}\,ds^{\prime}-\frac{i}{4\pi}\int_{\mathbb{T} }\text{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2}} \right]\left(X(s^{\prime})-X(s)\right)ds^{\prime}.\end{split} \tag{1.9}\] Here and in what follows, the time dependence will get omitted whenever it is convenient. (1.9) is equipped with the initial condition \[X(s,0)=X_{0}(s). \tag{1.10}\] Before we state the main results, let us introduce some notations. Given \(X(\cdot,t)\), we define \(R_{X}>0\) by \[\pi R_{X}^{2}=\frac{1}{2}\int_{\mathbb{T}}\operatorname{Im}\bigl{[}\overline{X( s^{\prime})}X^{\prime}(s^{\prime})\bigr{]}\,ds^{\prime}. \tag{1.11}\] The right-hand side gives the area of the planar region enclosed by the curve \(X(\mathbb{T},t)\), so \(R_{X}\) will be referred as the _effective radius_ of \(X\). 
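As a quick sanity check of (1.11), the following Python sketch evaluates the enclosed-area integral by a Riemann sum on a uniform grid; the ellipse is an arbitrary test curve (not one used in the paper), for which the enclosed area is \(\pi ab\) and hence \(R_{X}=\sqrt{ab}\).

```python
import numpy as np

# Numerical check of the effective radius R_X defined in (1.11):
#     pi * R_X^2 = (1/2) * Integral_T Im[ conj(X(s)) X'(s) ] ds.
# The ellipse below is just a test curve (not from the paper); its enclosed area is
# pi*a*b, so the computed R_X should be close to sqrt(a*b).
a, b = 2.0, 0.5
n = 4000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = 2.0 * np.pi / n
X = a * np.cos(s) + 1j * b * np.sin(s)        # X(s) as a complex number
Xp = -a * np.sin(s) + 1j * b * np.cos(s)      # exact X'(s)
area = 0.5 * np.sum((np.conj(X) * Xp).imag) * ds   # right-hand side of (1.11)
R_X = np.sqrt(area / np.pi)
print(f"R_X = {R_X:.6f}, expected sqrt(a*b) = {np.sqrt(a * b):.6f}")
```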
When \(X(s,t)\) is a sufficiently smooth solution to (1.9), \(R_{X}\) is time-invariant since the flow field in \(\mathbb{R}^{2}\) is divergence-free and the area is conserved (see [3, 4]). Assume that \(X(\cdot,t)\in C^{1}(\mathbb{T})\) satisfies the well-stretched condition defined in (1.8). This implies that \(X(s_{1},t)\neq X(s_{2},t)\) for distinct \(s_{1},s_{2}\in\mathbb{T}\), and that \(\inf_{s\in\mathbb{T}}|X^{\prime}(s,t)|>0\). Then we let \[\Phi(s_{1},s_{2},t):=\begin{cases}\arg\left[\frac{X^{\prime}(s_{1},t)X^{\prime}(s_{2},t)}{(X(s_{1},t)-X(s_{2},t))^{2}}\right],&\text{if $s_{1},s_{2}\in\mathbb{T}$, $s_{1}\neq s_{2}$},\\ 0,&\text{if $s_{1}=s_{2}\in\mathbb{T}$}.\end{cases} \tag{1.12}\] Under the above assumptions, the angle \(\Phi(\cdot,\cdot,t)\) is pointwise well-defined on \(\mathbb{T}\times\mathbb{T}\) with its value in \(\mathbb{T}\). It is fully determined by the geometric shape of \(X(\mathbb{T},t)\), but hardly depends on the internal stretching configuration \(|X^{\prime}|\). If \(X(\mathbb{T})\) is in a perfectly circular shape, then \(\Phi(s_{1},s_{2})\equiv 0\) for all \(s_{1},s_{2}\in\mathbb{T}\); in fact, the converse is also true (see Lemma 3.3). In general, \(\Phi(s_{1},s_{2})\) measures the asymmetry of the curve \(X(\mathbb{T})\) when observed from the points \(X(s_{1})\) and \(X(s_{2})\). It is invariant under translation, rotation, and dilation of the system. In our analysis below, \(\Phi\) will play an extremely important role. Given \(\alpha\in(0,1)\) and a function \(Y:\mathbb{T}\to\mathbb{C}\), we recall definitions of the \(C^{1,\alpha}(\mathbb{T})\)-semi-norm and the \(C^{1,\alpha}(\mathbb{T})\)-norm: \[\|Y\|_{\dot{C}^{1,\alpha}(\mathbb{T})}:=\sup_{s_{1},s_{2}\in\mathbb{T} \atop s_{1}\neq s_{2}}\frac{|Y^{\prime}(s_{1})-Y^{\prime}(s_{2})|}{|s_{1}-s_{2}|_{\mathbb{T}}^{\alpha}},\] \[\|Y\|_{C^{1,\alpha}(\mathbb{T})}:=\|Y\|_{L^{\infty}(\mathbb{T})}+\|Y^{\prime}\|_{L^{\infty}(\mathbb{T})}+\|Y\|_{\dot{C}^{1,\alpha}(\mathbb{T})}.\] We use \(h^{1,\alpha}(\mathbb{T})\) to denote the little Hölder space, which is the completion of \(C^{\infty}(\mathbb{T})\) under the \(C^{1,\alpha}(\mathbb{T})\)-norm. Now we can state our main result. **Theorem 1.1**.: _Suppose that \(X_{0}(s)\in h^{1,\alpha}(\mathbb{T})\) for some \(\alpha\in(0,1)\), and it satisfies the well-stretched condition. Let \(R_{X}\) be defined in (1.11) in terms of \(X_{0}\). Define \(\Phi_{0}(s,s^{\prime})\) in terms of \(X_{0}\) as in (1.12). If \(|\Phi_{0}(s,s^{\prime})|<\pi/4\) for all \(s,s^{\prime}\in\mathbb{T}\), then (1.9) and (1.10) admit a unique global solution \(X=X(s,t)\) in the class \(C_{loc}([0,+\infty);C^{1,\alpha}(\mathbb{T}))\cap C^{1}_{loc}([0,+\infty);C^{\alpha}(\mathbb{T}))\), in the sense that_ \[X(s,t)\text{ satisfies (1.9) in $\mathbb{T}\times(0,+\infty)$, and $X(\cdot,t)\to X_{0}(\cdot)$ in $C^{1,\alpha}(\mathbb{T})$ as $t\to 0$.}\] _Moreover, the solution has the following properties:_ 1. _For any_ \(\delta>0\) _and any_ \(k\in\mathbb{N}\)_,_ \(X\in C^{1}_{loc}([\delta,+\infty);C^{k}(\mathbb{T}))\)_._ 2. _There exists a universal non-negative and strictly increasing function_ \(\lambda_{\circ}(t)\) _defined on_ \([0,+\infty)\)_, with_ \(\lambda_{\circ}(0)=0\)_, such that_ \(X(\cdot,t)\) _satisfies the well-stretched condition with constant_ \(\lambda_{\circ}(t)R_{X}\)_._ 3. _Let_ \(\Phi_{*}(t):=\sup_{s,s^{\prime}\in\mathbb{T}}|\Phi(s,s^{\prime},t)|\)_.
Then_ \(\Phi_{*}(t)\) _is continuous and non-increasing on_ \([0,+\infty)\)_, and_ \[0\leq\Phi_{*}(t)\leq\Phi_{*}(0)\min\big{\{}e^{-\mu t},\,Ce^{-t/\pi^{2}}\big{\}},\] _where_ \(\mu=(4-2\sqrt{2})/\pi^{3}\) _and where_ \(C>0\) _is a universal constant._ 4. _Let_ \(\kappa(s,t)\) _be the curvature of the curve_ \(X(\mathbb{T},t)\) _at the point_ \(X(s,t)\)_, given by_ \[\kappa(s,t)=\frac{\operatorname{Im}[X^{\prime\prime}(s,t)/X^{\prime}(s,t)]}{| X^{\prime}(s,t)|}.\] _Then there exist universal constants_ \(C,c>0\)_, such that for any_ \(t>0\)_,_ \[\|\kappa(s,t)R_{X}-1\|_{L^{\infty}(\mathbb{T})}\leq C\exp\left[C\left(\int_{0} ^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right)^{-1}-ct\right].\] 5. _There exist_ \(x_{\infty}\in\mathbb{C}\) _and_ \(\xi_{\infty}\in\mathbb{T}\)_, such that,_ \(X(\cdot,t)\) _converges exponentially to an equilibrium_ \[X_{\infty}(s):=x_{\infty}+R_{X}e^{i(s+\xi_{\infty})}\] _in_ \(H^{2}(\mathbb{T})\) _as_ \(t\to+\infty\)_. More precisely, for any_ \(\delta>0\)_, there exists a constant_ \(C>0\) _depending on_ \(\delta\) _and_ \(X_{0}\)_, and a universal constant_ \(c>0\) _not depending on_ \(\delta\) _or_ \(X_{0}\)_, such that_ \[\|X(\cdot,t)-X_{\infty}(\cdot)\|_{H^{2}(\mathbb{T})}\leq Ce^{-ct}\] _for all_ \(t\geq\delta\)_._ In addition to the classic regularity assumption and the well-stretched condition on the initial data, Theorem 1.1 only adds a _geometric_ condition \(|\Phi_{0}|<\pi/4\) to guarantee the global well-posedness, and imposes no restriction on the size of stretching of the string, which is very different from all the existing results on global well-posedness [3, 4, 5, 7]. The threshold \(\pi/4\) reflects the nature of the Stokeslet. We will see later that, if \(X_{0}(s)\) is \(O(1)\)-close to an equilibrium in the \(C^{1}(\mathbb{T})\)-seminorm, the corresponding \(\Phi_{0}\) will satisfy \(|\Phi_{0}|<\pi/4\). More generally, if we additionally let \(\psi:\mathbb{T}\to\mathbb{T}\) be an arbitrary bijective diffeomorphism that is suitably smooth, and define \(\tilde{\Phi}_{0}\) in terms of the reparameterized configuration \(\tilde{X}_{0}=X_{0}\circ\psi\) with \(X_{0}\) as above, then \(|\tilde{\Phi}_{0}|<\pi/4\) since \(\tilde{\Phi}_{0}(s,s^{\prime})=\Phi_{0}(\psi(s),\psi(s^{\prime}))\). Thus, Theorem 1.1 applies to such initial data \(X_{0}\circ\psi\), and there is no restriction on the size of \(\psi\). See the precise statements in Remark 3.1. In view of this (and also Proposition 3.2), the condition \(|\Phi_{0}|<\pi/4\) can be understood as a medium-size \(C^{1}\)-condition on the geometry of the initial string. A key ingredient in proving Theorem 1.1 is to show that, under the condition \(|\Phi_{0}(s,s^{\prime})|<\pi/4\), \(\Phi(s,s^{\prime},t)\) and \(\kappa(s,t)\) satisfy some extremum principles and decay estimates (see Proposition 3.1 and Proposition 4.1, respectively). Such extremum principles can even be generalized to the case of general elasticity; see Section 3.2 and Remark 4.2. We use the sub-critical assumption \(X_{0}\in h^{1,\alpha}(\mathbb{T})\) here for convenience. One may be able to replace it by a weaker regularity assumption on the stretching function \(|X^{\prime}(s,t)|\) and a separate geometric assumption on the curve \(X(\mathbb{T})\). However in that case, the analysis would become much more involved. In order to highlight our findings and not to distract the readers, we decide to work with the current assumptions, and leave further generalizations to future works. 
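To get a concrete feel for the size of the hypothesis \(|\Phi_{0}|<\pi/4\), the following Python sketch evaluates \(\Phi_{0}\) from (1.12) on a grid of pairs \((s_{1},s_{2})\) for a circle and for a mildly perturbed circle; both test curves and the perturbation size are our own illustrative choices, not examples taken from the paper.

```python
import numpy as np

# Evaluate Phi_0(s1, s2) = arg[ X'(s1) X'(s2) / (X(s1) - X(s2))^2 ] on a uniform grid
# and report sup |Phi_0|, to see when the hypothesis |Phi_0| < pi/4 of Theorem 1.1
# holds. The two test shapes below are illustrative choices, not curves from the paper.
def sup_phi(X, Xp, n=400):
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    x, xp = X(s), Xp(s)
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)               # avoid 0/0 on the diagonal
    phi = np.angle(xp[:, None] * xp[None, :] / diff**2)
    np.fill_diagonal(phi, 0.0)                # Phi(s, s) := 0 by definition (1.12)
    return np.abs(phi).max()

# A perfect circle: Phi_0 vanishes identically (up to rounding).
print("circle:   ", sup_phi(lambda s: np.exp(1j * s), lambda s: 1j * np.exp(1j * s)))

# A mildly perturbed circle X(s) = (1 + eps*cos(3s)) e^{is}.
eps = 0.1
X = lambda s: (1.0 + eps * np.cos(3.0 * s)) * np.exp(1j * s)
Xp = lambda s: (-3.0 * eps * np.sin(3.0 * s) + 1j * (1.0 + eps * np.cos(3.0 * s))) * np.exp(1j * s)
print("perturbed:", sup_phi(X, Xp), " vs pi/4 =", np.pi / 4)
```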
With that being said, in the special case when \(X_{0}(\mathbb{T})\) is a circle but \(X_{0}(s)\) is not necessarily an equilibrium, it is relatively straightforward to weaken the assumption to \(X_{0}\in H^{1}(\mathbb{T})\). Note that \(H^{1}(\mathbb{T})\) is the energy class for the 2-D Peskin problem in the Hookean case, which is super-critical according to scaling of the equation. For \(H^{1}\)-circular initial data satisfying some other natural assumptions, we can construct a global solution \(X(s,t)\), with \(X(\mathbb{T},t)\) being a circle of the same radius for all time. Motion of the center of the circle can be determined, and surprisingly, the tangential deformation of the string is described _exactly_ by the tangential Peskin problem [12]. **Theorem 1.2** (Short version of Theorem 3.1).: _Assume \(X_{0}\in H^{1}(\mathbb{T})\), such that \(X_{0}(\mathbb{T})\) is a circle of radius \(R_{X}\) centered at \(x_{0}\in\mathbb{C}\). Suppose for a strictly increasing continuous function \(\theta_{0}:\mathbb{R}\to\mathbb{R}\) which satisfies \(\theta_{0}(s+2\pi)=\theta_{0}(s)+2\pi\) in \(\mathbb{R}\), it holds that \(X_{0}(s)=x_{0}+R_{X}e^{i\theta_{0}(s)}\) in the notation of complex numbers. Under suitable assumptions on \(\theta_{0}(s)\), (1.9) and (1.10) admit a global solution \(X=X(s,t)\) in \(\mathbb{T}\times[0,+\infty)\), which can be constructed as follows. Let \(\theta=\theta(s,t)\) be a solution to the tangential Peskin problem [12, Corollary 2.1]_ \[\partial_{t}\theta(s,t)=-\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{R}}\frac{( \partial_{s^{\prime}}\theta(s^{\prime},t))^{2}}{\theta(s,t)-\theta(s^{\prime},t)}\,ds^{\prime},\quad\theta(s,0)=\theta_{0}(s)\] _in \(\mathbb{R}\times[0,+\infty)\). Let_ \[v(t):=-\frac{R_{X}}{8\pi}\int_{\mathbb{T}}e^{i\theta(s^{\prime},t)}\big{(} \partial_{s^{\prime}}\theta(s^{\prime},t)\big{)}^{2}\,ds^{\prime}.\] _Then_ \[X(s,t)=x(t)+R_{X}e^{i\theta(s,t)}\text{ with }x(t):=x_{0}+\int_{0}^{t}v( \tau)\,d\tau\] _gives the desired solution to (1.9) and (1.10). Moreover, \(X(s,t)\) is smooth in \(\mathbb{T}\times(0,+\infty)\). For any \(t>0\), \(X(\mathbb{T},t)\) is a circle, and \(X(\cdot,t)\) satisfies the well-stretched condition. For arbitrary \(k\in\mathbb{N}\), \(X(s,t)\) converges exponentially to a final equilibrium configuration \(X_{\infty}(s)\) in \(H^{k}(\mathbb{T})\)-norms as \(t\to+\infty\)._ ### Organization of the paper A major part of this work is devoted to proving various a priori estimates for sufficiently smooth solutions. In Section 2, we set up the problem, introduce necessary notations, and derive some basic equations. Section 3 and Section 4 constitute the most important part of the paper, where we study geometric properties of the curve \(X(\mathbb{T})\). In Section 3, we show that under the assumption \(\sup_{s,s^{\prime}\in\mathbb{T}}|\Phi_{0}(s,s^{\prime})|<\pi/4\), \(|\Phi|\) satisfies the maximum principle and a decay estimate. With the bound on \(|\Phi|\), we can quantify geometric features of the curve \(X(\mathbb{T})\). We also prove Theorem 1.2 (i.e., Theorem 3.1) on \(H^{1}\)-initial data with a circular shape. Section 4 is focused on the curvature \(\kappa(s,t)\), and proves its extremum principle and decay estimate. We extend the extremum principles on \(\Phi\) and \(\kappa\) to the case of general elasticity in Section 3.2 and Remark 4.2, respectively. In Section 5, we turn to deriving estimates for the stretching configuration \(|X^{\prime}|\). Finally, we prove Theorem 1.1 in Section 6. 
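Since the drift velocity in Theorem 1.2 is a single quadrature in \(\theta\), it is easy to evaluate numerically. The Python sketch below computes \(v(0)\) for the equilibrium parameterization \(\theta_{0}(s)=s\) (which gives \(v=0\)) and for an assumed, purely illustrative initial stretching \(\theta_{0}(s)=s+0.5\sin s\); note that for \(t>0\) one would first have to solve the tangential Peskin problem for \(\theta(s,t)\).

```python
import numpy as np

# Evaluate v(0) = -(R_X / (8*pi)) * Integral_T exp(i*theta_0(s)) * (theta_0'(s))^2 ds
# from Theorem 1.2 by a Riemann sum. theta_0(s) = s corresponds to an equilibrium
# parameterization and gives v(0) = 0; the second theta_0 is an assumed test profile.
R_X = 1.0
n = 4000
s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ds = 2.0 * np.pi / n

profiles = {
    "equilibrium theta_0(s) = s          ": (s, np.ones_like(s)),
    "stretched   theta_0(s) = s+0.5 sin s": (s + 0.5 * np.sin(s), 1.0 + 0.5 * np.cos(s)),
}
for label, (theta, dtheta) in profiles.items():
    v0 = -(R_X / (8.0 * np.pi)) * np.sum(np.exp(1j * theta) * dtheta**2) * ds
    print(label, "->  v(0) =", np.round(v0, 8))
```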
**Acknowledgement**.: Both authors are supported by the National Key R&D Program of China under the grant 2021YFA1001500. ## 2. Preliminaries ### Setup and the notations Throughout this paper, except for the proof of the main results in Section 6, we will always assume that for some \(T>0\), \(X(s,t)\) solves (1.9) in \(\mathbb{T}\times[0,T]\), and satisfies 1. For each \(t\in[0,T]\), \(s\mapsto X(s,t)\) is injective, and the image \(X(\mathbb{T},t)\) is a closed \(C^{2}\)-curve in \(\mathbb{C}\); 2. \(X^{\prime}(s,t),X^{\prime\prime}(s,t),X^{\prime\prime\prime}(s,t)\in C^{1}( \mathbb{T}\times[0,T])\); 3. \(|X^{\prime}(s,t)|>0\) in \(\mathbb{T}\times[0,T]\). Besides, we assume the elastic string is parameterized by \(X=X(s,t)\) in the counter-closewise direction (i.e., its winding number with respect to an arbitrary point in the interior of \(X(\mathbb{T})\) is \(1\)). Given these assumptions, it is not difficult to show that, for each \(t\in[0,T]\), \[\inf_{s\neq s^{\prime}}\frac{|X(s,t)-X(s^{\prime},t)|}{|s-s^{\prime}|}>0. \tag{2.1}\] We first prove a lemma that will be used repeatedly in the calculation below. **Lemma 2.1**.: _Fix \(t\in[0,T]\), which will be omitted below. For any \(s\in\mathbb{T}\),_ \[\text{\rm p.v.}\int_{\mathbb{T}}\frac{X^{\prime}(s^{\prime})}{X(s^{\prime})-X (s)}\,ds^{\prime}=\pi i,\] _and_ \[\int_{\mathbb{T}}\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})X^{\prime }(s)}{(X(s^{\prime})-X(s))^{2}}\right]ds^{\prime}=0.\] Proof.: We calculate the first integral that \[\text{\rm p.v.}\int_{\mathbb{T}}\frac{X^{\prime}(s^{\prime})}{X(s^{\prime})-X (s)}\,ds^{\prime}=\lim_{\varepsilon\to 0^{+}}\ln(X(s+2\pi-\varepsilon)-X(s))-\ln(X(s+ \varepsilon)-X(s)).\] Its imaginary part is \(\pi i\), as we assumed that \(X\in C^{1}(\mathbb{T})\) and the curve \(X(\mathbb{T})\) is parameterized in the counter-clockwise direction. It suffices to show the real part is \(0\). In fact, \[\lim_{\varepsilon\to 0^{+}}\ln\frac{|X(s-\varepsilon)-X(s)|}{|X(s+ \varepsilon)-X(s)|}=\lim_{\varepsilon\to 0^{+}}\ln\frac{|X(s-\varepsilon)-X(s)|/ \varepsilon}{|X(s+\varepsilon)-X(s)|/\varepsilon}=\ln\frac{|X^{\prime}(s)|}{ |X^{\prime}(s)|}=0.\] This proves the first claim. For the second integral, we first observe that \[\begin{split}\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})X ^{\prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right]&=\operatorname{ Im}\left[\frac{X^{\prime}(s^{\prime})-\frac{X(s^{\prime})-X(s)}{s^{\prime}-s}}{X(s^{ \prime})-X(s)}\cdot\frac{X^{\prime}(s)-\frac{X(s^{\prime})-X(s)}{s^{\prime}-s} }{X(s^{\prime})-X(s)}\right]\\ &\quad-2\operatorname{Im}\left[\frac{\frac{X(s^{\prime})-X(s)}{s ^{\prime}-s}-X^{\prime}(s)-\frac{1}{2}(s^{\prime}-s)X^{\prime\prime}(s)}{(s^{ \prime}-s)(X(s^{\prime})-X(s))}\right]\\ &\quad+\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})-X^{ \prime}(s)-(s^{\prime}-s)X^{\prime\prime}(s)}{(s^{\prime}-s)(X(s^{\prime})-X(s ))}\right].\end{split} \tag{2.2}\] One can show that this is bounded by using (2.1), the assumption \(X\in C^{3}(\mathbb{T})\), and the Taylor expansion. 
Then we calculate the second integral that \[\int_{\mathbb{T}}\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})X^{ \prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right]ds^{\prime}\] \[= \lim_{\varepsilon\to 0^{+}}\mathrm{Im}\left[X^{\prime}(s)\int_{ \mathbb{T}\setminus[-\varepsilon,\varepsilon]}\partial_{s^{\prime}}\left(- \frac{1}{X(s+s^{\prime})-X(s)}\right)ds^{\prime}\right]\] \[= \lim_{\varepsilon\to 0^{+}}\mathrm{Im}\left[\frac{X^{\prime}(s)}{X(s+ \varepsilon)-X(s)}-\frac{X^{\prime}(s)}{X(s-\varepsilon)-X(s)}\right]\] \[= -\lim_{\varepsilon\to 0^{+}}\frac{\mathrm{Im}[\overline{X^{ \prime}(s)}(X(s+\varepsilon)-X(s))]}{|X(s+\varepsilon)-X(s)|^{2}}+\lim_{ \varepsilon\to 0^{+}}\frac{\mathrm{Im}[\overline{X^{\prime}(s)}(X(s- \varepsilon)-X(s))]}{|X(s-\varepsilon)-X(s)|^{2}}.\] We find \[\lim_{\varepsilon\to 0^{+}}\frac{\mathrm{Im}[\overline{X^{ \prime}(s)}(X(s+\varepsilon)-X(s))]}{|X(s+\varepsilon)-X(s)|^{2}} = \lim_{\varepsilon\to 0^{+}}\frac{\varepsilon^{-2}\mathrm{Im}[ \overline{X^{\prime}(s)}(X(s+\varepsilon)-X(s)-\varepsilon X^{\prime}(s))]}{ \varepsilon^{-2}|X(s+\varepsilon)-X(s)|^{2}}\] \[= \frac{\mathrm{Im}(\overline{X^{\prime}(s)}X^{\prime\prime}(s))}{ 2|X^{\prime}(s)|^{2}},\] and similarly, \[\lim_{\varepsilon\to 0^{+}}\frac{\mathrm{Im}[\overline{X^{\prime}(s)}(X(s- \varepsilon)-X(s))]}{|X(s-\varepsilon)-X(s)|^{2}}=\frac{\mathrm{Im}(\overline {X^{\prime}(s)}X^{\prime\prime}(s))}{2|X^{\prime}(s)|^{2}}.\] Then the second claim follows. Using Lemma 2.1, we differentiate (1.9) to obtain \[\partial_{t}X^{\prime}(s)\] \[= \frac{1}{4\pi}\partial_{s}\left(\int_{\mathbb{T}}\frac{X^{\prime }(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}(s))}{X(s^{\prime})-X(s)}\,ds^{ \prime}+\pi iX^{\prime}(s)\right) \tag{2.3}\] \[-\frac{i}{4\pi}\int_{\mathbb{T}}\partial_{s}\left[\mathrm{Im} \left(\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2}}\right)\left( X(s^{\prime})-X(s)\right)\right]ds^{\prime}\] \[= X^{\prime}(s)\cdot\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}} \frac{X^{\prime}(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}(s))}{(X(s^{ \prime})-X(s))^{2}}\,ds^{\prime}\] \[-\frac{i}{4\pi}\int_{\mathbb{T}}\left\{\mathrm{Im}\left[\frac{2X ^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\left( X(s^{\prime})-X(s)\right)-\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{ \prime})-X(s))^{2}}\right]X^{\prime}(s)\right\}ds^{\prime}.\] No principal value integral is needed in the last line because the integrand is bounded given the assumptions on \(X\). This can be justified as in (2.2). We introduce a few more notations that will be used throughout the paper. For \(s\in\mathbb{T}\), let \[\alpha(s):=\arg X^{\prime}(s)\in\mathbb{T}.\] Here \(\alpha(s)\) and the angles defined below should always be understood in the modulo \(2\pi\). For distinct \(s_{1},s_{2}\in\mathbb{T}\), let \[I(s_{1},s_{2}):=\frac{X^{\prime}(s_{1})}{X(s_{1})-X(s_{2})},\] and \[J(s_{1},s_{2}):=\frac{X^{\prime}(s_{1})X^{\prime}(s_{2})}{(X(s_{1})-X(s_{2}))^{2}}= \frac{\partial I}{\partial s_{2}}(s_{1},s_{2}).\] As a result, for distinct \(s_{1},s_{2}\in\mathbb{T}\) (cf. (1.12)), \[\Phi(s_{1},s_{2})=\arg J(s_{1},s_{2}).\] Since we additionally defined \(\Phi(s,s)=0\) for all \(s\in\mathbb{T}\), thanks to the regularity of \(X(s,t)\), \(\Phi(s_{1},s_{2},t)\) is (at least) a \(C^{1}\)-function in \(\mathbb{T}\times\mathbb{T}\times[0,T]\). 
Obviously, \[J(s_{1},s_{2})=J(s_{2},s_{1}),\quad\Phi(s_{1},s_{2})=\Phi(s_{2},s_{1}).\] Besides, define for distinct \(s_{0},s_{1},s_{2}\in\mathbb{T}\), \[\tilde{J}(s_{0},s_{1},s_{2})=\frac{X^{\prime}(s_{0})(X(s_{1})-X(s_{2}))}{(X(s_ {0})-X(s_{1}))(X(s_{0})-X(s_{2}))}=I(s_{0},s_{1})-I(s_{0},s_{2}).\] The total length of the elastic string \(X(s,t)\) is given by \[\mathcal{L}(t):=\int_{\mathbb{T}}|X^{\prime}(s,t)|\,ds. \tag{2.4}\] By the isoperimetric inequality, \(\mathcal{L}(t)\geq 2\pi R_{X}\), where \(R_{X}\) was defined in (1.11). The elastic energy of the string, which is also the total energy of the system (1.1)-(1.3) in the case \(F_{X}(s,t)=X^{\prime\prime}(s,t)\), is given by \[\mathcal{E}(t):=\frac{1}{2}\int_{\mathbb{T}}|X^{\prime}(s,t)|^{2}\,ds. \tag{2.5}\] Lastly, let \[d_{*}:=\sup_{s,s^{\prime}\in\mathbb{T}}|X(s)-X(s^{\prime})|. \tag{2.6}\] It is clear that \(2d_{*}\leq\mathcal{L}(t)\). ### The equation for \(|X^{\prime}|\) Let us derive the equation for \(|X^{\prime}|\), which characterizes local stretching of the string. Since \[|X^{\prime}|\cdot\partial_{t}|X^{\prime}|=\frac{1}{2}\partial_{t}|X^{\prime}| ^{2}=\operatorname{Re}\left[\partial_{t}X^{\prime}(s)\overline{X^{\prime}(s) }\right],\] we derive from (2.3) that \[\begin{split}&\partial_{t}|X^{\prime}(s)|\\ =&|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{ \mathbb{T}}\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})(X^{\prime}(s^{ \prime})-X^{\prime}(s))}{(X(s^{\prime})-X(s))^{2}}\right]ds^{\prime}\\ &+|X^{\prime}(s)|\cdot\frac{1}{4\pi}\int_{\mathbb{T}}\operatorname {Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s) )^{3}}\right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)} \right]ds^{\prime}.\end{split} \tag{2.7}\] With the notations introduced above, \[\partial_{t}|X^{\prime}(s)|\] \[=|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}} \operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{ \prime})-X(s))^{3}}\cdot\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}-J(s^{\prime},s )\right]ds^{\prime}\] \[\quad+|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{ T}}\operatorname{Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{ \prime})-X(s))^{3}}\right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^ {\prime}(s)}\right]ds^{\prime}\] \[=|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}} \left\{\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{( X(s^{\prime})-X(s))^{3}}\right]\operatorname{Re}\left[\frac{X(s^{\prime})-X(s)}{X^ {\prime}(s)}\right]-\operatorname{Re}J(s^{\prime},s)\right\}ds^{\prime}\] \[\quad+|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{ T}}\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{ \prime})-X(s))^{3}}\right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^ {\prime}(s)}\right]ds^{\prime}\] \[=|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}} \left\{\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{( X(s^{\prime})-X(s))^{3}}\cdot\frac{\overline{X(s^{\prime})-X(s)}}{\overline{X^{ \prime}(s)}}\right]-\operatorname{Re}J(s^{\prime},s)\right\}ds^{\prime}\] \[=|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}} \operatorname{Re}\left[J(s^{\prime},s)^{2}\cdot\frac{|X(s^{\prime})-X(s)|^{2} }{|X^{\prime}(s)|^{2}}-J(s^{\prime},s)\right]ds^{\prime}.\] Therefore, \[\partial_{t}|X^{\prime}(s)|\] \[=|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}} 
\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|^{2}}\left[|X^{\prime}(s^ {\prime})|\cos 2\Phi(s^{\prime},s)-|X^{\prime}(s)|\cos\Phi(s^{\prime},s) \right]ds^{\prime}. \tag{2.8}\] ### The equation for \(\alpha(s)\) Although \(\alpha\) is a \(\mathbb{T}\)-valued function, its time derivative is well-defined as a real-valued function \[\partial_{t}\alpha=\operatorname{Im}\frac{\partial_{t}X^{\prime}(s)}{X^{ \prime}(s)}.\] Using Lemma 2.1, we derive from (2.3) that \[\operatorname{Im}\frac{\partial_{t}X^{\prime}(s)}{X^{\prime}(s)}\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Im}\left[ \frac{X^{\prime}(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}(s))}{(X(s^{ \prime})-X(s))^{2}}\right]ds^{\prime} \tag{2.9}\] \[\quad-\frac{1}{4\pi}\int_{\mathbb{T}}\left\{\operatorname{Im} \left[\frac{2X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3} }\right]\operatorname{Re}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)} \right]-\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime}) -X(s))^{2}}\right]\right\}ds^{\prime}\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\left\{\operatorname{ Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2}}\right]- \operatorname{Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{ \prime})-X(s))^{3}}\right]\operatorname{Re}\left[\frac{X(s^{\prime})-X(s)}{X^ {\prime}(s)}\right]\right\}ds^{\prime}\] \[=\frac{1}{2\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right] \operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]ds^{ \prime}.\] In order to take its derivative later, let us give an alternative form of this equation. Again using Lemma 2.1, we derive from (2.9) that \[\begin{split}&\operatorname{Im}\frac{\partial_{t}X^{\prime}(s)}{X^{ \prime}(s)}\\ &=\frac{1}{2\pi}\int_{\mathbb{T}}\left\{\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}} \right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right] -\frac{1}{2}\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})X^{\prime\prime }(s)}{X^{\prime}(s)(X(s^{\prime})-X(s))}\right]\right\}ds^{\prime}\\ &\quad+\operatorname{Im}\left[\frac{i}{4}\frac{X^{\prime\prime}(s )}{X^{\prime}(s)}\right].\end{split} \tag{2.10}\] ## 3. The Angle \(\Phi(s_{1},s_{2})\) In this section, we first prove in Proposition 3.1 that \(\sup_{s_{1},s_{2}\in\mathbb{T}}|\Phi(s_{1},s_{2},t)|\) satisfies a maximum principle and a decay estimate if it is initially less than \(\pi/4\). This holds even when the elasticity law takes a more general form. Then we study the geometric properties of the curve \(X(\mathbb{T})\) when there is a bound for \(|\Phi|\) (see Proposition 3.2). Lastly, we state and prove Theorem 3.1 on the global solution starting from \(H^{1}\)-initial data with a circular shape. ### Maximum principle and decay of \(\Phi\) We start from deriving the equation for \(\Phi(s_{1},s_{2})\). 
**Lemma 3.1**.: _For distinct \(s^{\prime},s_{1},s_{2}\in\mathbb{T}\), denote_ \[\theta=\theta(s^{\prime},s_{1},s_{2}):=\Phi(s^{\prime},s_{1})+\Phi(s^{\prime}, s_{2})-\Phi(s_{1},s_{2}).\] _Then given distinct \(s_{1},s_{2}\in\mathbb{T}\), in the sense of modulo \(2\pi\), \(\Phi(s_{1},s_{2})\) satisfies_ \[\begin{split}&\partial_{t}\Phi(s_{1},s_{2})\\ &=\frac{1}{4\pi}\text{\rm p.v.}\int_{\mathbb{T}}\left\{\frac{|X^{ \prime}(s^{\prime})|^{2}(\sin\theta-\sin 2\Phi(s^{\prime},s_{1}))}{|X(s^{\prime})-X(s_{1})|^ {2}}+\frac{|X^{\prime}(s^{\prime})|^{2}(\sin\theta-\sin 2\Phi(s^{\prime},s_{2}))}{|X(s^{ \prime})-X(s_{2})|^{2}}\right\}ds^{\prime}.\end{split} \tag{3.1}\] Proof.: By (1.12), \[\partial_{t}\Phi(s_{1},s_{2})=\operatorname{Im}\frac{\partial_{t}X^{\prime}(s _{1})}{X^{\prime}(s_{1})}+\operatorname{Im}\frac{\partial_{t}X^{\prime}(s_{2} )}{X^{\prime}(s_{2})}-2\operatorname{Im}\frac{\partial_{t}(X(s_{1})-X(s_{2}) )}{X(s_{1})-X(s_{2})}. \tag{3.2}\] From the first line of (1.9), we derive that \[\begin{split}\operatorname{Im}\frac{\partial_{t}(X(s_{1})-X(s_{2 }))}{X(s_{1})-X(s_{2})}&=\frac{1}{4\pi}\text{\rm p.v.}\int_{ \mathbb{T}}\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{ \prime})-X(s_{1}))^{2}}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{1})}{ X(s_{1})-X(s_{2})}\,ds^{\prime}\\ &\quad-\frac{1}{4\pi}\text{\rm p.v.}\int_{\mathbb{T}}\operatorname {Re}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s_{2}))^{2}} \right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{2})}{X(s_{1})-X(s_{2})}\,ds^ {\prime}.\end{split}\] Note that \[\operatorname{Im}\frac{X(s^{\prime})-X(s_{1})}{X(s_{1})-X(s_{2})}=\operatorname {Im}\frac{X(s^{\prime})-X(s_{2})}{X(s_{1})-X(s_{2})},\] so \[\operatorname{Im}\frac{\partial_{t}(X(s_{1})-X(s_{2}))}{X(s_{1})-X (s_{2})}\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s_{1}))^{2}}-\frac{X^{\prime }(s^{\prime})^{2}}{(X(s^{\prime})-X(s_{2}))^{2}}\right]\operatorname{Im}\frac{X (s^{\prime})-X(s_{1})}{X(s_{1})-X(s_{2})}\,ds^{\prime}\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}(X(s_{1})-X(s_{2}))}{(X(s^{\prime})-X(s_{1}))^ {2}(X(s^{\prime})-X(s_{2}))}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{ 2})}{X(s_{1})-X(s_{2})}\,ds^{\prime}\] \[\quad+\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re} \left[\frac{X^{\prime}(s^{\prime})^{2}(X(s_{1})-X(s_{2}))}{(X(s^{\prime})-X(s_{ 2}))^{2}(X(s^{\prime})-X(s_{1}))}\right]\operatorname{Im}\frac{X(s^{\prime})-X (s_{1})}{X(s_{1})-X(s_{2})}\,ds^{\prime}.\] Thanks to (2.9), we also have that \[\operatorname{Im}\frac{\partial_{t}X^{\prime}(s_{1})}{X^{\prime}( s_{1})} =\frac{1}{2\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s_{1})}{(X(s^{\prime})-X(s_{1}))^{ 3}}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{1})}{X^{\prime}(s_{1})}\,ds ^{\prime},\] \[\operatorname{Im}\frac{\partial_{t}X^{\prime}(s_{2})}{X^{\prime} (s_{2})} =\frac{1}{2\pi}\text{p.v.}\int_{\mathbb{T}}\operatorname{Re}\left[ \frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s_{2})}{(X(s^{\prime})-X(s_{2}))^{ 3}}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{2})}{X^{\prime}(s_{2})}\, ds^{\prime}.\] Hence, \[\partial_{t}\Phi(s_{1},s_{2})=\frac{1}{2\pi}\text{p.v.}\int_{\mathbb{T}}(K_{1} +K_{2})\,ds^{\prime}\] where \[K_{1} :=\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{ \prime}(s_{1})}{(X(s^{\prime})-X(s_{1}))^{3}}\right]\operatorname{Im}\frac{X( 
s^{\prime})-X(s_{1})}{X^{\prime}(s_{1})}\] \[\quad-\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}(X(s_{1})-X(s_{2}))}{(X(s^{\prime})-X(s_{1}))^{2}(X(s^{\prime})-X(s_{2}))}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{2})}{X(s_{1})-X(s_{2})},\] \[K_{2}:=\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s_{2})}{(X(s^{\prime})-X(s_{2}))^{3}}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{2})}{X^{\prime}(s_{2})}\] \[\quad-\operatorname{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}(X(s_{1})-X(s_{2}))}{(X(s^{\prime})-X(s_{2}))^{2}(X(s^{\prime})-X(s_{1}))}\right]\operatorname{Im}\frac{X(s^{\prime})-X(s_{1})}{X(s_{1})-X(s_{2})}.\] For arbitrary complex numbers \(A,B,C\) with \(B,C\neq 0\), \[\operatorname{Re}\left[A/B\right]\operatorname{Im}B-\operatorname{Re}\left[A/C\right]\operatorname{Im}C\] \[=\frac{1}{4i}\left[(A/B+\overline{A}/\overline{B})(B-\overline{B})-(A/C+\overline{A}/\overline{C})(C-\overline{C})\right]\] \[=\frac{1}{4i}\left[\overline{A}(B/\overline{B}-C/\overline{C})-A(\overline{B}/B-\overline{C}/C)\right]\] \[=\frac{1}{2}\operatorname{Im}\left[A\overline{C}/C-A\overline{B}/B\right]\] \[=\frac{1}{2}\operatorname{Im}\left[A/C^{2}\right]|C|^{2}-\frac{1}{2}\operatorname{Im}\left[A/B^{2}\right]|B|^{2}.\] Hence, we find that \[K_{1}=\frac{1}{2}\operatorname{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}(X(s_{1})-X(s_{2}))^{2}}{(X(s^{\prime})-X(s_{1}))^{2}(X(s^{\prime})-X(s_{2}))^{2}}\right]\frac{|X(s^{\prime})-X(s_{2})|^{2}}{|X(s_{1})-X(s_{2})|^{2}}\] \[-\frac{1}{2}\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s_{1})^{2}}{(X(s^{\prime})-X(s_{1}))^{4}}\right]\frac{|X(s^{\prime})-X(s_{1})|^{2}}{|X^{\prime}(s_{1})|^{2}}.\] The two brackets above are exactly \(\tilde{J}(s^{\prime},s_{1},s_{2})^{2}\) and \(J(s^{\prime},s_{1})^{2}\), so taking their moduli and arguments (cf. the proof of Lemma 3.2 below) gives \[K_{1}=\frac{1}{2}\left(\sin\left[\Phi(s^{\prime},s_{1})+\Phi(s^{\prime},s_{2})-\Phi(s_{1},s_{2})\right]-\sin 2\Phi(s^{\prime},s_{1})\right)\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s_{1})|^{2}},\] and similarly, \[K_{2}=\frac{1}{2}\left(\sin\left[\Phi(s^{\prime},s_{1})+\Phi(s^{\prime},s_{2})-\Phi(s_{1},s_{2})\right]-\sin 2\Phi(s^{\prime},s_{2})\right)\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s_{2})|^{2}}.\] Combining all these calculations, we conclude with (3.1). **Lemma 3.2**.: _Suppose that \(\Phi(s_{1},s_{2})\in[\Phi_{-},\Phi_{+}]\subset(-\frac{\pi}{3},\frac{\pi}{3})\) for all \(s_{1},s_{2}\in\mathbb{T}\). Then for all distinct \(s_{0},s_{1},s_{2}\in\mathbb{T}\),_ \[\theta(s_{0},s_{1},s_{2})=\Phi(s_{0},s_{1})+\Phi(s_{0},s_{2})-\Phi(s_{1},s_{2})\in[2\Phi_{-},2\Phi_{+}].\] Proof.: Without loss of generality, we can assume \(s_{0}<s_{2}<s_{1}<s_{0}+2\pi\). Since \(\frac{\partial I}{\partial s}(s_{0},s)=J(s_{0},s)\), \[\tilde{J}(s_{0},s_{1},s_{2})=I(s_{0},s_{1})-I(s_{0},s_{2})=\int_{s_{2}}^{s_{1}}J(s_{0},s)\,ds.\] Hence, \(\arg\tilde{J}(s_{0},s_{1},s_{2})\in[\Phi_{-},\Phi_{+}]\). We also note that \[\arg\tilde{J}(s_{0},s_{1},s_{2})^{2}=\arg\frac{X^{\prime}(s_{0})^{2}(X(s_{1})-X(s_{2}))^{2}}{(X(s_{0})-X(s_{1}))^{2}(X(s_{0})-X(s_{2}))^{2}}=\Phi(s_{0},s_{1})+\Phi(s_{0},s_{2})-\Phi(s_{1},s_{2}),\] which proves the claim. **Lemma 3.3**.: _Suppose that \(\Phi(s_{1},s_{2})\in[\Phi_{-},\Phi_{+}]\subset(-\frac{\pi}{3},\frac{\pi}{3})\) for all \(s_{1},s_{2}\in\mathbb{T}\). If \(\Phi_{+}\leq 0\) or \(\Phi_{-}\geq 0\), then \(\Phi(s_{1},s_{2})\equiv 0\) for all \(s_{1},s_{2}\in\mathbb{T}\), and thus \(X(\mathbb{T})\) is a circle._ Proof.: Suppose \(\Phi_{+}\leq 0\). Then for all distinct \(s^{\prime},s\in\mathbb{T}\), \[\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})X^{\prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right]=\frac{|X^{\prime}(s^{\prime})||X^{\prime}(s)|}{|X(s^{\prime})-X(s)|^{2}}\sin\Phi(s^{\prime},s)\leq 0,\] and the equality holds if and only if \(\Phi(s^{\prime},s)=0\). 
Hence, by Lemma 2.1, \(\Phi\equiv 0\). Now fix \(s\in\mathbb{T}\). Assume \(X(s)=0\) and \(X^{\prime}(s)\) is a positive real number without loss of generality; one can always achieve this by suitable rotation and translation. Then \(\Phi\equiv 0\) implies that, for all \(s^{\prime}\neq s\), \[\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})X^{\prime}(s)}{(X(s^{\prime})-X( s))^{2}}\right]\equiv 0,\] and thus \(\frac{X^{\prime}(s^{\prime})}{X(s^{\prime})^{2}}\in\mathbb{R}\). Hence, there exists some constant \(C_{*}\), such that \[\mathrm{Im}\left[\frac{1}{X(s^{\prime})}\right]=C_{*}\quad\forall\,s^{\prime }\in\mathbb{T}\setminus\{s\}.\] If \(C_{*}=0\), \(X(\mathbb{T})\subset\mathbb{R}\). This is not possible because \(X(s)\) is injective. If \(C_{*}\neq 0\), \(X(\mathbb{T})\) is contained in a circle that goes through the origin. Since \(X(s)\) is injective, \(X(\mathbb{T})\) must be the whole circle. The case \(\Phi_{-}\geq 0\) can be handled similarly. **Lemma 3.4**.: _Suppose \(\Phi(s,s^{\prime})\in[\Phi_{-},\Phi_{+}]\subset[-\frac{\pi}{4},\frac{\pi}{4}]\) for all \(s,s^{\prime}\in\mathbb{T}\). Let \(\Phi_{*}:=\max\{\Phi_{+},|\Phi_{-}|\}\leq\frac{\pi}{4}\). For distinct \(s^{\prime},s_{1},s_{2}\in\mathbb{T}\), denote_ \[\Phi_{1}:=\Phi(s^{\prime},s_{1}),\quad\Phi_{2}:=\Phi(s^{\prime},s_{2}),\quad \theta:=\Phi_{1}+\Phi_{2}-\Phi(s_{1},s_{2}).\] _Then \(\theta,2\Phi_{1},2\Phi_{2}\in[2\Phi_{-},2\Phi_{+}]\), and the following holds._ 1. _If_ \(\Phi(s_{1},s_{2})=\Phi_{+}=\Phi_{*}\)_, then_ \(|\Phi_{1}-\Phi_{2}|\leq\Phi_{+}\)_. Moreover,_ \(\sin\theta-\sin 2\Phi_{j}\leq 0\)__\((j=1,2)\)_, and_ \[(\sin\theta-\sin 2\Phi_{1})+(\sin\theta-\sin 2\Phi_{2})\leq 2\sin\Phi_{*}-2\sin 2 \Phi_{*}.\] 2. _If_ \(\Phi(s_{1},s_{2})=\Phi_{-}=-\Phi_{*}\)_, then_ \(|\Phi_{1}-\Phi_{2}|\leq|\Phi_{-}|\)_. Moreover,_ \(\sin\theta-\sin 2\Phi_{j}\geq 0\)__\((j=1,2)\)_, and_ \[(\sin\theta-\sin 2\Phi_{1})+(\sin\theta-\sin 2\Phi_{2})\geq 2\sin 2\Phi_{*}-2\sin \Phi_{*}.\] Proof.: That \(\theta,2\Phi_{1},2\Phi_{2}\in[2\Phi_{-},2\Phi_{+}]\) follows from Lemma 3.2. Suppose \(\Phi(s_{1},s_{2})=\Phi_{+}\). If \(\Phi_{+}=0\), there is nothing to prove, as \(\Phi\equiv 0\) by virtue of Lemma 3.3. If \(\Phi(s_{1},s_{2})=\Phi_{+}>0\), \(s_{1}\) and \(s_{2}\) must be distinct. By Lemma 3.2, for any \(s^{\prime}\neq s_{1},s_{2}\), \[\Phi(s_{1},s_{2})+\Phi(s^{\prime},s_{1})-\Phi(s^{\prime},s_{2})\leq 2\Phi_{+}.\] Since \(\Phi(s_{1},s_{2})=\Phi_{+}\), we obtain \(\Phi_{1}-\Phi_{2}\leq\Phi_{+}\). Interchanging \(s_{1}\) and \(s_{2}\) yields \(\Phi_{1}-\Phi_{2}\geq-\Phi_{+}\). Hence, \(|\Phi_{1}-\Phi_{2}|\leq\Phi_{+}\). This further implies \(\theta\leq 2\Phi_{j}\) for \(j=1,2\) by the definition of \(\theta\). Given \(\Phi_{*}\leq\frac{\pi}{4}\), \(\theta,2\Phi_{j}\in[-\frac{\pi}{2},\frac{\pi}{2}]\). This together with \(\theta\leq 2\Phi_{j}\) implies \(\sin\theta-\sin 2\Phi_{j}\leq 0\). Now we additionally assume \(\Phi_{+}=\Phi_{*}\). 
We derive that \[(\sin\theta-\sin 2\Phi_{1})+(\sin\theta-\sin 2\Phi_{2})=2\sin\theta-2\sin(\Phi_{ 1}+\Phi_{2})\cos(\Phi_{1}-\Phi_{2}).\] If \(\Phi_{1}+\Phi_{2}\leq 0\), we have \[\theta+\Phi_{+}/2 =\Phi_{1}+\Phi_{2}-\Phi_{+}/2\leq\Phi_{1}+\Phi_{2}\leq 0,\] \[\theta+\Phi_{+}/2 \geq 2\Phi_{-}+\Phi_{+}/2\geq-2\Phi_{*}+\Phi_{*}/2=-3\Phi_{*}/2>- \pi/2.\] Hence, \[\begin{split}&\quad(\sin\theta-\sin 2\Phi_{1})+(\sin\theta- \sin 2\Phi_{2})\\ &\leq 2\sin\theta-2\sin(\Phi_{1}+\Phi_{2})=2\sin\theta-2\sin(\theta+ \Phi_{+})\\ &=\,-4\cos(\theta+\Phi_{+}/2)\sin(\Phi_{+}/2)\leq-4\cos(-3\Phi_{ *}/2)\sin(\Phi_{*}/2)\\ &=2\sin\Phi_{*}-2\sin 2\Phi_{*}.\end{split} \tag{3.3}\] Next consider \(\Phi_{1}+\Phi_{2}\geq 0\). We find \[0\leq|\Phi_{1}-\Phi_{2}|=2\max(\Phi_{1},\Phi_{2})-(\Phi_{1}+\Phi_{2})\leq 2 \Phi_{+}-\Phi_{1}-\Phi_{2}\leq 2\Phi_{+}\leq\pi/2.\] This implies \[\begin{split}&\quad(\sin\theta-\sin 2\Phi_{1})+(\sin\theta-\sin 2 \Phi_{2})\\ &\leq 2\sin\theta-2\sin(\Phi_{1}+\Phi_{2})\cos(2\Phi_{+}-\Phi_{1}- \Phi_{2})\\ &=2\sin\theta-\sin(2\Phi_{1}+2\Phi_{2}-2\Phi_{+})-\sin(2\Phi_{+} )\\ &=2\sin\theta-\sin 2\theta-\sin 2\Phi_{*}.\end{split}\] Since \(f(\theta):=2\sin\theta-\sin 2\theta\) is increasing on \([-\pi/2,\pi/2]\), and \(-\pi/2\leq 2\Phi_{-}\leq\theta=\Phi_{1}+\Phi_{2}-\Phi_{+}\leq\Phi_{+}=\Phi_{*}<\pi/2\), we find \(f(\theta)\leq f(\Phi_{*})\). Hence, \[(\sin\theta-\sin 2\Phi_{1})+(\sin\theta-\sin 2\Phi_{2})\] \[\leq 2\sin\Phi_{*}-\sin 2\Phi_{*}-\sin 2\Phi_{*}=2\sin\Phi_{*}-2 \sin 2\Phi_{*},\] Combining this with (3.3), we obtain (i). The case \(\Phi(s_{1},s_{2})=\Phi_{-}\) can be studied similarly, which is omitted. Now we are ready to show that \(|\Phi(s_{1},s_{2},t)|\) satisfies a maximum principle and a decay estimate if \(\sup_{s_{1},s_{2}}|\Phi(s_{1},s_{2},0)|<\frac{\pi}{4}\). **Proposition 3.1**.: _Define \(\Phi_{*}(t):=\sup_{s_{1},s_{2}\in\mathbb{T}}|\Phi(s_{1},s_{2},t)|\). If \(\Phi_{*}(0)<\frac{\pi}{4}\), then \(\Phi_{*}(t)\) is a non-increasing Lipschitz function. It satisfies_ \[0\leq\Phi_{*}(t)\leq\Phi_{*}(0)\min\big{\{}e^{-\mu t},\,Ce^{-t/\pi^{2}}\big{\}},\] _where \(\mu:=(4-2\sqrt{2})/\pi^{3}\) and where \(C\) is a universal constant._ _In particular, if \(\Phi(s_{1},s_{2},0)\equiv 0\), then \(\Phi(s_{1},s_{2},t)\equiv 0\) for all \(t\). In other words, if \(X_{0}(\mathbb{T})\) is a circle, then \(X(\mathbb{T},t)\) must be a circle of the same radius._ Proof.: Take an arbitrary \(t\). We assume that for some distinct \(s_{1},s_{2}\in\mathbb{T}\), \(|\Phi(s_{1},s_{2},t)|=\Phi_{*}(t)\). Without loss of generality, assume \(\Phi(s_{1},s_{2},t)\geq 0\), so \(\Phi_{*}=\Phi_{+}\geq|\Phi_{-}|\). We shall derive an upper bound for \(\partial_{t}\Phi(s_{1},s_{2})\). Recall that \(d_{*}\) was defined in (2.6). By Lemma 3.1 and Lemma 3.4(i), \[\partial_{t}\Phi(s_{1},s_{2})\] \[\leq\frac{1}{4\pi}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime})| ^{2}}{d_{*}^{2}}\big{[}(\sin\theta-\sin 2\Phi(s^{\prime},s_{1}))+(\sin\theta- \sin 2\Phi(s^{\prime},s_{2}))\big{]}\,ds^{\prime}\] \[\leq\,-\frac{\sin 2\Phi_{*}-\sin\Phi_{*}}{2\pi d_{*}^{2}}\int_{ \mathbb{T}}|X^{\prime}(s^{\prime})|^{2}\,ds^{\prime}.\] The Cauchy-Schwarz inequality implies that \[(2d_{*})^{2}\leq\mathcal{L}(t)^{2}\leq\int_{\mathbb{T}}|X^{\prime}(s^{\prime}) |^{2}\,ds^{\prime}\int_{\mathbb{T}}1\,ds^{\prime},\] so we obtain \[\partial_{t}\Phi(s_{1},s_{2})\leq-\frac{1}{\pi^{2}}\big{(}\sin 2\Phi_{*}-\sin \Phi_{*}\big{)}.\] Similar analysis can be carried out if \(\Phi(s_{1},s_{2},t)=-\Phi_{*}\leq 0\). 
Given the assumptions (A1)-(A3) on \(X\), \(\Phi(s_{1},s_{2},t)\) is \(C^{1}\) in \(\mathbb{T}\times\mathbb{T}\times[0,T]\), so \(\Phi_{*}(t)\) is a Lipschitz function. By combining the above estimates and following the argument in e.g. [25], we find that, if \(\Phi_{*}(t)<\frac{\pi}{4}\), then \(\Phi_{*}(t)\geq 0\) satisfies for almost all \(t\) that \[\Phi_{*}^{\prime}(t)\leq-\frac{1}{\pi^{2}}\big{(}\sin 2\Phi_{*}(t)-\sin\Phi_{*}(t)\big{)}\leq-\frac{4-2\sqrt{2}}{\pi^{3}}\Phi_{*}(t).\] Since \(\Phi_{*}(0)<\frac{\pi}{4}\), this implies \(\Phi_{*}(t)<\frac{\pi}{4}\) for all \(t\in[0,T]\), and \(\Phi_{*}(t)\leq\Phi_{*}(0)e^{-\mu t}\) with \(\mu=(4-2\sqrt{2})/\pi^{3}\). On the other hand, the above differential inequality can be written as \[\frac{d}{dt}\left[\frac{1}{2}\ln|\cos\Phi_{*}-1|+\frac{1}{6}\ln(\cos\Phi_{*}+1)-\frac{2}{3}\ln\left|\cos\Phi_{*}-\frac{1}{2}\right|\right]\leq-\frac{1}{\pi^{2}}.\] Hence, with \(\Phi_{*}(0)<\frac{\pi}{4}\), \[\frac{(1-\cos\Phi_{*})(\cos\Phi_{*}+1)^{1/3}}{(\cos\Phi_{*}-\frac{1}{2})^{4/3}}\leq\frac{(1-\cos\Phi_{*}(0))(\cos\Phi_{*}(0)+1)^{1/3}}{(\cos\Phi_{*}(0)-\frac{1}{2})^{4/3}}e^{-2t/\pi^{2}},\] and thus \[1-\cos\Phi_{*}(t)\leq C\Phi_{*}(0)^{2}e^{-2t/\pi^{2}},\] where \(C>0\) is a universal constant. Then the desired decay estimate follows. The last claim follows from the monotonicity of \(\Phi_{*}(t)\), Lemma 3.3, and the time-invariance of \(R_{X}\). ### Remark on the case of general elasticity Although this paper is mainly focused on the 2-D Peskin problem with a Hookean string, we make a detour in this subsection to show that the maximum principle for \(|\Phi|\) also holds for strings with more general elasticity laws. Some new notations and equations will be introduced here, but we note that they should only apply within this subsection and Remark 4.2 below. Assume that in (1.1), \(F_{X}(s,t)\) is given by \[F_{X}(s,t)=\partial_{s}\big{[}q(s,t)X^{\prime}(s,t)\big{]}\] for some positive function \(q(s,t)\) that is as smooth as \(X^{\prime}(s,t)\) (cf. (1.4) and (A1)-(A3) in Section 2.1). Then (1.9) should be modified to become \[\partial_{t}X(s)=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\text{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2}}\right]\big{(}X(s^{\prime})-X(s)\big{)}q(s^{\prime})\,ds^{\prime}. \tag{3.4}\] Suppose that \(X=X(s,t)\) solves this equation with \(X(s,0)=X_{0}(s)\), and satisfies the assumptions in Section 2.1. We still define \(\Phi(s_{1},s_{2},t)\) by (1.12), and let \(\Phi_{*}(t)\) be defined as in Proposition 3.1. Then we claim that, if \(\Phi_{*}(0)<\frac{\pi}{4}\), \(\Phi_{*}(t)\) should still be a non-increasing Lipschitz function in \(t\). This claim can be justified by simply following the same argument as above, but we shall present a different proof that avoids lengthy calculation. Fix \(t\). Let \[k_{0}:=2\pi\left(\int_{\mathbb{T}}\frac{1}{q(s)}\,ds\right)^{-1}>0,\] and define for all \(s\in[0,2\pi)\), \[\xi(s):=\int_{0}^{s}\frac{k_{0}}{q(s^{\prime})}\,ds^{\prime}.\] Thanks to the assumptions on \(q\) (see (A1)-(A3)), \(s\mapsto\xi(s)\) is a strictly increasing \(C^{4}\)-bijection from \([0,2\pi)\) to itself, so it can be further understood as a \(C^{4}\)-diffeomorphism from \(\mathbb{T}\) to \(\mathbb{T}\). Then we define \(Y(\xi,t)\) such that \[Y(\xi(s),t)\equiv X(s,t)\text{ for all }s\in\mathbb{T}.\] \(Y\) is well-defined, with \(Y(\mathbb{T},t)=X(\mathbb{T},t)\) and \(Y^{\prime}(\xi(s))=k_{0}^{-1}q(s)X^{\prime}(s)\). 
In addition, by (3.4) and change of variables, \[\partial_{t}X(s,t) =\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\text{Re}\left[\frac{k _{0}^{-1}q(s^{\prime})^{2}X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2} }\right]\left(X(s^{\prime})-X(s)\right)\frac{k_{0}}{q(s^{\prime})}\,ds^{\prime}\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\text{Re}\left[\frac{k _{0}Y^{\prime}(\xi(s^{\prime}))^{2}}{(Y(\xi(s^{\prime}))-Y(\xi(s)))^{2}} \right]\left(Y(\xi(s^{\prime}))-Y(\xi(s))\right)d\xi(s^{\prime})\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\text{Re}\left[\frac{k _{0}Y^{\prime}(\eta)^{2}}{(Y(\eta)-Y(\xi(s)))^{2}}\right]\left(Y(\eta)-Y(\xi( s))\right)d\eta.\] Note that given the assumptions on \(q\), all the principal value integrals above can be justified. If we let \(Y(\xi,\tau)\) solve the Peskin problem with the Hookean elasticity (cf. (1.9)), \[\partial_{\tau}Y(\xi,\tau)=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\text{Re }\left[\frac{Y^{\prime}(\eta,\tau)^{2}}{(Y(\eta,\tau)-Y(\xi,\tau))^{2}}\right] \left(Y(\eta,\tau)-Y(\xi,\tau)\right)d\eta,\] then for the given \(t\), \[\partial_{t}X(s,t)=k_{0}\partial_{\tau}Y(\xi(s),t). \tag{3.5}\] Please be reminded that this may not hold for later times. Using this, we calculate that \[\text{Im}\frac{\partial_{t}X^{\prime}(s,t)}{X^{\prime}(s,t)}=\text {Im}\frac{k_{0}\partial_{\tau}Y^{\prime}(\xi(s),t)\xi^{\prime}(s)}{Y^{\prime}( \xi(s),t)\xi^{\prime}(s)}=k_{0}\,\left[\text{Im}\frac{\partial_{\tau}Y^{ \prime}(\xi,\tau)}{Y^{\prime}(\xi,\tau)}\right]\biggr{|}_{(\xi,\tau)=(\xi(s),t )},\] \[\text{Im}\frac{\partial_{t}(X(s_{1},t)-X(s_{2},t))}{X(s_{1},t)-X( s_{2},t)}=k_{0}\,\left[\text{Im}\frac{\partial_{\tau}(Y(\xi_{1},\tau)-Y(\xi_{2}, \tau))}{Y(\xi_{1},\tau)-Y(\xi_{2},\tau)}\right]\biggr{|}_{(\xi_{1},\xi_{2}, \tau)=(\xi(s_{1}),\xi(s_{2}),t)}. \tag{3.6}\] Define \(\Phi_{Y}=\Phi_{Y}(\xi_{1},\xi_{2},\tau)\) in terms of \(Y(\xi,\tau)\) by (1.12). Combining these identities with (3.2) yields \[\partial_{t}\Phi(s_{1},s_{2},t)=k_{0}\partial_{\tau}\Phi_{Y}(\xi(s_{1}),\xi(s_ {2}),t). \tag{3.7}\] Following the argument in Proposition 3.1, we conclude that \(\Phi_{*}(t)\) is a non-increasing Lipschitz function, which satisfies \[\Phi_{*}^{\prime}(t)\leq-\frac{k_{0}}{\pi^{2}}\big{(}\sin 2\Phi_{*}(t)-\sin \Phi_{*}(t)\big{)}\] for almost all \(t\). Given further information of \(q(s,t)\), a quantitative decay estimate for \(\Phi_{*}(t)\) may also be established, but we omit the discussion here. It may be of independent interest that, by Lemma 3.1, (3.7), and the definitions of \(Y\), \(\Phi_{Y}\), and \(\theta_{Y}\), \(\Phi(s_{1},s_{2},t)\) in this case satisfies \[\partial_{t}\Phi(s_{1},s_{2})=\frac{1}{4\pi}\text{p.v.}\int_{ \mathbb{T}} \left\{\frac{|X^{\prime}(s^{\prime})|^{2}(\sin\theta(s^{\prime},s_{1},s_{2} )-\sin 2\Phi(s^{\prime},s_{1}))}{|X(s^{\prime})-X(s_{1})|^{2}}\right.\] \[\left.+\frac{|X^{\prime}(s^{\prime})|^{2}(\sin\theta(s^{\prime},s_ {1},s_{2})-\sin 2\Phi(s^{\prime},s_{2}))}{|X(s^{\prime})-X(s_{2})|^{2}}\right\}q(s^{\prime})\,ds^ {\prime}.\] ### Geometric characterizations of \(X(\mathbb{T},t)\) It is conceivable that the bound for \(\Phi_{*}(t)\) should provide useful geometric information of the curve \(X(\mathbb{T},t)\). We prove a few in the following proposition. The time-dependence is omitted. **Proposition 3.2**.: _Suppose \(\Phi_{*}<\pi/2\). Recall that \(R_{X}\) and \(d_{*}\) were defined in (1.11) and (2.6), respectively. Then the following holds._ 1. \(d_{*}\geq 2R_{X}\)_._ 2. 
_There exists_ \(z_{*}\in\mathbb{C}\setminus X(\mathbb{T})\)_, such that_ \(X(\mathbb{T})\) _can be parameterized in the polar coordinate centered at_ \(z_{*}\)_, i.e., there exists_ \(\rho=\rho(\omega)\) _defined on_ \(\mathbb{T}\)_, such that_ \[X(s)-z_{*}=\rho(\omega(s))e^{i\omega(s)},\text{ where }\omega(s)=\arg(X(s)-z_{*})\in\mathbb{T}=[-\pi,\pi).\] _Here_ \(\omega(\cdot)\) _is a bijection from_ \(\mathbb{T}\) _to_ \(\mathbb{T}\)_, and it is orientation-preserving in the sense that, as_ \(s\) _increases from_ \(-\pi\) _to_ \(\pi\) _in_ \(\mathbb{T}\)_,_ \(\omega(s)\) _goes through_ \(\mathbb{T}\) _in the same positive (counter-clockwise) direction. Moreover,_ \(|\rho^{\prime}(\omega)/\rho(\omega)|\leq\tan\Phi_{*}\) _and_ (3.8) \[\rho(\omega)\in\left[\frac{d_{*}}{2}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2} \right),\,\frac{d_{*}}{2}\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right) \right].\] _As a result,_ \[R_{X}\geq\frac{d_{*}}{2}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right).\] 3. _Let_ \(d(s):=\sup_{s^{\prime}\in\mathbb{T}}|X(s)-X(s^{\prime})|\)_. For all_ \(s\in\mathbb{T}\)_,_ (3.9) \[d(s)\geq d_{*}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right)\geq 2R_{X} \tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right).\] 4. _The curve_ \(X(\mathbb{T})\) _satisfies a chord-arc condition, i.e., for all_ \(s,s^{\prime}\in\mathbb{T}\)_,_ (3.10) \[|X(s)-X(s^{\prime})|\geq d_{*}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2} \right)\sin\left[\frac{L(s,s^{\prime})}{d_{*}}(1-\sin\Phi_{*})\right].\] _Here,_ \(L(s,s^{\prime})\) _denotes the length of the shorter arc between_ \(X(s)\) _and_ \(X(s^{\prime})\)_. For_ \(s\leq s^{\prime}\leq s+2\pi\) _without loss of generality,_ \[L(s,s^{\prime})=L(s^{\prime},s)=\min\left\{\int_{s}^{s^{\prime}}|X^{\prime}(s^ {\prime\prime})|\,ds^{\prime\prime},\,\int_{s^{\prime}}^{s+2\pi}|X^{\prime}(s ^{\prime\prime})|\,ds^{\prime\prime}\right\}.\] _It satisfies_ (3.11) \[L(s,s^{\prime})\leq\frac{\pi d_{*}}{2(1-\sin\Phi_{*})}.\] _In particular, if_ \(\Phi_{*}\leq\pi/4\)_, there exists some universal constant_ \(C>0\)_, such that_ \[|X(s)-X(s^{\prime})|\geq CL(s,s^{\prime}).\] 5. _Recall that_ \(\mathcal{L}(t)\) _was defined in (_2.4_). If_ \(\Phi_{*}\leq\pi/4\)_, for any_ \(s\in\mathbb{T}\)_,_ (3.12) \[c_{1}R_{X}\leq 2d(s)\leq 2d_{*}\leq\mathcal{L}(t)\leq c_{2}d_{*}\leq c_{3}R_{X},\] _where_ \(c_{j}\)__\((j=1,2,3)\) _are universal constants._ Proof.: By definition, \(d_{*}\) is also the diameter of the convex hull of \(X(\mathbb{T})\). By the isodiametric inequality [26], \(\pi R_{X}^{2}\leq\frac{1}{4}\pi d_{*}^{2}\), which gives \(d_{*}\geq 2R_{X}\). Assume that \(d_{*}\) is attained at some \(s_{1},s_{2}\in\mathbb{T}\), i.e., \(d_{*}=|X(s_{1})-X(s_{2})|\). By the maximality of \(|X(s_{1})-X(s_{2})|^{2}\), \[\big{|}\arg X^{\prime}(s_{j})-\arg(X(s_{1})-X(s_{2}))\big{|}=\frac{\pi}{2}, \quad j=1,2.\] Since \(|\Phi(s_{1},s_{2})|<\pi/2\), it must hold \(|\arg X^{\prime}(s_{1})-\arg X^{\prime}(s_{2})|=\pi\) (as otherwise \(|\Phi(s_{1},s_{2})|=\pi\)) and \[\arg\left[\frac{X^{\prime}(s_{1})}{X(s_{1})-X(s_{2})}\right]=-\arg\left[\frac{X ^{\prime}(s_{2})}{X(s_{1})-X(s_{2})}\right]\in\left\{\frac{\pi}{2},\,-\frac{ \pi}{2}\right\}. \tag{3.13}\] Denote the straight line that goes through \(X(s_{1})\) and \(X(s_{2})\) by \(l\). We claim that \(X(\mathbb{T})\cap l=\{X(s_{1}),\,X(s_{2})\}\). Indeed, if not, suppose \(X(s)\) lies on this line \((s\neq s_{1},s_{2})\). 
Since \[\frac{J(s_{1},s)}{J(s_{2},s)}=\frac{X^{\prime}(s_{1})}{X^{\prime}(s_{2})}\cdot \frac{(X(s)-X(s_{2}))^{2}}{(X(s)-X(s_{1}))^{2}},\text{ and }\frac{(X(s)-X(s_{2}))^{2}}{(X(s)-X(s_{1}))^{2}}>0,\] we find \(\Phi(s_{1},s)-\Phi(s_{2},s)=\arg[J(s_{1},s)/J(s_{2},s)]=\arg X^{\prime}(s_{1}) -\arg X^{\prime}(s_{2})=\pi\) in the modulo \(2\pi\). This contradicts with \(|\Phi(s_{1},s)|<\pi/2\) and \(|\Phi(s_{2},s)|<\pi/2\). Let \(z_{*}:=(X(s_{1})+X(s_{2}))/2\not\in X(\mathbb{T})\). In the rest of the proof, we want to show that the curve \(X(\mathbb{T})\) can be parameterized in the polar coordinate centered at \(z_{*}\). We proceed in three steps. _Step 1_.: Take an arbitrary \(s\in\mathbb{T}\). We first bound \(|X(s)-z_{*}|\). By the definition of \(\Phi\), \[2\big{|}\arg(X(s)-X(s_{1}))-\arg(X(s)-X(s_{2}))\big{|}\] \[=\big{|}\arg X^{\prime}(s_{1})-\arg X^{\prime}(s_{2})+\Phi(s,s_{2} )-\Phi(s,s_{1})\big{|}\in\big{[}\pi-2\Phi_{*},\,\pi+2\Phi_{*}\big{]}.\] Hence, \[\big{|}\!\cos\big{[}\arg(X(s)-X(s_{1}))-\arg(X(s)-X(s_{2}))\big{]}\big{|}\leq \sin\Phi_{*}.\] By virtue of the law of cosines, \[d_{*}^{2} =|X(s_{1})-X(s_{2})|^{2}\leq\,(1+\sin\Phi_{*})\,\big{[}|X(s)-X(s_ {1})|^{2}+|X(s)-X(s_{2})|^{2}\big{]},\] \[d_{*}^{2} =|X(s_{1})-X(s_{2})|^{2}\geq(1-\sin\Phi_{*})\big{[}|X(s)-X(s_{1}) |^{2}+|X(s)-X(s_{2})|^{2}\big{]}.\] Then using the parallelogram identity, \[2|X(s)-X(s_{1})|^{2}+2|X(s)-X(s_{2})|^{2}=d_{*}^{2}+4|X(s)-z_{*}|^{2},\] we obtain that \[\frac{d_{*}}{2}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right)\leq|X(s)-z_{ *}|\leq\frac{d_{*}}{2}\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right). \tag{3.14}\] _Step 2_.: Next we bound the angle between \(X^{\prime}(s)\) and \((X(s)-z_{*})\) by proving that \[\left|\cos\left(\arg\left[\frac{X^{\prime}(s)}{X(s)-z_{*}}\right]\right)\right| \leq\sin\Phi_{*}. \tag{3.15}\] For convenience, denote \(z:=X(s)\) and let \[\psi(w):=\ln\left(\frac{w-X(s_{1})}{w-X(s_{2})}\right),\quad w\in\mathbb{C} \setminus l.\] \(\psi\) is well-defined as a single-valued function on \(\mathbb{C}\setminus l\), with \(\operatorname{Im}\psi(w)\in(-\pi,\pi)\setminus\{0\}\). Define \[\zeta:=\overline{\psi^{\prime}(z)}=\overline{\left(\frac{X(s_{1})-X(s_{2})}{( z-X(s_{1}))(z-X(s_{2}))}\right)}.\] It is worth noting that \(\zeta\) gives the tangent direction of the level set of \(\operatorname{Im}\psi(w)\) at the point \(z\), because \(\operatorname{Im}(\zeta\psi^{\prime}(z))=0\). Then we derive that \[\Phi(s,s_{1}) = \arg\left[\frac{X^{\prime}(s)}{\zeta}\cdot\frac{X^{\prime}(s_{1}) }{X(s_{1})-X(s_{2})}\cdot\frac{z-X(s_{2})}{z-X(s_{1})}\right]\] \[= \arg\left[\frac{X^{\prime}(s)}{\zeta}\right]+\arg\left[\frac{X^{ \prime}(s_{1})}{X(s_{1})-X(s_{2})}\right]-\operatorname{Im}\psi(z),\] and similarly, \[\Phi(s,s_{2})=\arg\left[\frac{X^{\prime}(s)}{\zeta}\right]+\arg\left[\frac{X^{ \prime}(s_{2})}{X(s_{1})-X(s_{2})}\right]+\operatorname{Im}\psi(z).\] These equalities should be understood in the modulo \(2\pi\). Take sines and cosines on both sides of them. 
In view of (3.13), \(|\sin\Phi(s,s_{j})|\leq\sin\Phi_{*}\), and \(\cos\Phi(s,s_{j})\geq\cos\Phi_{*}>0\), we find that \[\left|\cos\left(\arg\left[\frac{X^{\prime}(s)}{\zeta}\right]\pm\operatorname{Im}\psi(z)\right)\right|\leq\sin\Phi_{*},\] and \[\sin\left(\arg\left[\frac{X^{\prime}(s)}{\zeta}\right]\pm\operatorname{Im}\psi(z)\right)\text{ have opposite signs}.\] For convenience, denote \[\gamma_{1}:=\arg\left[\frac{X^{\prime}(s)}{\zeta}\right],\quad\gamma_{2}:=\operatorname{Im}\psi(z).\] Then they further imply that \[|\cos\gamma_{1}\cos\gamma_{2}|+|\sin\gamma_{1}\sin\gamma_{2}|\leq\sin\Phi_{*}, \tag{3.16}\] \[|\sin\gamma_{1}\cos\gamma_{2}|-|\cos\gamma_{1}\sin\gamma_{2}|<0. \tag{3.17}\] We also derive that \[\gamma_{3}:=\arg\left[\frac{z-z_{*}}{\zeta}\right]=\arg\left[[(z-X(s_{1}))+(z-X(s_{2}))]\psi^{\prime}(z)\right]=\arg\left[\frac{z-X(s_{2})}{z-X(s_{1})}-\frac{z-X(s_{1})}{z-X(s_{2})}\right]=\arg\left[e^{-\psi(z)}-e^{\psi(z)}\right].\] Hence, \[|\cos\gamma_{3}|=\frac{|\operatorname{Re}(e^{-\psi(z)}-e^{\psi(z)})|}{|e^{-\psi(z)}-e^{\psi(z)}|}\leq\frac{(e^{|\operatorname{Re}\psi(z)|}-e^{-|\operatorname{Re}\psi(z)|})|\cos\operatorname{Im}\psi(z)|}{e^{|\operatorname{Re}\psi(z)|}-e^{-|\operatorname{Re}\psi(z)|}}=|\cos\gamma_{2}|,\] which also gives \(|\sin\gamma_{2}|\leq|\sin\gamma_{3}|\). Using this and (3.17) yields \[|\sin\gamma_{1}\cos\gamma_{3}|\leq|\sin\gamma_{1}\cos\gamma_{2}|<|\cos\gamma_{1}\sin\gamma_{2}|\leq|\cos\gamma_{1}\sin\gamma_{3}|.\] Hence, \[|\sin\gamma_{1}\cos\gamma_{3}|-|\cos\gamma_{1}\sin\gamma_{3}|\leq|\sin\gamma_{1}\cos\gamma_{2}|-|\cos\gamma_{1}\sin\gamma_{2}|<0.\] Since for \(j=2,3\), \[1=\left(|\cos\gamma_{1}\cos\gamma_{j}|+|\sin\gamma_{1}\sin\gamma_{j}|\right)^{2}+\left(|\sin\gamma_{1}\cos\gamma_{j}|-|\cos\gamma_{1}\sin\gamma_{j}|\right)^{2},\] we use (3.16) to obtain that \[|\cos\gamma_{1}\cos\gamma_{3}|+|\sin\gamma_{1}\sin\gamma_{3}|\leq|\cos\gamma_{1}\cos\gamma_{2}|+|\sin\gamma_{1}\sin\gamma_{2}|\leq\sin\Phi_{*}.\] Since \[\arg\left[\frac{X^{\prime}(s)}{X(s)-z_{*}}\right]=\arg\left[\frac{X^{\prime}(s)}{\zeta}\right]-\arg\left[\frac{z-z_{*}}{\zeta}\right]=\gamma_{1}-\gamma_{3},\] (3.15) follows immediately. _Step 3_.: With \(\omega\in\mathbb{T}=[-\pi,\pi)\), let \(l_{\omega}:=\{z_{*}+te^{i\omega}:\,t\geq 0\}\) be the ray emanating from \(z_{*}\) with the directional angle \(\omega\). The fact that \(X(\mathbb{T})\cap l=\{X(s_{1}),\,X(s_{2})\}\) implies that \(l_{\omega}\cap X(\mathbb{T})\) contains exactly one element when \(\omega=\arg(X(s_{j})-z_{*})\) (\(j=1,2\)). In addition, (3.15) implies that the number of elements of \(l_{\omega}\cap X(\mathbb{T})\) changes continuously as \(\omega\) varies, and thus it must be identically \(1\). Therefore, \(X(\mathbb{T})\) can be parameterized in the polar coordinate centered at \(z_{*}\). More precisely, there exists \(\rho=\rho(\omega)\) defined on \(\mathbb{T}\) such that \(X(s)-z_{*}=\rho(\omega(s))e^{i\omega(s)}\), where \(\omega(s)=\arg(X(s)-z_{*})\in\mathbb{T}\). Here \(\omega(\cdot)\) is bijective from \(\mathbb{T}\) to \(\mathbb{T}\). Since \(X(\cdot,t)\) is assumed to parameterize the curve \(X(\mathbb{T},t)\) in the counter-clockwise direction, the map \(\omega(s)\) is orientation-preserving. (3.15) implies that \(|\rho^{\prime}(\omega)/\rho(\omega)|\leq\tan\Phi_{*}\). Moreover, (3.14) gives (3.8). 
The estimate \(R_{X}\geq\frac{d_{*}}{2}\tan(\frac{\pi}{4}-\frac{\Phi_{*}}{2})\) follows from the fact that the disk of radius \(\frac{d_{*}}{2}\tan(\frac{\pi}{4}-\frac{\Phi_{*}}{2})\) centered at \(z_{*}\) is contained in the interior of \(X(\mathbb{T})\). (3.9) follows from part (i) and part (ii). For arbitrary \(s,s^{\prime}\in\mathbb{T}\), \[|X(s)-X(s^{\prime})|\geq d_{*}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2} \right)\left|\sin\frac{\omega(s)-\omega(s^{\prime})}{2}\right|.\] Indeed, by the law of cosines, with \(\rho_{\dagger}:=\min\{\rho(\omega(s)),\rho(\omega(s^{\prime}))\}\), \[|X(s)-X(s^{\prime})| =\,\left[\rho(\omega(s))^{2}+\rho(\omega(s^{\prime}))^{2}-2\rho( \omega(s))\rho(\omega(s^{\prime}))\cos\left(\omega(s)-\omega(s^{\prime}) \right)\right]^{1/2}\] \[\geq\,\left[\rho_{\dagger}^{2}+\rho_{\dagger}^{2}-2\rho_{\dagger} ^{2}\cos\left(\omega(s)-\omega(s^{\prime})\right)\right]^{1/2}\] \[\geq d_{*}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right)\left| \sin\frac{\omega(s)-\omega(s^{\prime})}{2}\right|.\] On the other hand, assume \(\omega(s^{\prime})\in[\omega(s),\omega(s)+\pi]\) without loss of generality (otherwise, one may consider \(\omega(s)\in[\omega(s^{\prime}),\omega(s^{\prime})+\pi]\) instead). By (3.8) and (3.15), \[\cos\Phi_{*}L(s,s^{\prime}) \leq\,\cos\Phi_{*}\int_{\omega(s)}^{\omega(s^{\prime})}\left(\rho( \eta)^{2}+\rho^{\prime}(\eta)^{2}\right)^{1/2}d\eta\] \[\leq\,\int_{\omega(s)}^{\omega(s^{\prime})}\rho(\eta)\,d\eta\leq \frac{d_{*}}{2}\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right)|\omega(s)- \omega(s^{\prime})|_{\mathbb{T}},\] where \(|\omega(s)-\omega(s^{\prime})|_{\mathbb{T}}\in[0,\pi]\) denotes the distance between \(\omega(s)\) and \(\omega(s^{\prime})\) along \(\mathbb{T}\). Then (3.10) and (3.11) follow. Finally, (3.12) follows from parts (i)-(iv) and the fact that \(\mathcal{L}(t)=2\sup_{s^{\prime}}L(s,s^{\prime})\) for arbitrary \(s\in\mathbb{T}\). If we assume \(\Phi_{*}(0)<\pi/4\), Proposition 3.2 implies that \(X_{0}(\mathbb{T})\) can have at most a medium-size deviation from being a perfect circle. The following remark may be viewed a converse statement in some sense, which says that if \(X_{0}(s)\) is \(O(1)\)-close to an equilibrium in the \(C^{1}(\mathbb{T})\)-seminorm up to suitable reparameterizations, then the corresponding \(\Phi_{0}\) will satisfy \(|\Phi_{0}|<\pi/4\). _Remark 3.1_.: There exists a universal constant \(c>0\), such that, if \(X_{0}\in C^{1}(\mathbb{T})\) satisfies that \[\left\|X_{0}(s)-(x_{0}+Re^{i(s+\xi_{0})})\right\|_{\dot{C}^{1}(\mathbb{T})} \leq cR\] for some \(x_{0}\in\mathbb{C}\), \(R>0\), and \(\xi_{0}\in\mathbb{T}\), then \(\Phi_{0}(s,s^{\prime})\) defined in terms of \(X_{0}\) will satisfy \(|\Phi_{0}(s,s^{\prime})|<\pi/4\) for all \(s,s^{\prime}\in\mathbb{T}\). Here \(R\) does not have to coincide with \(R_{X}\). More generally, if we additionally let \(\psi:\mathbb{T}\to\mathbb{T}\) be an arbitrary suitably smooth bijective diffeomorphism, and define \(\tilde{\Phi}_{0}(s,s^{\prime})\) in terms of the reparameterized configuration \(\tilde{X}_{0}=X_{0}\circ\psi\), then \(|\tilde{\Phi}_{0}(s,s^{\prime})|<\pi/4\) still holds. Indeed, the second claim follows from the first one and the fact that \(\tilde{\Phi}_{0}(s,s^{\prime})=\Phi_{0}(\psi(s),\psi(s^{\prime}))\) for all \(s,s^{\prime}\in\mathbb{T}\). To show the first claim, we denote \(Y_{0}(s):=x_{0}+Re^{i(s+\xi_{0})}\). 
Then \(|Y_{0}^{\prime}(s)|\equiv R\), and for distinct \(s,s^{\prime}\in\mathbb{T}\), \[\arg\left[\frac{Y_{0}^{\prime}(s)Y_{0}^{\prime}(s^{\prime})}{(Y_{0}(s)-Y_{0}(s ^{\prime}))^{2}}\right]\equiv 0.\] Assume \(c\leq 1/2\). Then the facts \(|X_{0}^{\prime}(s)-Y_{0}^{\prime}(s)|\leq cR\) and \(|Y_{0}^{\prime}(s)|=R\) imply that \[\big{|}\arg\big{(}X_{0}^{\prime}(s)/Y_{0}^{\prime}(s)\big{)}\big{|}\leq\arcsin c.\] For distinct \(s,s^{\prime}\in\mathbb{T}\), we assume \(s<s^{\prime}\leq s+\pi\) without loss of generality. Then \[\left|\frac{Y_{0}(s^{\prime})-Y_{0}(s)}{s^{\prime}-s}\right|=\frac{\left|2R \sin(\frac{s^{\prime}-s}{2})\right|}{|s^{\prime}-s|_{\mathbb{T}}}\geq\frac{2} {\pi}R,\] and \[\left|\frac{X_{0}(s^{\prime})-X_{0}(s)}{s^{\prime}-s}-\frac{Y_{0}(s^{\prime}) -Y_{0}(s)}{s^{\prime}-s}\right|\leq\frac{1}{|s^{\prime}-s|_{\mathbb{T}}}\int _{s}^{s^{\prime}}\left|X_{0}^{\prime}(s^{\prime\prime})-Y_{0}^{\prime}(s^{ \prime\prime})\right|ds^{\prime\prime}\leq cR.\] Hence, \[\left|\arg\left[\frac{Y_{0}(s^{\prime})-Y_{0}(s)}{X_{0}(s^{\prime})-X_{0}(s)} \right]\right|=\left|\arg\left[\frac{\frac{1}{s^{\prime}-s}(Y_{0}(s^{\prime}) -Y_{0}(s))}{\frac{1}{s^{\prime}-s}(X_{0}(s^{\prime})-X_{0}(s))}\right]\right| \leq\arcsin\frac{\pi c}{2},\] which implies \[|\Phi_{0}(s,s^{\prime})|=\left|\arg\left[\frac{X_{0}^{\prime}(s)X_{0}^{\prime} (s^{\prime})}{Y_{0}^{\prime}(s)Y_{0}^{\prime}(s^{\prime})}\left(\frac{Y_{0}(s ^{\prime})-Y_{0}(s)}{X_{0}(s^{\prime})-X_{0}(s)}\right)^{2}\right]\right|\leq 2 \arcsin c+2\arcsin\frac{\pi c}{2}.\] Assuming \(c\) to be suitably small but universal, we can achieve that \(|\Phi_{0}(s,s^{\prime})|<\pi/4\). ### Strings with a circular shape In Proposition 3.1, we show that if the initial curve \(X_{0}(\mathbb{T})\) is a circle of radius \(R_{X}\) (see (1.11)), then for all \(t>0\) at which the solution \(X(s,t)\) is well-defined, \(X(\mathbb{T},t)\) should still be a circle of radius \(R_{X}\), possibly centered at a different point. Given that, it seems feasible to pursue a more precise characterization of the solution starting from such initial data with a circular shape. We thus make the following derivation. Under the assumption that \(\tilde{X}(s,t)\neq 0\) for all \((s,t)\), we consider the following normalized problem (we still use the notation of complex numbers): \[\partial_{t}\tilde{X}(s)=\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left[ \frac{\tilde{X}^{\prime}(s^{\prime})^{2}}{(\tilde{X}(s^{\prime})-\tilde{X}(s))^ {2}}+\frac{\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}}{(\overline{\tilde{X}(s ^{\prime})}-\overline{\tilde{X}(s)})^{2}}\right]\big{(}\tilde{X}(s^{\prime})- \tilde{X}(s)\big{)}\,ds^{\prime}-v(t), \tag{3.18}\] with \[v(t)=\frac{1}{8\pi}\int_{\mathbb{T}}\frac{\tilde{X}^{\prime}(s^{\prime})^{2}}{ \tilde{X}(s^{\prime})}\,ds^{\prime}. \tag{3.19}\] It is clear that, if \((\tilde{X},v)\) solves (3.18)-(3.19), then \(\tilde{X}(s)+\int_{0}^{t}v(\tau)\,d\tau\) satisfies (1.9). In addition, we find \(\tilde{X}\) enjoys the following property. **Lemma 3.5**.: _Suppose \(\tilde{X}_{0}(s)\) satisfies that \(\tilde{X}_{0}(\mathbb{T})\) is a circle of radius \(R_{X}\) centered at the origin, and \(|\tilde{X}^{\prime}_{0}(s)|>0\). Let \(\tilde{X}(s,t)\) be a solution to (3.18)-(3.19) in \(\mathbb{T}\times[0,T]\) corresponding to the initial condition \(\tilde{X}(s,0)=\tilde{X}_{0}(s)\), satisfying the assumptions (A1)-(A3) with \(X\) there replaced by \(\tilde{X}\) and that \(|\tilde{X}(s,t)|>0\). 
Then \(\tilde{X}(\mathbb{T},t)\) is the circle of radius \(R_{X}\) centered at the origin for all \(t\in[0,T]\)._ _Remark 3.2_.: In view of this, \(X(\mathbb{T},t)\) should be a circle for all time, and \(v(t)\) in (3.19) can be interpreted as the velocity of its center. Proof.: Thanks to the assumptions on \(\tilde{X}\), it suffices to prove \(|\tilde{X}(s,t)|\equiv R_{X}\) for all \((s,t)\in\mathbb{T}\times[0,T]\). Plugging (3.19) into (3.18) yields that \[\partial_{t}\tilde{X}(s)=\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left[\frac{\tilde{X}^{\prime}(s^{\prime})^{2}\tilde{X}(s)}{(\tilde{X}(s^{\prime})-\tilde{X}(s))\tilde{X}(s^{\prime})}+\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}\cdot\frac{\tilde{X}(s^{\prime})-\tilde{X}(s)}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})^{2}}\right]ds^{\prime}. \tag{3.20}\] Since \[\frac{\tilde{X}(s^{\prime})-\tilde{X}(s)}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})^{2}}=-\frac{\tilde{X}(s)}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})\overline{\tilde{X}(s^{\prime})}}+\frac{|\tilde{X}(s^{\prime})|^{2}-|\tilde{X}(s)|^{2}}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})^{2}\overline{\tilde{X}(s^{\prime})}},\] (3.20) becomes \[\begin{split}\frac{\partial_{t}\tilde{X}(s)}{\tilde{X}(s)}&=\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left[\frac{\tilde{X}^{\prime}(s^{\prime})^{2}}{(\tilde{X}(s^{\prime})-\tilde{X}(s))\tilde{X}(s^{\prime})}-\frac{\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})\overline{\tilde{X}(s^{\prime})}}\right]ds^{\prime}\\ &\quad+\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}\cdot\frac{|\tilde{X}(s^{\prime})|^{2}-|\tilde{X}(s)|^{2}}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})^{2}\overline{\tilde{X}(s^{\prime})}\tilde{X}(s)}\,ds^{\prime}.\end{split} \tag{3.21}\] Hence, \[\begin{split}\frac{\partial_{t}|\tilde{X}(s)|}{|\tilde{X}(s)|}&=\mathrm{Re}\left[\frac{\partial_{t}\tilde{X}(s)}{\tilde{X}(s)}\right]=\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[\frac{\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}(|\tilde{X}(s^{\prime})|^{2}-|\tilde{X}(s)|^{2})}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})^{2}\overline{\tilde{X}(s^{\prime})}\tilde{X}(s)}\right]ds^{\prime}\\ &=\frac{1}{8\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[\frac{\tilde{X}^{\prime}(s^{\prime})^{2}\tilde{X}(s)}{(\tilde{X}(s^{\prime})-\tilde{X}(s))^{2}\tilde{X}(s^{\prime})}\right]\left(\frac{|\tilde{X}(s^{\prime})|^{2}}{|\tilde{X}(s)|^{2}}-1\right)ds^{\prime}.\end{split} \tag{3.22}\] Define for \(s\neq s^{\prime}\) and \(t\in[0,T]\) that \[B(s,s^{\prime},t):=\mathrm{Re}\left[\frac{\tilde{X}^{\prime}(s^{\prime},t)^{2}\tilde{X}(s,t)}{(\tilde{X}(s^{\prime},t)-\tilde{X}(s,t))^{2}\tilde{X}(s^{\prime},t)}\right].\] Let \[T_{*}:=\sup\left\{t\in[0,T]:\,B(s,s^{\prime},\tau)\geq 0\text{ for all }s\neq s^{\prime}\text{ and }\tau\in[0,t)\right\}.\] We claim that \(T_{*}>0\). Suppose not. Then there exists a sequence \(\{(s_{k},s^{\prime}_{k},t_{k})\}_{k=1}^{\infty}\), such that \(t_{k}\to 0^{+}\) as \(k\to+\infty\), \(s_{k}\neq s^{\prime}_{k}\), and \(B(s_{k},s^{\prime}_{k},t_{k})<0\). Using the smoothness assumptions on \(\tilde{X}\), one can justify by Taylor expansion that there exists \(\delta>0\) depending on \(\tilde{X}\), such that \(B(s,s^{\prime},t)\geq 0\) whenever \(t\in[0,\delta]\) and \(|s-s^{\prime}|_{\mathbb{T}}\leq\delta\). Hence, up to a subsequence, we may assume that for some \(s,s^{\prime}\in\mathbb{T}\), \(s_{k}\to s\) and \(s^{\prime}_{k}\to s^{\prime}\) as \(k\to+\infty\), where \(|s-s^{\prime}|_{\mathbb{T}}\geq\delta\). 
By the continuity of \(\tilde{X}\), \[0\geq\lim_{k\to+\infty}B(s_{k},s^{\prime}_{k},t_{k})=B(s,s^{\prime},0)=\text{Re}\left[\frac{\tilde{X}^{\prime}_{0}(s^{\prime})\tilde{X}^{\prime}_{0}(s)}{(\tilde{X}_{0}(s^{\prime})-\tilde{X}_{0}(s))^{2}}\cdot\frac{\tilde{X}^{\prime}_{0}(s^{\prime})/\tilde{X}_{0}(s^{\prime})}{\tilde{X}^{\prime}_{0}(s)/\tilde{X}_{0}(s)}\right]\] \[=\frac{|\tilde{X}^{\prime}_{0}(s^{\prime})||\tilde{X}^{\prime}_{0}(s)|}{|\tilde{X}_{0}(s^{\prime})-\tilde{X}_{0}(s)|^{2}}\cdot\frac{|\tilde{X}^{\prime}_{0}(s^{\prime})|/|\tilde{X}_{0}(s^{\prime})|}{|\tilde{X}^{\prime}_{0}(s)|/|\tilde{X}_{0}(s)|}>0.\] In the last line, we used the assumptions on \(\tilde{X}_{0}\). This leads to a contradiction, so \(T_{*}>0\). Then for \(t\in[0,T_{*})\), we may use (3.22) and proceed as in the proof of Proposition 3.1 to show that \(\max_{s}|\tilde{X}(s,t)|\) does not increase in \(t\), and \(\min_{s}|\tilde{X}(s,t)|\) does not decrease in \(t\). That implies \(|\tilde{X}(s,t)|\equiv R_{X}\) for all \(s\in\mathbb{T}\) and \(t\in[0,T_{*})\). By the continuity of \(\tilde{X}\), \(|\tilde{X}(\cdot,T_{*})|\equiv R_{X}\). If \(T_{*}<T\), viewing \(T_{*}\) as the new initial time, we argue as before to find that \(B(s,s^{\prime},t)\) should stay non-negative beyond \(T_{*}\) at least for some time. This contradicts the definition of \(T_{*}\). Therefore, \(T_{*}=T\) and we complete the proof. Since \(|\tilde{X}(s)|\equiv R_{X}\), we can write \(\tilde{X}(s)=R_{X}e^{i\theta(s)}\) with slight abuse of notation, where \(\theta(s)\in\mathbb{R}\). Then (3.21) implies that \[\begin{split} i\partial_{t}\theta(s)&=\frac{\partial_{t}\tilde{X}(s)}{\tilde{X}(s)}=\frac{1}{8\pi}\text{p.v.}\int_{\mathbb{T}}\left[\frac{\tilde{X}^{\prime}(s^{\prime})^{2}}{(\tilde{X}(s^{\prime})-\tilde{X}(s))\tilde{X}(s^{\prime})}-\frac{\overline{\tilde{X}^{\prime}(s^{\prime})}^{2}}{(\overline{\tilde{X}(s^{\prime})}-\overline{\tilde{X}(s)})\overline{\tilde{X}(s^{\prime})}}\right]ds^{\prime}\\ &=\frac{i}{8\pi}\text{p.v.}\int_{\mathbb{T}}\theta^{\prime}(s^{\prime})^{2}\cot\frac{\theta(s^{\prime})-\theta(s)}{2}\,ds^{\prime}\\ &=\frac{i}{4\pi}\text{p.v.}\int_{\mathbb{R}}\frac{\theta^{\prime}(s^{\prime})^{2}}{\theta(s^{\prime})-\theta(s)}\,ds^{\prime}.\end{split} \tag{3.23}\] In the last equality, we extended \(\theta=\theta(s^{\prime})\) to the entire real line such that \(\theta(s^{\prime}+2\pi)=\theta(s^{\prime})+2\pi\) for all \(s^{\prime}\in\mathbb{R}\). If \(\theta\) is strictly increasing and suitably smooth in \(\mathbb{R}\), the \(\theta\)-equation is equivalent to the tangential Peskin problem in 2-D [12, Section 2.2]. Indeed, (3.23) is exactly the tangential Peskin problem in the Lagrangian coordinate. If we define \(f=f(x,t)\) for \(x\in\mathbb{R}\) and \(t>0\) such that \(f(\theta(s,t),t)=\theta^{\prime}(s,t)\), then \(f(\cdot,t)\) is \(2\pi\)-periodic on \(\mathbb{R}\), and \(f\) solves \[\partial_{t}f=\frac{1}{4}\big{(}\mathcal{H}f\cdot\partial_{x}f-f\cdot\partial_{x}\mathcal{H}f\big{)},\] where \[\mathcal{H}f(x):=\frac{1}{\pi}\text{p.v.}\int_{\mathbb{R}}\frac{f(y)}{x-y}\,dy=\frac{1}{2\pi}\text{p.v.}\int_{\mathbb{T}}f(y)\cot\frac{x-y}{2}\,dy.\] This is the tangential Peskin problem in the Eulerian coordinate. Therefore, we can apply the results in [12, Corollary 2.1] to obtain a global solution to (1.9) starting from initial data that has a circular shape but is not necessarily in equilibrium. 
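As a quick consistency check of the Eulerian form above (a minimal sketch, not used anywhere in the proofs), one can evaluate the right-hand side \(\frac{1}{4}(\mathcal{H}f\,\partial_{x}f-f\,\partial_{x}\mathcal{H}f)\) spectrally, using the Fourier multiplier \(-i\,\mathrm{sgn}(k)\) for \(\mathcal{H}\), which is the multiplier corresponding to the kernel \(\frac{1}{\pi}\mathrm{p.v.}\frac{1}{x-y}\) above. For the hand-computable test profile \(f=1+\varepsilon\cos x\) (an arbitrary choice) one finds \(\mathcal{H}f=\varepsilon\sin x\) and the right-hand side equals \(-\frac{\varepsilon}{4}\cos x-\frac{\varepsilon^{2}}{4}\) exactly, which the discrete evaluation reproduces up to round-off; note the damping of the oscillatory mode, consistent with the relaxation to a uniform parameterization in item (5) of Theorem 3.1 below.

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0, 1, ..., -1

def hilbert(f):
    """Periodic Hilbert transform: H(e^{ikx}) = -i*sign(k)*e^{ikx}."""
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(f)))

def ddx(f):
    """Spectral derivative d/dx on the 2*pi-periodic grid."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def rhs(f):
    """Right-hand side of d_t f = (1/4)(Hf * f_x - f * (Hf)_x)."""
    Hf = hilbert(f)
    return 0.25 * (Hf * ddx(f) - f * ddx(Hf))

# Convention check: H(cos x) = sin x for this kernel.
print(np.max(np.abs(hilbert(np.cos(x)) - np.sin(x))))               # ~1e-15

# Closed-form check for f = 1 + eps*cos x (see the lead-in paragraph).
eps = 0.3
f = 1.0 + eps * np.cos(x)
print(np.max(np.abs(rhs(f) + (eps / 4) * np.cos(x) + eps**2 / 4)))   # ~1e-16
```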
**Theorem 3.1**.: _Assume \(X_{0}\in H^{1}(\mathbb{T})\), such that the curve \(X_{0}(\mathbb{T})\) is a circle of radius \(R_{X}\) centered at \(x_{0}\in\mathbb{C}\). Suppose that, for a strictly increasing continuous function \(\theta_{0}:\mathbb{R}\to\mathbb{R}\) which satisfies \(\theta_{0}(s+2\pi)=\theta_{0}(s)+2\pi\) for all \(s\in\mathbb{R}\), it holds that \(X_{0}(s)=x_{0}+R_{X}e^{i\theta_{0}(s)}\) in the notation of complex numbers. Also assume that the inverse function of \(\theta_{0}\) on \([0,2\pi]\) is absolutely continuous. Then (1.9) and (1.10) admit a solution \(X=X(s,t)\) in \(\mathbb{T}\times[0,+\infty)\) in the following sense:_ 1. \(X(s,t)\) _is a classic solution to (_1.9_) in_ \(\mathbb{T}\times(0,+\infty)\)_, i.e., (_1.9_) holds pointwise in_ \(\mathbb{T}\times(0,+\infty)\)_._ 2. \(X(\cdot,t)\) _converges uniformly to_ \(X_{0}(\cdot)\) _as_ \(t\to 0^{+}\)_._ _More precisely, the solution is constructed as follows. Let \(\theta=\theta(s,t)\) be a solution to (cf. (3.23))_ \[\partial_{t}\theta(s,t)=-\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{R}}\frac{ \left(\partial_{s^{\prime}}\theta(s^{\prime},t)\right)^{2}}{\theta(s,t)- \theta(s^{\prime},t)}\,ds^{\prime},\quad\theta(s,0)=\theta_{0}(s)\] _in \(\mathbb{R}\times[0,+\infty)\), which is defined in Corollary 2.1 of [12] (with \(X\) and \(X_{0}\) there replaced by \(\theta\) and \(\theta_{0}\), respectively). Let (cf. (3.19))_ \[v(t):=-\frac{R_{X}}{8\pi}\int_{\mathbb{T}}e^{i\theta(s^{\prime},t)}\big{(} \partial_{s^{\prime}}\theta(s^{\prime},t)\big{)}^{2}\,ds^{\prime}. \tag{3.24}\] _Then_ \[X(s,t)=x(t)+R_{X}e^{i\theta(s,t)}\text{ with }x(t):=x_{0}+\int_{0}^{t}v(\tau) \,d\tau \tag{3.25}\] _gives the desired solution to (1.9) and (1.10). It has the following properties:_ 1. _For any_ \(t\geq 0\)_, the curve_ \(X(\mathbb{T},t)\) _is a circle of radius_ \(R_{X}\) _centered at_ \(x(t)\)_._ 2. \(X(s,t)\) _is smooth in_ \(\mathbb{T}\times(0,+\infty)\)_. For any_ \(\alpha\in(0,\frac{1}{2})\)_,_ \(X(s,t)\in C^{\alpha}(\mathbb{T}\times[0,+\infty))\)_._ 3. \(\|X(\cdot,t)\|_{\dot{H}^{1}(\mathbb{T})}=R_{X}\|\theta(\cdot,t)\|_{\dot{H}^{1 }(\mathbb{T})}\) _is non-increasing in_ \(t\in[0,+\infty)\)_._ 4. _For any_ \(t>0\)_,_ \(|X^{\prime}(\cdot,t)|\) _has a positive lower bound. As a result,_ \(X(\cdot,t)\) _satisfies the well-stretched condition, i.e., for some_ \(\lambda=\lambda(t)>0\)__ \[|X(s_{1},t)-X(s_{2},t)|\geq\lambda(t)|s_{1}-s_{2}|_{\mathbb{T}}\quad\forall\, s_{1},s_{2}\in\mathbb{T}.\] _In fact, we may choose_ \(\lambda(t)\) _to be strictly increasing in_ \(t\)_._ 5. _There exist_ \(x_{\infty}\in\mathbb{C}\) _and_ \(\xi_{\infty}\in\mathbb{T}\) _such that, as_ \(t\to+\infty\)_,_ \(X(\cdot,t)\) _converges uniformly to_ \(X_{\infty}(s):=x_{\infty}+R_{X}e^{i(s+\xi_{\infty})}\) _with an exponential rate. The exponential convergence also holds in_ \(H^{k}(\mathbb{T})\)_-norms with arbitrary_ \(k\in\mathbb{N}\)_._ Proof.: According to Corollary 2.1 of [12], \(\theta(s,t)\) constructed here is smooth in \(\mathbb{R}\times(0,+\infty)\). Besides, \(\|\theta(\cdot,t)\|_{\dot{H}^{1}(\mathbb{T})}\) is uniformly bounded for all \(t\), and \(\|\theta^{\prime}(\cdot,t)-1\|_{L^{2}(\mathbb{T})}\) decays to \(0\) exponentially as \(t\to+\infty\). 
Hence, by (3.24), \(v(t)\) is smooth in \((0,+\infty)\), and \[|v(t)|=\frac{R_{X}}{8\pi}\left|\int_{\mathbb{T}}e^{i\theta(s^{\prime},t)} \partial_{s^{\prime}}\theta(s^{\prime},t)\big{(}\partial_{s^{\prime}}\theta(s^ {\prime},t)-1\big{)}\,ds^{\prime}\right|\leq\frac{R_{X}}{8\pi}\|\theta\|_{\dot{H }^{1}(\mathbb{T})}\|\theta^{\prime}-1\|_{L^{2}(\mathbb{T})}.\] This implies that \(v(t)\) is uniformly bounded and decays exponentially as \(t\to+\infty\), so \(x(t)\) defined by (3.25) converges to some \(x_{\infty}\in\mathbb{C}\) exponentially. These facts together with the properties of \(\theta(s,t)\) established in [12, Corollary 2.1] imply the desired claims. ## 4. The Curvature \(\kappa(s)\) Let \(\kappa(s,t)\) denote the curvature of the curve \(X(\mathbb{T},t)\) at the point \(X(s,t)\). It is given by \[\kappa(s)=\frac{\mathrm{Im}[\overline{X^{\prime}(s)}X^{\prime\prime}(s)]}{|X^{ \prime}(s)|^{3}}=\frac{\mathrm{Im}[X^{\prime\prime}(s)/X^{\prime}(s)]}{|X^{ \prime}(s)|}. \tag{4.1}\] The sign convention of the curvature is that, if \(X(\mathbb{T})\) is a circle, \(\kappa(s)\) is positive. By the assumptions (A1)-(A3) on \(X\), \(\kappa(s,t)\in C^{1}(\mathbb{T}\times[0,T])\). In this section, we want to establish an extremum principle and a decay estimate for \(\kappa(s,t)\) under the condition that \(\Phi_{*}(0)<\pi/4\). The main result of this section is Proposition 4.1. We start from deriving the equation for \(\kappa(s)\). **Lemma 4.1**.: _The curvature \(\kappa(s,t)\) satisfies_ \[\partial_{t}\kappa(s)=\frac{3}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\frac{|X^{ \prime}(s^{\prime})|^{2}\cos 2\Phi(s^{\prime},s)}{|X(s^{\prime})-X(s)|^{2}} \left[\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}-\frac{1}{2}\kappa(s )\right]ds^{\prime}. \tag{4.2}\] Proof.: By definition, \[|X^{\prime}(s)|\partial_{t}\kappa(s)=\partial_{t}\mathrm{Im}\frac{X^{\prime \prime}(s)}{X^{\prime}(s)}-\kappa(s)\partial_{t}|X^{\prime}(s)|.\] We differentiate (2.10) to obtain that \[\partial_{t}\mathrm{Im}\frac{X^{\prime\prime}(s)}{X^{\prime}(s)}= \partial_{s}\mathrm{Im}\frac{\partial_{t}X^{\prime}(s)}{X^{\prime}(s)}\] \[=\frac{1}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left\{\mathrm{Re} \left[\frac{3X^{\prime}(s^{\prime})^{2}X^{\prime}(s)^{2}}{(X(s^{\prime})-X(s)) ^{4}}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\right.\] \[\qquad\qquad+\mathrm{Re}\left[\frac{X^{\prime}(s^{\prime})^{2}X^ {\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\cdot\frac{X^{\prime\prime}(s)}{X^{ \prime}(s)}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\] \[\qquad\qquad\left.-\mathrm{Re}\left[\frac{X^{\prime}(s^{\prime})^ {2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\mathrm{Im}\left[\frac{X(s ^{\prime})-X(s)}{X^{\prime}(s)}\cdot\frac{X^{\prime\prime}(s)}{X^{\prime}(s)} \right]-\frac{1}{2}\mathrm{Im}\left[\frac{X^{\prime}(s^{\prime})X^{\prime \prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right]\right\}ds^{\prime}.\] Here we used Lemma 2.1. 
Since \[\frac{X^{\prime\prime}(s)}{X^{\prime}(s)}=\mathrm{Re}\frac{X^{\prime\prime}(s) }{X^{\prime}(s)}+i\kappa(s)|X^{\prime}(s)|, \tag{4.3}\] we find that \[\partial_{t}\mathrm{Im}\frac{X^{\prime\prime}(s)}{X^{\prime}(s)}\] \[=\frac{1}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left\{\mathrm{Re} \left[\frac{3X^{\prime}(s^{\prime})^{2}X^{\prime}(s)^{2}}{(X(s^{\prime})-X(s) )^{4}}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\right.\] \[\qquad\qquad-\kappa(s)|X^{\prime}(s)|\mathrm{Im}\left[\frac{X^{ \prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\mathrm{ Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\] \[\qquad\qquad\qquad-\frac{1}{2}\kappa(s)|X^{\prime}(s)|\mathrm{ Re}\left[\frac{X^{\prime}(s^{\prime})X^{\prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right] \bigg{\}}\,ds^{\prime}.\] We used Lemma 2.1 again in the last term. Combining this with (2.7), we obtain that \[|X^{\prime}(s)|\partial_{t}\kappa(s)=\partial_{t}\mathrm{Im}\frac{X ^{\prime\prime}(s)}{X^{\prime}(s)}-\kappa(s)\partial_{t}|X^{\prime}(s)|\] \[=\frac{1}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left\{3\mathrm{Re} \left[J(s^{\prime},s)^{2}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^ {\prime}(s)}\right]\right.\] \[\qquad\qquad-2\kappa(s)|X^{\prime}(s)|\mathrm{Im}\left[\frac{X^{ \prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\mathrm{ Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\] \[\qquad\qquad-\kappa(s)|X^{\prime}(s)|\mathrm{Re}\left[\frac{X^{ \prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\mathrm{ Re}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\] \[\qquad\qquad-\frac{1}{2}\kappa(s)|X^{\prime}(s)|\mathrm{Re} \left[\frac{X^{\prime}(s^{\prime})^{2}}{(X(s^{\prime})-X(s))^{2}}\right] \bigg{\}}\,ds^{\prime}\] \[=\frac{1}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left\{3\mathrm{Re} \left[J(s^{\prime},s)^{2}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X ^{\prime}(s)}\right]\right.\] \[\qquad\qquad-\frac{3}{2}\kappa(s)|X^{\prime}(s)|\mathrm{Re} \left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3} }\right]\mathrm{Re}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right] \right\}ds^{\prime}\] \[=\frac{3}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left\{\mathrm{Re} \left[J(s^{\prime},s)^{2}\right]\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X ^{\prime}(s)}\right]\right.\] \[\qquad\qquad\left.-\frac{1}{2}\kappa(s)|X^{\prime}(s)|\mathrm{Re }\left[\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3 }}\cdot\frac{\overline{X(s^{\prime})-X(s)}}{\overline{X^{\prime}(s)}}\right] \right\}\,ds^{\prime}\] \[=\frac{3}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[J(s ^{\prime},s)^{2}\right]\left\{\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{ \prime}(s)}\right]-\frac{1}{2}\kappa(s)|X^{\prime}(s)|\frac{|X(s^{\prime})-X(s )|^{2}}{|X^{\prime}(s)|^{2}}\right\}ds^{\prime}\] \[=\frac{3}{2\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[J(s ^{\prime},s)^{2}\right]\frac{|X(s^{\prime})-X(s)|^{2}}{|X^{\prime}(s)|^{2}} \left\{\mathrm{Im}\,I(s,s^{\prime})-\frac{1}{2}\kappa(s)|X^{\prime}(s)| \right\}ds^{\prime}.\] Here we used \(\mathrm{Re}(AB)=\mathrm{Re}\,A\mathrm{Re}\,B-\mathrm{Im}\,A\mathrm{Im}\,B\) and \(\mathrm{Re}(A\bar{B})=\mathrm{Re}\,A\mathrm{Re}\,B+\mathrm{Im}\,A\mathrm{Im}\,B\) with \[A=\frac{X^{\prime}(s^{\prime})^{2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}, \quad B=\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)},\quad AB=\frac{X^{\prime}(s^{ \prime})^{2}}{(X(s^{\prime})-X(s))^{2}}.\] We also used the 
definitions of \(I(s,s^{\prime})\) and \(J(s,s^{\prime})\) (cf. Section 2.1), as well as \[\mathrm{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]=\mathrm{Im} \left[\frac{-1}{I(s,s^{\prime})}\right]=\frac{\mathrm{Im}\,I(s,s^{\prime})}{|I (s,s^{\prime})|^{2}}.\] This completes the proof of (4.2). We prove the following lemma to study the sign of the bracket on the right-hand side of (4.2). **Lemma 4.2**.: _Suppose \(\Phi_{*}\leq\pi/4\). Denote \(\kappa_{+}(t):=\max_{s\in\mathbb{T}}\kappa(s,t)\) and \(\kappa_{-}(t):=\min_{s\in\mathbb{T}}\kappa(s,t)\). Let \(d(s)\) be defined in Proposition 3.2. Then the following holds._ 1. _For all distinct_ \(s,s^{\prime}\in\mathbb{T}\)_,_ \(\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}\leq\frac{\kappa_{+}}{2\cos \Phi_{*}}\)_._ 2. _Let_ \(C_{0}:=\frac{2\cos(\Phi_{*}/2)}{\cos(3\Phi_{*}/2)}\)_. If_ \(\kappa_{+}\geq C_{0}d(s)^{-1}\)_, then_ \(\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}\leq\frac{\kappa_{+}}{2}\) _for all_ \(s^{\prime}\in\mathbb{T}\setminus\{s\}\)_._ _._ 3. _For all distinct_ \(s,s^{\prime}\in\mathbb{T}\)_,_ \(\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}\geq\frac{\kappa_{-}}{2}\)_._ Proof.: Fix an arbitrary \(s\in\mathbb{T}\). Denote \[Y(s^{\prime}):=\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}=\frac{X^{\prime}(s)/|X^{ \prime}(s)|}{X(s)-X(s^{\prime})}\quad\forall\,s^{\prime}\in\mathbb{T}\setminus \{s\}.\] We calculate that \[|Y(s^{\prime})|=\frac{1}{|X(s)-X(s^{\prime})|},\quad Y^{\prime}(s^{\prime})= \frac{J(s,s^{\prime})}{|X^{\prime}(s)|},\quad|Y^{\prime}(s^{\prime})|=\frac{|X ^{\prime}(s^{\prime})|}{|X(s)-X(s^{\prime})|^{2}}, \tag{4.4}\] so \(\lim_{s^{\prime}\to s}|Y(s^{\prime})|=+\infty\) and \(\arg Y^{\prime}(s^{\prime})=\Phi(s,s^{\prime})\in[-\Phi_{*},\Phi_{*}]\). Hence, there exists a \(C^{2}\)-function \(f:\mathbb{R}\to\mathbb{R}\), such that \(\|f^{\prime}\|_{L^{\infty}}\leq\tan\Phi_{*}\leq 1\), and \[\mathrm{Im}\,Y(s^{\prime})=f\big{(}\mathrm{Re}\,Y(s^{\prime})\big{)}\quad \forall\,s^{\prime}\in\mathbb{T}\setminus\{s\}.\] In fact, the function \(\mathrm{Re}\,Y\) maps \(\mathbb{T}\setminus\{s\}\) onto \(\mathbb{R}\), so \(Y(\mathbb{T}\setminus\{s\})\) is exactly the graph of \(f\). As a result, it suffices to study the upper and lower bounds for \(f\). With \(\eta=\mathrm{Re}\,Y(s^{\prime})\), we write \[X(s^{\prime})=X(s)-\frac{X^{\prime}(s)}{|X^{\prime}(s)|Y(s^{\prime})}=X(s)- \frac{X^{\prime}(s)}{|X^{\prime}(s)|(\eta+if(\eta))}=:g(\eta).\] Note that here \(s\) is treated as a given constant. Since the curvature formula (4.1) applies to any non-degenerate parameterization of the curve, we have \(\kappa(s^{\prime})=\mathrm{Im}[g^{\prime\prime}(\eta)/g^{\prime}(\eta)]/|g^{ \prime}(\eta)|\) under the change of variable \(\eta=\mathrm{Re}\,Y(s^{\prime})\). 
Then we calculate that \[g^{\prime}(\eta)=\frac{X^{\prime}(s)(1+if^{\prime}(\eta))}{|X^{\prime}(s)|(\eta+if(\eta))^{2}},\quad|g^{\prime}(\eta)|=\frac{|1+if^{\prime}(\eta)|}{\eta^{2}+f(\eta)^{2}},\] \[\frac{g^{\prime\prime}(\eta)}{g^{\prime}(\eta)}=\frac{if^{\prime\prime}(\eta)}{1+if^{\prime}(\eta)}-2\cdot\frac{1+if^{\prime}(\eta)}{\eta+if(\eta)},\] \[\mathrm{Im}\frac{g^{\prime\prime}(\eta)}{g^{\prime}(\eta)}=\frac{f^{\prime\prime}(\eta)}{1+|f^{\prime}(\eta)|^{2}}+2\cdot\frac{f(\eta)-\eta f^{\prime}(\eta)}{\eta^{2}+f(\eta)^{2}}.\] Hence, still with \(\eta=\mathrm{Re}\,Y(s^{\prime})\), we obtain that \[\kappa(s^{\prime})=\frac{1}{|g^{\prime}(\eta)|}\mathrm{Im}\frac{g^{\prime\prime}(\eta)}{g^{\prime}(\eta)}=\frac{(\eta^{2}+f(\eta)^{2})f^{\prime\prime}(\eta)}{(1+|f^{\prime}(\eta)|^{2})^{3/2}}+2\frac{f(\eta)-\eta f^{\prime}(\eta)}{(1+|f^{\prime}(\eta)|^{2})^{1/2}}\in[\kappa_{-},\kappa_{+}]. \tag{4.5}\] We then proceed in four steps. _Step 1_.: We first show that, for all \(s^{\prime}\in\mathbb{T}\setminus\{s\}\), \[|\mathrm{Im}\,Y(s^{\prime})|\leq\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})|+(d(s)\cos\Phi_{*})^{-1}. \tag{4.6}\] Assume that, at some \(s_{*}\in\mathbb{T}\setminus\{s\}\), \(|X(s)-X(s_{*})|=\sup_{s^{\prime}}|X(s)-X(s^{\prime})|=d(s)\). Then \[|Y(s_{*})|=\frac{1}{|X(s)-X(s_{*})|}=d(s)^{-1}.\] Since \(\arg Y^{\prime}(s^{\prime})\in[-\Phi_{*},\Phi_{*}]\), for all \(s^{\prime}\in\mathbb{T}\setminus\{s\}\), \[|\mathrm{Im}\,Y(s^{\prime})|\leq|\mathrm{Im}\,Y(s_{*})|+\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})-\mathrm{Re}\,Y(s_{*})|\] \[\leq\,\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})|+|\mathrm{Im}\,Y(s_{*})|+\tan\Phi_{*}|\mathrm{Re}\,Y(s_{*})|\] \[\leq\,\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})|+(d(s)\cos\Phi_{*})^{-1}. \tag{4.7}\] We applied the Cauchy-Schwarz inequality in the last line. _Step 2_.: We then show that \(\lim_{|\eta|\to\infty}f(\eta)=\frac{\kappa(s)}{2}\). By definition, \[\operatorname{Im}Y(s^{\prime})=\operatorname{Im}\frac{X^{\prime}(s)/|X^{\prime}(s)|}{X(s)-X(s^{\prime})}=\operatorname{Im}\frac{\overline{X^{\prime}(s)}(X(s^{\prime})-X(s))}{|X^{\prime}(s)||X(s)-X(s^{\prime})|^{2}}\] \[=\operatorname{Im}\frac{\overline{X^{\prime}(s)}(X(s^{\prime})-X(s)-X^{\prime}(s)(s^{\prime}-s))}{|X^{\prime}(s)||X(s)-X(s^{\prime})|^{2}}.\] Hence, \[\begin{split}\lim_{s^{\prime}\to s}\operatorname{Im}Y(s^{\prime})&=\lim_{s^{\prime}\to s}\operatorname{Im}\frac{\overline{X^{\prime}(s)}(X(s^{\prime})-X(s)-X^{\prime}(s)(s^{\prime}-s))/|s^{\prime}-s|^{2}}{|X^{\prime}(s)||X(s)-X(s^{\prime})|^{2}/|s^{\prime}-s|^{2}}\\ &=\operatorname{Im}\frac{\overline{X^{\prime}(s)}X^{\prime\prime}(s)/2}{|X^{\prime}(s)||X^{\prime}(s)|^{2}}=\frac{\kappa(s)}{2}.\end{split} \tag{4.8}\] We used (4.1) in the last equality. This proves the desired claim. _Step 3_.: It is clear that \(\kappa_{+}>0\), since \(\int_{\mathbb{T}}\kappa(s)|X^{\prime}(s)|\,ds=\int_{\mathbb{T}}\operatorname{Im}[\partial_{s}\ln X^{\prime}(s)]\,ds=2\pi\) by (4.1). If \(\sup_{x\in\mathbb{R}}f(x)\leq\frac{\kappa_{+}}{2}\), then \[\operatorname{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}=\operatorname{Im}Y(s^{\prime})=f(\operatorname{Re}Y(s^{\prime}))\leq\frac{\kappa_{+}}{2}\leq\frac{\kappa_{+}}{2\cos\Phi_{*}}, \tag{4.9}\] which gives (i) and (ii). Next we assume \(\sup_{x\in\mathbb{R}}f(x)>\frac{\kappa_{+}}{2}\). Thanks to the previous step, it must be attained at some \(x_{\dagger}\in\mathbb{R}\), where \[f(x_{\dagger})>\frac{\kappa_{+}}{2},\quad f^{\prime}(x_{\dagger})=0. \tag{4.10}\] Taking \(\eta=x_{\dagger}\) in (4.5), we find that \[f^{\prime\prime}(x_{\dagger})\leq\frac{\kappa_{+}-2f(x_{\dagger})}{x_{\dagger}^{2}+f(x_{\dagger})^{2}}<0.\] If \(f^{\prime\prime}(x)<0\) for all \(x\geq x_{\dagger}\), then \(f^{\prime}\) is strictly decreasing on \([x_{\dagger},+\infty)\), so \(f^{\prime}(x)\leq f^{\prime}(x_{\dagger}+1)<0\) for all \(x\geq x_{\dagger}+1\), and thus \(f(x)\leq f(x_{\dagger}+1)+f^{\prime}(x_{\dagger}+1)(x-x_{\dagger}-1)\) for \(x\geq x_{\dagger}+1\), which contradicts \(\lim_{|\eta|\to\infty}f(\eta)=\frac{\kappa(s)}{2}\). This implies that \(\{x\geq x_{\dagger}:\,f^{\prime\prime}(x)\geq 0\}\) must be nonempty. Let \(\tilde{x}:=\inf\{x\geq x_{\dagger}:\,f^{\prime\prime}(x)\geq 0\}\). Then \(x_{\dagger}<\tilde{x}<+\infty\), \(f^{\prime\prime}(\tilde{x})=0\), and \(f^{\prime\prime}(x)<0\) for \(x\in[x_{\dagger},\tilde{x})\). Taking \(\eta=\tilde{x}\) in (4.5) yields \[f(\tilde{x})-\tilde{x}f^{\prime}(\tilde{x})\leq\frac{\kappa_{+}}{2}(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}. \tag{4.11}\] Since \(f^{\prime\prime}(x)<0\) for \(x\in[x_{\dagger},\tilde{x})\), we have \(0=f^{\prime}(x_{\dagger})\geq f^{\prime}(x)>f^{\prime}(\tilde{x})\) for \(x\in[x_{\dagger},\tilde{x})\), and \[f(\tilde{x})-\tilde{x}f^{\prime}(\tilde{x})-(f(x_{\dagger})-x_{\dagger}f^{\prime}(\tilde{x}))=\int_{x_{\dagger}}^{\tilde{x}}[f^{\prime}(x)-f^{\prime}(\tilde{x})]\,dx>0.\] This together with (4.11) implies \[f(x_{\dagger})-x_{\dagger}f^{\prime}(\tilde{x})<f(\tilde{x})-\tilde{x}f^{\prime}(\tilde{x})\leq\frac{\kappa_{+}}{2}(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}.\] Now using \(f^{\prime}(\tilde{x})<0\) and (4.10), \[x_{\dagger}\leq\frac{(\kappa_{+}/2)(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}-f(x_{\dagger})}{-f^{\prime}(\tilde{x})}\leq\frac{\kappa_{+}}{2}\frac{(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}-1}{-f^{\prime}(\tilde{x})}=\frac{\kappa_{+}}{2}\frac{|f^{\prime}(\tilde{x})|}{(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}+1}.\] Since \(|f^{\prime}(\tilde{x})|\leq\|f^{\prime}\|_{L^{\infty}}\leq\tan\Phi_{*}\), \[x_{\dagger}\leq\frac{\kappa_{+}}{2}\frac{\tan\Phi_{*}}{(1+|\tan\Phi_{*}|^{2})^{1/2}+1}=\frac{\kappa_{+}}{2}\tan\left(\frac{\Phi_{*}}{2}\right). \tag{4.12}\] One can analogously prove \(x_{\dagger}\geq-\frac{\kappa_{+}}{2}\tan(\Phi_{*}/2)\) by studying the point \(\tilde{x}^{\prime}:=\sup\{x\leq x_{\dagger}:\,f^{\prime\prime}(x)\geq 0\}\). Therefore, \(|x_{\dagger}|\leq\frac{\kappa_{+}}{2}\tan(\Phi_{*}/2)\). If \(x_{\dagger}\geq 0\), since \(f^{\prime\prime}(x)<0\) for \(x\in[x_{\dagger},\tilde{x})\), we find that \[f(\tilde{x})-\tilde{x}f^{\prime}(\tilde{x})-(f(x_{\dagger})-x_{\dagger}f^{\prime}(x_{\dagger}))=\int_{x_{\dagger}}^{\tilde{x}}[-xf^{\prime\prime}(x)]\,dx>0. \tag{4.13}\] Then by \(f^{\prime}(x_{\dagger})=0\), (4.11) and \(|f^{\prime}(\tilde{x})|\leq\tan\Phi_{*}\), we have \[f(x_{\dagger})<f(\tilde{x})-\tilde{x}f^{\prime}(\tilde{x})\leq\frac{\kappa_{+}}{2}(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}\leq\frac{\kappa_{+}}{2}(1+|\tan\Phi_{*}|^{2})^{1/2}=\frac{\kappa_{+}}{2\cos\Phi_{*}}.\] If \(x_{\dagger}\leq 0\), we have \(f^{\prime\prime}(x)<0\) for \(x\in(\tilde{x}^{\prime},x_{\dagger}]\), where \(\tilde{x}^{\prime}\) is defined above, so (4.13) still holds with \(\tilde{x}\) there replaced by \(\tilde{x}^{\prime}\). Then arguing as above, we still obtain \(f(x_{\dagger})<\frac{\kappa_{+}}{2\cos\Phi_{*}}\). 
Hence, we have proved that \(\sup_{x\in\mathbb{R}}f(x)=f(x_{\dagger})\leq\frac{\kappa_{+}}{2\cos\Phi_{*}}\), provided \(\sup_{x\in\mathbb{R}}f(x)>\frac{\kappa_{+}}{2}\). This implies that \(\sup_{x\in\mathbb{R}}f(x)\leq\frac{\kappa_{+}}{2\cos\Phi_{*}}\) is always true. Proceeding as in (4.9), we obtain (i). Now assume that \(\kappa_{+}\geq\frac{2\cos(\Phi_{*}/2)}{\cos(3\Phi_{*}/2)}d(s)^{-1}\). If \(\sup_{x\in\mathbb{R}}f(x)>\frac{\kappa_{+}}{2}\), thanks to (4.6) and (4.12), \[f(x_{\dagger})\leq\tan\Phi_{*}|x_{\dagger}|+(d(s)\cos\Phi_{*})^{-1}\leq\tan\Phi_{*}\cdot\frac{\kappa_{+}}{2}\tan\left(\frac{\Phi_{*}}{2}\right)+\frac{\kappa_{+}\cos(3\Phi_{*}/2)}{2\cos(\Phi_{*}/2)\cos\Phi_{*}}=\frac{\kappa_{+}}{2},\] which contradicts (4.10). Hence, \(\sup_{x\in\mathbb{R}}f(x)\leq\frac{\kappa_{+}}{2}\), and thus \(\operatorname{Im}Y(s^{\prime})\leq\frac{\kappa_{+}}{2}\) for \(s^{\prime}\in\mathbb{T}\setminus\{s\}\). This proves (ii). _Step 4_.: The proof of (iii) follows a similar argument. We only sketch it. Suppose \(\inf_{x\in\mathbb{R}}f(x)<\frac{\kappa_{-}}{2}\). With a slight abuse of notation, the infimum must be attained at some \(x_{\dagger}\in\mathbb{R}\), where \[f(x_{\dagger})<\frac{\kappa_{-}}{2},\quad f^{\prime}(x_{\dagger})=0.\] We use (4.5) to show \(f^{\prime\prime}(x_{\dagger})>0\). As before, there exists \(\tilde{x}^{\prime}<x_{\dagger}<\tilde{x}\) such that \(f^{\prime\prime}>0\) on \((\tilde{x}^{\prime},\tilde{x})\) while \(f^{\prime\prime}(\tilde{x})=f^{\prime\prime}(\tilde{x}^{\prime})=0\). Following the argument in the previous step, we can show that \[\frac{\kappa_{-}}{2}\frac{|f^{\prime}(\tilde{x}^{\prime})|}{(1+|f^{\prime}(\tilde{x}^{\prime})|^{2})^{1/2}+1}\leq x_{\dagger}\leq-\frac{\kappa_{-}}{2}\frac{|f^{\prime}(\tilde{x})|}{(1+|f^{\prime}(\tilde{x})|^{2})^{1/2}+1},\] where \(f^{\prime}(\tilde{x}^{\prime})<0<f^{\prime}(\tilde{x})\). If \(\kappa_{-}>0\), this cannot hold, so we must have \(\inf_{x\in\mathbb{R}}f(x)\geq\frac{\kappa_{-}}{2}\). If \(\kappa_{-}\leq 0\), we obtain a naive bound \[|x_{\dagger}|\leq\frac{|\kappa_{-}|}{2}. \tag{4.14}\] Now we need an improved version of the estimate (4.7). Let \(\beta(s^{\prime}):=\arg[X(s^{\prime})-X(s)]\). Then \(\beta(s^{\prime})\in C(s,s+2\pi)\) and \(\beta(s^{+})=\arg X^{\prime}(s)\); in addition, \(\beta((s+2\pi)^{-})-\beta(s^{+})=\pi\) since \(X(\mathbb{T})\) is parameterized in the counter-clockwise direction and \(X\in C^{1}(\mathbb{T})\). Hence, there exists \(\tilde{s}\in(s,s+2\pi)\), such that \(\beta(\tilde{s})-\beta(s^{+})=\pi/2\). We thus find \[Y(\tilde{s})=\frac{X^{\prime}(s)/|X^{\prime}(s)|}{X(s)-X(\tilde{s})}=\frac{e^{i\beta(s^{+})}}{-|X(s)-X(\tilde{s})|e^{i\beta(\tilde{s})}}\] \[=\frac{1}{-|X(s)-X(\tilde{s})|e^{i\pi/2}}=\frac{i}{|X(s)-X(\tilde{s})|}=i|Y(\tilde{s})|.\] By the definition of \(d(s)\), we have \(|X(s)-X(\tilde{s})|\leq d(s)\), so \(|Y(\tilde{s})|\geq d(s)^{-1}\). Hence, for any \(s^{\prime}\in\mathbb{T}\setminus\{s\}\), \[\begin{split}\operatorname{Im}Y(s^{\prime})&\geq\operatorname{Im}Y(\tilde{s})-\tan\Phi_{*}\big{|}\mathrm{Re}\,Y(s^{\prime})-\mathrm{Re}\,Y(\tilde{s})\big{|}\\ &=\,-\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})|+|Y(\tilde{s})|\\ &\geq\,-\tan\Phi_{*}|\mathrm{Re}\,Y(s^{\prime})|+d(s)^{-1}.\end{split} \tag{4.15}\] Therefore, by (4.14) and the fact \(\Phi_{*}\leq\pi/4\), \[f(x_{\dagger})\geq-\tan\Phi_{*}|x_{\dagger}|+d(s)^{-1}\geq-\tan\Phi_{*}\cdot\frac{|\kappa_{-}|}{2}\geq\frac{\kappa_{-}}{2},\] which contradicts the assumption. This proves (iii).
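As a simple consistency check, consider a round circle \(X(s)=x_{0}+Re^{is}\), for which \(\Phi\equiv 0\) and \(\kappa\equiv R^{-1}\). Writing \(\theta=s^{\prime}-s\) and using \(1-e^{i\theta}=-2i\sin(\theta/2)e^{i\theta/2}\), a direct computation gives \[\operatorname{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}=\operatorname{Im}\frac{ie^{is}}{R(e^{is}-e^{is^{\prime}})}=\operatorname{Im}\frac{-e^{-i\theta/2}}{2R\sin(\theta/2)}=\frac{1}{2R},\] so the bounds in (i) and (iii) of Lemma 4.2 are attained with equality, consistent with \(\kappa_{+}=\kappa_{-}=R^{-1}\) in this case.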
Now we can establish an extremum principle for \(\kappa(s,t)\), as well as its upper and lower bounds. **Proposition 4.1**.: _Suppose \(\Phi_{*}(0)<\frac{\pi}{4}\), where \(\Phi_{*}(t)\) is defined in Proposition 3.1. Denote \(\kappa_{+}(t):=\max_{s\in\mathbb{T}}\kappa(s,t)\) and \(\kappa_{-}(t):=\min_{s\in\mathbb{T}}\kappa(s,t)\) as in Lemma 4.2. Let \(\kappa_{*}(t):=\sup_{s\in\mathbb{T}}|\kappa(s,t)|\). Then_ 1. \(\max\{\kappa_{+}(t)R_{X},7+5\sqrt{2}\}\) _is a non-increasing Lipschitz function. For_ \(t>0\)_,_ \[\kappa_{+}(t)R_{X}\leq 1+C\exp\left[C\left(\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d \tau\right)^{-1}-ct\right],\] _where_ \(C,c>0\) _are universal constants._ 2. \(\kappa_{-}(t)R_{X}\) _is a non-decreasing Lipschitz function. For_ \(t>0\)_,_ \[\kappa_{-}(t)R_{X}\geq 1-C\exp\left[C\left(\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d \tau\right)^{-1}-ct\right],\] _where_ \(C,c>0\) _are universal constants._ 3. \(\max\{\kappa_{*}(t)R_{X},7+5\sqrt{2}\}\) _is a non-increasing Lipschitz function. In addition,_ \(\kappa_{*}(t)R_{X}\geq 1\) _for all_ \(t>0\)_._ _Remark 4.1_.: In particular, Proposition 3.1 implies that the above upper and lower bounds are finite for any \(t>0\), and they both converge to \(1\) exponentially as \(t\to+\infty\). Proof.: We shall still use the notations in Lemma 4.2 as well as its proof. _Step \(1\)._ By Proposition 3.1, \(\Phi_{*}(t)=\sup|\Phi(s_{1},s_{2},t)|<\frac{\pi}{4}\) for all \(t\). Take an arbitrary \(t\), and we first study \(\kappa_{+}(t)\). Assume that \(\kappa_{+}(t)\) is attained at some \(s\in\mathbb{T}\), i.e., \(\kappa(s,t)=\kappa_{+}(t)\). Without loss of generality, we additionally assume \[\kappa(s)>\frac{\cos(\Phi_{*}/2)}{\cos(3\Phi_{*}/2)}\tan^{2}\left(\frac{\pi}{4 }+\frac{\Phi_{*}}{2}\right)R_{X}^{-1}\geq R_{X}^{-1}>0. \tag{4.16}\] Then Proposition 3.2 implies \[\kappa(s)>C_{0}\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right)d(s)^{-1}\geq 2 \tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right)d(s)^{-1}\geq 2d(s)^{-1}, \tag{4.17}\] where \(C_{0}=\frac{2\cos(\Phi_{*}/2)}{\cos(3\Phi_{*}/2)}\geq 2\) was defined in Lemma 4.2. By Lemma 4.2, \(\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}\leq\frac{\kappa(s)}{2}\) for all \(s^{\prime}\in\mathbb{T}\setminus\{s\}\). We also assume that \(d(s)=|X(s)-X(s_{*})|\) for some \(s_{*}\in\mathbb{T}\). Define \[A:=\left\{s^{\prime}\in\mathbb{T}:\,\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{ \prime}(s)|}\leq\left(\frac{\kappa(s)}{2d(s)}\right)^{1/2}\right\}.\] If \(|X(s)-X(s^{\prime})|\geq[(2d(s))/\kappa(s)]^{1/2}\), then \[\mathrm{Im}\frac{I(s,s^{\prime})}{|X^{\prime}(s)|}\leq\frac{1}{|X(s)-X(s^{ \prime})|}\leq\left(\frac{\kappa(s)}{2d(s)}\right)^{1/2},\] so \(s^{\prime}\in A\). In particular, \(s_{*}\in A\) since \(d(s)=|X(s)-X(s_{*})|>2\kappa(s)^{-1}\), which means \(A\) is non-empty. Then by Lemma 4.1 and Lemma 4.2, \[\partial_{t}\kappa(s)\leq-\frac{3\kappa(s)}{4\pi}\cos 2\Phi_{*}\left[1-\left( \frac{2}{d(s)\kappa(s)}\right)^{1/2}\right]\int_{A}\frac{|X^{\prime}(s^{\prime })|^{2}}{|X(s^{\prime})-X(s)|^{2}}\,ds^{\prime}.\] By the Cauchy-Schwarz inequality, \[\int_{A}\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s)|^{2}}\,ds^{ \prime}\geq\,\left(\int_{A}\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s) |}\,ds^{\prime}\right)^{2}\left(\int_{A}\,ds^{\prime}\right)^{-1}\geq\frac{1 }{2\pi}\left(\int_{X(A)}\frac{d\mathcal{H}^{1}(z)}{|z-X(s)|}\right)^{2},\] where \(d\mathcal{H}^{1}(z)\) denotes the \(1\)-dimensional Hausdorff measure. 
The definition of \(A\) and \(d(s)\) allows us to derive a naive bound by the co-area formula \[\int_{X(A)}\frac{d\mathcal{H}^{1}(z)}{|z-X(s)|}\geq\int_{(2d(s)/\kappa(s))^{1 /2}}^{d(s)}\frac{1}{r}\,dr=\frac{1}{2}\ln\big{(}d(s)\kappa(s)/2\big{)}.\] If \(d(s)\kappa(s)\leq 3\), we need an improved estimate. Using (4.4) and the notations in Lemma 4.2, \[\int_{A}\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|}\,ds^{\prime}= \int_{A}\frac{|Y^{\prime}(s^{\prime})|}{|Y(s^{\prime})|}\,ds^{\prime}\geq\int _{\{f(x)\leq[\kappa(s)/(2d(s))]^{1/2}\}}\frac{dx}{(x^{2}+f(x)^{2})^{1/2}}. \tag{4.18}\] With \(x_{*}:=\mathrm{Re}\,Y(s_{*})\), \[x_{*}^{2}+f(x_{*})^{2}=|Y(s_{*})|^{2}=\frac{1}{|X(s)-X(s_{*})|^{2}}=d(s)^{-2},\] so \(f(x_{*})\leq d(s)^{-1}<[\kappa(s)/(2d(s))]^{1/2}\). Since \(|f^{\prime}|\leq\tan\Phi_{*}\), \(f(x)\leq[\kappa(s)/(2d(s))]^{1/2}\) for all \(x\) such that \[|x-x_{*}|\leq r:=\min\left\{\frac{[\kappa(s)/(2d(s))]^{1/2}-d(s)^{-1}}{\tan \Phi_{*}},\,d(s)^{-1}\right\}.\] On the other hand, (4.17) implies \(\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right)\leq d(s)\kappa(s)/2\), so \(\tan\Phi_{*}\leq C(d(s)\kappa(s)-2)\) for some universal \(C\). Combining these estimates, we find that \(r\in[cd(s)^{-1},d(s)^{-1}]\) for some universal \(c\in(0,1)\). Hence, on \([x_{*}-r,x_{*}+r]\), \(x^{2}+f(x)^{2}\leq Cd(s)^{-2}\) for some universal \(C\). Then (4.18) gives that \[\int_{A}\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|}\,ds^{\prime}\geq \int_{|x-x_{*}|\leq r}\frac{dx}{(x^{2}+f(x)^{2})^{1/2}}\geq C,\] where \(C\) is a universal constant. As a result, in all cases, \[\int_{A}\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s)|^{2}}\,ds^{ \prime}\geq C\left[1+\ln\left(d(s)\kappa(s)/2\right)\right]^{2}.\] Therefore, at the point \(s\), \[\partial_{t}\kappa(s)\leq-C\kappa(s)\cos 2\Phi_{*}\left[1-\left(\frac{2}{d(s) \kappa(s)}\right)^{1/2}\right]\left[1+\ln\left(d(s)\kappa(s)/2\right)\right]^ {2}.\] By Proposition 3.2 and (4.16), \[\partial_{t}\kappa(s,t)\leq-C\kappa(s)\cos 2\Phi_{*}\left[1-\tan\left(\frac{ \pi}{4}+\frac{\Phi_{*}}{2}\right)^{1/2}(\kappa(s)R_{X})^{-1/2}\right]\left[1+ \ln(\kappa(s)R_{X})\right]^{2},\] where \(C>0\) is a universal constant. Since \(\kappa(s,t)\) is \(C^{1}\) in the space-time, \(\kappa_{+}(t)\) is a Lipschitz function. Arguing as in Proposition 3.1, we can show that, if \[\kappa_{+}(t)R_{X}>\frac{\cos(\Phi_{*}/2)}{\cos(3\Phi_{*}/2)}\tan^{2}\left( \frac{\pi}{4}+\frac{\Phi_{*}}{2}\right), \tag{4.19}\] it holds \[\begin{split}\frac{d}{dt}[\kappa_{+}(t)R_{X}]\leq& -C\kappa_{+}R_{X}\cos 2\Phi_{*}\big{[}1+\ln(\kappa_{+}R_{X})\big{]}^{2} \\ &\cdot\left[1-\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{2}\right)^{ 1/2}(\kappa_{+}R_{X})^{-1/2}\right],\end{split} \tag{4.20}\] for almost every \(t\). Denote \[h(\phi):=\frac{\cos(\phi/2)}{\cos(3\phi/2)}\tan^{2}\left(\frac{\pi}{4}+\frac{ \phi}{2}\right).\] Since \(\Phi_{*}(t)\leq\pi/4\) for all time, if \(\kappa_{+}(t)R_{X}>h(\pi/4)\), (4.19) holds and (4.20) reduces to \[\frac{d}{dt}[\kappa_{+}(t)R_{X}]\leq-C\cos 2\Phi_{*}\cdot\kappa_{+}R_{X}\big{[} \ln(\kappa_{+}R_{X})\big{]}^{2},\] which gives \[\kappa_{+}(t)R_{X}\leq\max\left\{\exp\left[C\left(\int_{0}^{t}\cos 2\Phi_{*}( \tau)\,d\tau\right)^{-1}\right],\,h\left(\frac{\pi}{4}\right)\right\}\] for all time. Here \(C>0\) is a universal constant. In view of Proposition 3.1, there exists \(t_{0}>0\), such that \[\exp\left[C\left(\int_{0}^{t_{0}}\cos 2\Phi_{*}(\tau)\,d\tau\right)^{-1} \right]=h\left(\frac{\pi}{4}\right).\] Here \(t_{0}\) has universal upper and lower bounds. 
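For later reference, the constant \(h(\pi/4)\) can be evaluated explicitly: since \(\cos(3\pi/8)=\sin(\pi/8)\) and \(\tan(3\pi/8)=1+\sqrt{2}\), \[h\left(\frac{\pi}{4}\right)=\frac{\cos(\pi/8)}{\cos(3\pi/8)}\tan^{2}\left(\frac{\pi}{4}+\frac{\pi}{8}\right)=\tan^{3}\left(\frac{3\pi}{8}\right)=(1+\sqrt{2})^{3}=7+5\sqrt{2},\] which is the constant appearing in the statement of Proposition 4.1.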
Whenever \(t\geq t_{0}\), \(\kappa_{+}(t)R_{X}\leq h(\pi/4)\), and \(\cos 2\Phi_{*}\) admits a universal positive lower bound. Hence, for \(t\geq t_{0}\), if (4.19) holds, (4.20) implies \[\frac{d}{dt}\big{[}\kappa_{+}(t)R_{X}\big{]} \leq -C\left[\kappa_{+}R_{X}-\tan\left(\frac{\pi}{4}+\frac{\Phi_{*}}{ 2}\right)^{1/2}\left(\kappa_{+}R_{X}\right)^{1/2}\right]\] \[\leq -C\left[\kappa_{+}(t)R_{X}-\tan\left(\frac{\pi}{4}+\frac{\Phi_{* }(t)}{2}\right)\right].\] By virtue of Proposition 3.1, as \(t\to+\infty\), \(h(\Phi_{*})\) and \(\tan(\frac{\pi}{4}+\frac{\Phi_{*}}{2})\) converge to \(1\) exponentially with some explicit rates. Assuming the constant \(C>0\) in the above inequality to be smaller if necessary, we obtain that for some \(c>0\), \[\kappa_{+}(t)R_{X}\leq 1+e^{-c(t-t_{0})}h\left(\frac{\pi}{4}\right),\quad\forall \,t\geq t_{0}.\] Note that with \(c>0\) being small, the right-hand side of the above estimate can be made greater than the right-hand side of (4.19) for all \(t\geq t_{0}\). Combining all these estimates, the desired upper bound for \(\kappa_{+}(t)R_{X}\) follows. Lastly, it is clear from the above proof that \(\max\{\kappa_{+}(t)R_{X},h(\pi/4)\}\) is a non-increasing Lipschitz function in \(t\), with \(h(\pi/4)=7+5\sqrt{2}\). _Step 2_. Next we study \(\kappa_{-}(t)\). Assume that \(\kappa_{-}(t)\) is achieved at some \(\tilde{s}\in\mathbb{T}\). By Lemma 4.2, \(\operatorname{Im}\frac{I(\tilde{s},s^{\prime})}{|X^{\prime}(\tilde{s})|}\geq \frac{\kappa(\tilde{s})}{2}\) for all \(s^{\prime}\in\mathbb{T}\setminus\{\tilde{s}\}\). By Lemma 4.1, \(\partial_{t}\kappa(\tilde{s})\geq 0\). Arguing as in Proposition 3.1, we know that \(\kappa_{-}(t)\) is a non-decreasing Lipschitz function. It remains to prove the lower bound for \(\kappa_{-}(t)\). First we consider the case \(\kappa(\tilde{s})<-4R_{X}^{-1}<0\). Given this, by Proposition 3.2, \[d(\tilde{s})\geq 2R_{X}\tan\left(\frac{\pi}{4}-\frac{\Phi_{*}}{2}\right) \geq(2\sqrt{2}-2)R_{X}, \tag{4.21}\] so \(d(\tilde{s})|\kappa(\tilde{s})|\geq 8(\sqrt{2}-1)>3\). Let \[\tilde{A}:=\left\{s^{\prime}\in\mathbb{T}:\,\operatorname{Im}\frac{I(\tilde{ s},s^{\prime})}{|X^{\prime}(\tilde{s})|}\geq-\left(\frac{|\kappa(\tilde{s})|}{2d( \tilde{s})}\right)^{1/2}\right\}.\] If \(|X(\tilde{s})-X(s^{\prime})|\geq[(2d(\tilde{s}))/|\kappa(\tilde{s})|]^{1/2}\), then \(s^{\prime}\in\tilde{A}\) because \[\operatorname{Im}\frac{I(\tilde{s},s^{\prime})}{|X^{\prime}(\tilde{s})|}\geq- \frac{1}{|X(\tilde{s})-X(s^{\prime})|}\geq-\left(\frac{|\kappa(\tilde{s})|}{2d (\tilde{s})}\right)^{1/2}.\] Using Lemma 4.1 and Lemma 4.2, we derive as before to see that \[\partial_{t}\kappa(\tilde{s}) \geq -\frac{3\kappa(\tilde{s})}{4\pi}\cos 2\Phi_{*}\left[1-\left( \frac{2}{d(\tilde{s})|\kappa(\tilde{s})|}\right)^{1/2}\right]\int_{\tilde{A}} \frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(\tilde{s})|^{2}}\,ds^{\prime}\] \[\geq C|\kappa(\tilde{s})|\cos 2\Phi_{*}\left[\ln\left(d(\tilde{s})| \kappa(\tilde{s})|/2\right)\right]^{2}\] \[\geq C|\kappa(\tilde{s})|\cos 2\Phi_{*}\left[\ln\left(|\kappa( \tilde{s})|R_{X}\right)\right]^{2}.\] In the last line, we used (4.21) and the fact \(|\kappa(\tilde{s})|R_{X}>4\). Since \(\kappa(s,t)\) is \(C^{1}\) in the space-time, \(\kappa_{-}(t)\) is a Lipschitz function. Hence, if \(\kappa_{-}(t)R_{X}<-4\), it holds for almost every \(t\) that \[\frac{d}{dt}[\kappa_{-}(t)R_{X}]\geq-C\cos 2\Phi_{*}\cdot\kappa_{-}(t)R_{X} \left[\ln\big{|}\kappa_{-}(t)R_{X}\big{|}\right]^{2},\] where \(C>0\) is a universal constant. 
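To integrate this differential inequality, set \(y(t):=-\kappa_{-}(t)R_{X}\). Since \(\kappa_{-}(t)R_{X}\) is non-decreasing, \(y\) is non-increasing, so if \(y(t)>4\) then \(y>4\) on all of \((0,t]\), where the inequality above reads \(y^{\prime}\leq-C\cos 2\Phi_{*}\,y(\ln y)^{2}\). Hence \[\frac{d}{d\tau}\frac{1}{\ln y(\tau)}=\frac{-y^{\prime}(\tau)}{y(\tau)(\ln y(\tau))^{2}}\geq C\cos 2\Phi_{*}(\tau)\quad\text{for a.e. }\tau\in(0,t],\] and integrating in \(\tau\) over \((s,t]\) and letting \(s\to 0^{+}\) (the term \(1/\ln y(s)\) being non-negative) yields \(\ln y(t)\leq\left[C\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right]^{-1}\).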
This gives \[\kappa_{-}(t)R_{X}\geq\min\left\{-\exp\left[C\left(\int_{0}^{t}\cos 2\Phi_{*}( \tau)\,d\tau\right)^{-1}\right],\,-4\right\} \tag{4.22}\] for all \(t>0\). Next we improve the lower bound (4.22) for large \(t\). Arguing as in (4.15), we find that \[\operatorname{Im}\frac{I(\tilde{s},s^{\prime})}{|X^{\prime}(\tilde {s})|} \geq\,-\tan\Phi_{*}\left|\operatorname{Re}\frac{I(\tilde{s},s^{ \prime})}{|X^{\prime}(\tilde{s})|}\right|+d(\tilde{s})^{-1}\] \[\geq\,-\tan\Phi_{*}|X(\tilde{s})-X(s^{\prime})|^{-1}+d(\tilde{s} )^{-1}.\] If \(|X(\tilde{s})-X(s^{\prime})|\geq d(\tilde{s})/2\), \[\operatorname{Im}\frac{I(\tilde{s},s^{\prime})}{|X^{\prime}(\tilde{s})|}\geq d (\tilde{s})^{-1}\big{(}1-2\tan\Phi_{*}\big{)}.\] Using Lemma 4.1 and Lemma 4.2 and deriving as before, we find \[\partial_{t}\kappa(\tilde{s}) \geq\frac{3}{2\pi}\cos 2\Phi_{*}\int_{\{|X(\tilde{s})-X(s^{ \prime})|\geq d(\tilde{s})/2\}}\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{ \prime})-X(\tilde{s})|^{2}}\left[\operatorname{Im}\frac{I(\tilde{s},s^{\prime })}{|X^{\prime}(\tilde{s})|}-\frac{1}{2}\kappa(\tilde{s})\right]_{+}ds^{\prime}\] \[\geq\frac{3}{2\pi}\cos 2\Phi_{*}\left[d(\tilde{s})^{-1}\big{(}1-2 \tan\Phi_{*}\big{)}-\frac{1}{2}\kappa(\tilde{s})\right]_{+}\int_{\{|X(\tilde{ s})-X(s^{\prime})|\geq d(\tilde{s})/2\}}\frac{|X^{\prime}(s^{\prime})|^{2}}{|X(s^{ \prime})-X(\tilde{s})|^{2}}\,ds^{\prime}\] \[\geq C\cos 2\Phi_{*}\left[d(\tilde{s})^{-1}\big{(}1-2\tan\Phi_{*} \big{)}-\frac{1}{2}\kappa(\tilde{s})\right]_{+}.\] Here the notation \((\cdot)_{+}\) means taking the positive part, i.e., \(a_{+}=\max\{a,0\}\) for any \(a\in\mathbb{R}\). By virtue of Proposition 3.1, with abuse of the notation, there exists a universal \(t_{0}>0\), such that for all \(t\geq t_{0}\), it holds \(1-2\tan\Phi_{*}>0\) and \(\cos 2\Phi_{*}\geq C\) with \(C>0\) being universal. We will assume \(t\geq t_{0}\) for simplicity. Using \(d(\tilde{s})\leq d_{*}\) and the upper bound for \(d_{*}\) in Proposition 3.2, \[\partial_{t}\kappa(\tilde{s})\geq C\left[R_{X}^{-1}\tan\left(\frac{\pi}{4}- \frac{\Phi_{*}}{2}\right)\big{(}1-2\tan\Phi_{*}\big{)}-\kappa(\tilde{s}) \right]_{+}.\] Therefore, for almost all \(t\geq t_{0}\), \[\frac{d}{dt}[\kappa_{-}(t)R_{X}]\geq C\left[\tilde{h}\big{(}\Phi_{*}(t)\big{)} -\kappa_{-}(t)R_{X}\right]_{+},\] where \[\tilde{h}(\phi):=\tan\left(\frac{\pi}{4}-\frac{\phi}{2}\right)\big{(}1-2\tan \phi\big{)}.\] Proposition 3.1 implies that, as \(t\to+\infty\), \(\tilde{h}(\Phi_{*}(t))\) converges to \(1\) exponentially with some explicit rate. Moreover, (4.22) and Proposition 3.1 imply that \(\kappa_{-}(t_{0})R_{X}\geq-C\) for some universal \(C>0\). Therefore, for \(t\geq t_{0}\), \[\kappa_{-}(t)R_{X}\geq 1-Ce^{-c(t-t_{0})},\] where \(C,c>0\) are universal. This combined with (4.22) yields the desired lower bound. _Step \(3\)._ Lastly, that \(\kappa_{*}(t)R_{X}\geq 1\) follows from [27, 28]. The monotonicity and Lipschitz regularity of \(\max\{\kappa_{*}(t)R_{X},7+5\sqrt{2}\}\) follows from that of \(\max\{\kappa_{+}(t)R_{X},7+5\sqrt{2}\}\) and \(\kappa_{-}(t)R_{X}\). We conclude this section by briefly remarking on the case of general elasticity. We follow the setup in Section 3.2. _Remark 4.2_.: Assume that \(X\) solves (3.4) in Section 3.2 with \(X(0,t)=X_{0}(s)\), and satisfies the assumptions (A1)-(A3) in Section 2.1. Then the claims in Proposition 4.1 other than the quantitative bounds should still hold, i.e., when \(\Phi_{*}(0)<\frac{\pi}{4}\), \(\kappa(s,t)\) satisfies extreme principles as in Proposition 4.1. 
The justification is similar to that in Section 3.2, which we will only sketch. Fix \(t\), and let \(k_{0}\), \(\xi=\xi(s)\) and \(Y(\xi,t)\) be defined as in Section 3.2. By (3.6), \[\partial_{t}\mathrm{Im}\frac{X^{\prime\prime}(s,t)}{X^{\prime}(s,t)} =\partial_{s}\left[\mathrm{Im}\frac{\partial_{t}X^{\prime}(s,t)}{ X^{\prime}(s,t)}\right]=k_{0}\xi^{\prime}(s)\cdot\left.\partial_{\xi}\left[ \mathrm{Im}\frac{\partial_{\tau}Y^{\prime}(\xi,\tau)}{Y^{\prime}(\xi,\tau)} \right]\right|_{(\xi,\tau)=(\xi(s),t)}\] \[=k_{0}\xi^{\prime}(s)\cdot\left.\partial_{\tau}\left[\mathrm{Im} \frac{\partial_{\xi}Y^{\prime}(\xi,\tau)}{Y^{\prime}(\xi,\tau)}\right]\right| _{(\xi,\tau)=(\xi(s),t)}.\] By (4.1), this implies \[\begin{split}&\quad|X^{\prime}(s,t)|\partial_{t}\kappa(s,t)+ \kappa(s,t)\partial_{t}|X^{\prime}(s,t)|\\ &=k_{0}\xi^{\prime}(s)\big{(}|Y^{\prime}(\xi,\tau)|\partial_{ \tau}\kappa_{Y}(\xi,\tau)+\kappa_{Y}(\xi,\tau)\partial_{\tau}|Y^{\prime}(\xi, \tau)|\big{)}\big{|}_{(\xi,\tau)=(\xi(s),t)},\end{split} \tag{4.23}\] where \(\kappa_{Y}=\kappa_{Y}(\xi,\tau)\) is the curvature defined in terms of \(Y(\xi,\tau)\) by (4.1). Note that (4.1) holds for any non-degenerate parameterization of the curve. Using the fact that \(\xi^{\prime}(s)>0\) is real-valued, \[\kappa(s) =\frac{\mathrm{Im}[X^{\prime\prime}(s)/X^{\prime}(s)]}{|X^{ \prime}(s)|}=\frac{1}{|Y^{\prime}(\xi(s))|\xi^{\prime}(s)}\mathrm{Im}\left[ \frac{\partial_{s}(Y^{\prime}(\xi(s))\xi^{\prime}(s))}{Y^{\prime}(\xi(s))\xi^ {\prime}(s)}\right]\] \[=\frac{1}{|Y^{\prime}(\xi(s))|\xi^{\prime}(s)}\mathrm{Im}\left[ \frac{Y^{\prime\prime}(\xi(s))\xi^{\prime}(s)^{2}+Y^{\prime}(\xi(s))\xi^{ \prime\prime}(s)}{Y^{\prime}(\xi(s))\xi^{\prime}(s)}\right]\] \[=\frac{1}{|Y^{\prime}(\xi(s))|}\mathrm{Im}\left[\frac{Y^{\prime \prime}(\xi(s))}{Y^{\prime}(\xi(s))}\right]=\kappa_{Y}(\xi(s)).\] This should be expected since the curvature does not depend on the parameterization of the curve. Moreover, by (3.5), \[\partial_{t}|X^{\prime}(s,t)| =\frac{\mathrm{Re}[\overline{X^{\prime}(s,t)}\partial_{t}X^{ \prime}(s,t)]}{|X^{\prime}(s,t)|}\] \[=\frac{\mathrm{Re}[\overline{Y^{\prime}(\xi(s),t)}\xi^{\prime}(s) \cdot k_{0}\partial_{\tau}Y^{\prime}(\xi(s),t)\xi^{\prime}(s)]}{|Y^{\prime}( \xi(s),t)|\xi^{\prime}(s)}=k_{0}\xi^{\prime}(s)\partial_{\tau}|Y^{\prime}(\xi(s ),t)|.\] Plugging them into (4.23) and using the identity \(X^{\prime}(s,t)=Y^{\prime}(\xi(s),t)\xi^{\prime}(s)\neq 0\), we obtain that \[\partial_{t}\kappa(s,t)=k_{0}\partial_{\tau}\kappa_{Y}(\xi(s),t).\] Then we can justify the desired assertion as in the proof of Proposition 4.1. ## 5. Estimates for \(|X^{\prime}|\) In this section we study \(|X^{\prime}|\), which encodes the stretching information of the elastic string. Recall that \(|X^{\prime}(s,t)|\) solves (2.8), and by the assumptions (A1)-(A3), it is \(C^{1}\) in the space-time. ### \(L^{1}\)-, \(L^{2}\)-, and \(L^{\infty}\)-estimates Recall that \(\mathcal{L}\) and \(\mathcal{E}\) be defined in (2.4) and (2.5), respectively. 
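For the reader's convenience, \(\mathcal{L}(t)\) is the total length and \(\mathcal{E}(t)\) the elastic energy of the string, \[\mathcal{L}(t)=\int_{\mathbb{T}}|X^{\prime}(s,t)|\,ds,\qquad\mathcal{E}(t)=\frac{1}{2}\int_{\mathbb{T}}|X^{\prime}(s,t)|^{2}\,ds;\] this is the form in which the two quantities enter the computations below.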
**Lemma 5.1**.: \(X\) _satisfies the length estimate_ \[\frac{d\mathcal{L}(t)}{dt}=-\frac{1}{4\pi}\int_{\mathbb{T}}\int_{\mathbb{T}} \frac{|X^{\prime}(s)||X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s)|^{2}} \left[\cos\Phi(s^{\prime},s)-\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\,ds,\] _and the energy estimate_ \[\begin{split}&\frac{d\mathcal{E}(t)}{dt}\\ =&-\frac{1}{16\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{ \prime})|\left(|X^{\prime}(s)|-|X^{\prime}(s^{\prime})|\right)^{2}\left[\cos \Phi(s^{\prime},s)+\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\,ds\\ &-\frac{1}{16\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{ \prime})|\big{(}|X^{\prime}(s)|+|X^{\prime}(s^{\prime})|\big{)}^{2}\left[\cos \Phi(s^{\prime},s)-\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\,ds\\ =&-\frac{1}{8\pi}\int_{\mathbb{T}}\int_{\mathbb{T}} \frac{|X^{\prime}(s)||X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|^{2}}\left( |X^{\prime}(s)|-|X^{\prime}(s^{\prime})|\right)^{2}\cos\Phi(s^{\prime},s)\,ds ^{\prime}\,ds\\ &-\frac{1}{4\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{|X^{ \prime}(s)|^{2}|X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s)|^{2}}\left[ \cos\Phi(s^{\prime},s)-\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\,ds.\end{split} \tag{5.1}\] _As a result, if \(\Phi_{*}(0)<\pi/4\), both \(\mathcal{L}(t)\) and \(\mathcal{E}(t)\) are non-increasing in time._ Proof.: Integrating (2.8) yields \[\begin{split}&\frac{d}{dt}\int_{\mathbb{T}}|X^{\prime}(s)|\,ds \\ =&\frac{1}{4\pi}\int_{\mathbb{T}}\mathrm{p.v.}\int_{ \mathbb{T}}\frac{|X^{\prime}(s)||X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)| ^{2}}\left[|X^{\prime}(s^{\prime})|\cos 2\Phi(s^{\prime},s)-|X^{\prime}(s)| \cos\Phi(s^{\prime},s)\right]ds^{\prime}\,ds\\ =&-\frac{1}{4\pi}\int_{\mathbb{T}}\int_{\mathbb{T}} \frac{|X^{\prime}(s)||X^{\prime}(s^{\prime})|^{2}}{|X(s^{\prime})-X(s)|^{2}} \left[\cos\Phi(s^{\prime},s)-\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\,ds. \end{split}\] In the last line, we exchanged the \(s\)- and \(s^{\prime}\)-variables. (5.1) can be derived analogously by multiplying (2.8) by \(|X^{\prime}(s)|\) and taking integral on \(\mathbb{T}\). **Lemma 5.2**.: _If \(\Phi_{*}(0)<\pi/4\), \(|X^{\prime}(s,t)|\) satisfies a maximum principle, i.e., \(\max_{s}|X^{\prime}(s,t)|\) is non-increasing._ Proof.: Take an arbitrary \(t\). Suppose \(\max_{s}|X^{\prime}(s,t)|\) is attained at some \(s\in\mathbb{T}\). Since \(|\Phi_{*}(t)|<\frac{\pi}{4}\) by Proposition 3.1, \(\cos\Phi(s,s^{\prime})\geq\cos 2\Phi(s,s^{\prime})\geq 0\), so \(|X^{\prime}(s^{\prime})|\cos 2\Phi(s^{\prime},s)\leq|X^{\prime}(s)|\cos\Phi(s^{ \prime},s)\). By (2.8), \(\partial_{t}|X^{\prime}(s)|\leq 0\). Following the argument in Proposition 3.1, we conclude that \(|X^{\prime}|\) enjoys the maximum principle. ### The lower bound and the well-stretched condition To derive a lower bound for \(|X^{\prime}|\), we first prove a few auxiliary lemmas. 
**Lemma 5.3**.: _For any \(\delta\in[0,\mathcal{L}(t)/2]\) and continuous \(f:[0,+\infty)\to\mathbb{R}\), it holds_ \[\int_{\{L(s,s^{\prime})\geq\delta\}}f\big{(}L(s,s^{\prime})\big{)}|X^{\prime}(s ^{\prime})|\,ds^{\prime}=2\int_{\delta}^{\mathcal{L}(t)/2}f(x)\,dx.\] Proof.: By the definition of \(\mathcal{L}(t)\), there exists \(s_{*}\in(s,s+2\pi)\) such that \[\int_{s}^{s_{*}}|X^{\prime}(s^{\prime})|\,ds^{\prime}=\int_{s_{*}-2\pi}^{s}|X^ {\prime}(s^{\prime})|\,ds^{\prime}=\frac{1}{2}\mathcal{L}(t).\] Then \[L(s,s^{\prime})=\begin{cases}\int_{s}^{s^{\prime}}|X^{\prime}(s^{\prime\prime })|\,ds^{\prime\prime}&\text{if }s^{\prime}\in[s,s_{*}),\\ \int_{s^{\prime}}^{s}|X^{\prime}(s^{\prime\prime})|\,ds^{\prime\prime}&\text{ if }s^{\prime}\in(s_{*}-2\pi,s).\end{cases}\] The desired claim then follows from a change of variable. **Lemma 5.4**.: _For any \(s,s^{\prime}\in\mathbb{T}\), \(|\Phi(s,s^{\prime})|\leq\kappa_{*}L(s,s^{\prime})\)._ Proof.: Without loss of generality, we assume \(s^{\prime}\in(s,s+2\pi)\) and \(L(s,s^{\prime})=\int_{s}^{s^{\prime}}|X^{\prime}(s^{\prime\prime})|\,ds^{\prime\prime}\). Since \(\operatorname{Im}\frac{X(\tau)}{X(s)-X(s^{\prime})}\big{|}_{\tau=s}^{s^{ \prime}}=0\), by Rolle's theorem, there exists \(s_{*}\in(s,s^{\prime})\) such that \(\operatorname{Im}\frac{(s-s^{\prime})X^{\prime}(s_{*})}{X(s)-X(s^{\prime})}=0\), which implies \(\frac{X^{\prime}(s_{*})}{X(s)-X(s^{\prime})}\in\mathbb{R}\setminus\{0\}\). Hence, \[\Phi(s,s^{\prime})=\arg\frac{X^{\prime}(s)X^{\prime}(s^{\prime})}{(X(s)-X(s^{ \prime}))^{2}}=\arg\frac{X^{\prime}(s)X^{\prime}(s^{\prime})}{X^{\prime}(s_{ *})^{2}}=\alpha(s)+\alpha(s^{\prime})-2\alpha(s_{*}),\] where the equalities are understood in the modulo \(2\pi\). Since \(\alpha^{\prime}=\kappa|X^{\prime}|\) by (4.1), we have that \[|\Phi(s,s^{\prime})| \leq|\alpha(s)-\alpha(s_{*})|_{\mathbb{T}}+|\alpha(s^{\prime})- \alpha(s_{*})|_{\mathbb{T}}\] \[\leq\,\int_{s}^{s_{*}}|\kappa(s^{\prime\prime})X^{\prime}(s^{ \prime\prime})|\,ds^{\prime\prime}+\int_{s_{*}}^{s^{\prime}}|\kappa(s^{\prime \prime})X^{\prime}(s^{\prime\prime})|\,ds^{\prime\prime}\] \[\leq\kappa_{*}\int_{s}^{s^{\prime}}|X^{\prime}(s^{\prime\prime}) |ds^{\prime\prime}=\kappa_{*}L(s,s^{\prime}).\] Here \(|\cdot|_{\mathbb{T}}\) denotes the distance on \(\mathbb{T}\). **Proposition 5.1**.: _Suppose \(\Phi_{*}(0)<\pi/4\). For some universal constant \(\beta>0\), it holds_ \[\min_{s}|X^{\prime}(s,t)|\geq R_{X}\exp\left[-2\coth\left(\frac{\beta}{2}\int_ {0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right)\right] \tag{5.2}\] _for all \(t\in(0,T]\). In particular, by Proposition 3.1, the lower bound is positive and strictly increasing for \(t>0\). As a result, for any \(t>0\), \(X(\cdot,t)\) satisfies the well-stretched condition: more precisely, for distinct \(s,s^{\prime}\in\mathbb{T}\),_ \[|X(s,t)-X(s^{\prime},t)|\geq|s-s^{\prime}|_{\mathbb{T}}\cdot C\min_{s}|X^{ \prime}(s,t)|,\] _where \(C\) is a universal constant._ Proof.: The proof is similar to that of Lemma 4.1 in [12]. We proceed in several steps. _Step \(1\)._ Fix \(t\). Assume that \(\min_{s}|X^{\prime}(s,t)|\) is attained at \(s\in\mathbb{T}\). We only consider the case \(|X^{\prime}(s)|\leq R_{X}\). 
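This restriction is harmless: the comparison function \(\Lambda_{*}\) constructed in Step 2 below satisfies \[\Lambda_{*}(t)=\exp\left[-2\coth\left(\frac{\beta}{2}\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right)\right]\leq e^{-2}<1\] because \(\coth x\geq 1\) for \(x>0\), and the differential inequality (5.6) derived below is only invoked in Step 3 near times where \(\Lambda\) touches \(\Lambda_{*}\), i.e., where \(\min_{s}|X^{\prime}(s,\cdot)|<R_{X}\).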
With \(\delta\leq\mathcal{L}(t)/2\) to be determined, we derive that \[\begin{split}&\text{p.v.}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{ \prime})|(|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|)}{|X(s^{\prime})-X(s)|^{2}} \,ds^{\prime}\geq\int_{\{L(s,s^{\prime})\geq\delta\}}\frac{|X^{\prime}(s^{ \prime})|(|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|)}{L(s,s^{\prime})^{2}}\,ds^{ \prime}\\ \geq&\left(\int_{\{L(s,s^{\prime})\geq\delta\}} \frac{|X^{\prime}(s^{\prime})|^{2}}{L(s,s^{\prime})^{2}}\,ds^{\prime}\right) \left(\frac{1}{2\pi}\int_{\{L(s,s^{\prime})\geq\delta\}}1\,ds^{\prime}\right) -|X^{\prime}(s)|\int_{\{L(s,s^{\prime})\geq\delta\}}\frac{|X^{\prime}(s^{ \prime})|}{L(s,s^{\prime})^{2}}\,ds^{\prime}\\ \geq&\frac{1}{2\pi}\left(\int_{\{L(s,s^{\prime})\geq \delta\}}\frac{|X^{\prime}(s^{\prime})|}{L(s,s^{\prime})}\,ds^{\prime}\right)^ {2}-|X^{\prime}(s)|\int_{\{L(s,s^{\prime})\geq\delta\}}\frac{|X^{\prime}(s^{ \prime})|}{L(s,s^{\prime})^{2}}\,ds^{\prime}.\end{split}\] In the last step, we applied the Cauchy-Schwarz inequality. By Lemma 5.3, \[\text{p.v.}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime})|(|X^{\prime}(s^{ \prime})|-|X^{\prime}(s)|)}{|X(s^{\prime})-X(s)|^{2}}\,ds^{\prime}\geq\frac{1} {2\pi}\left(2\ln\frac{\mathcal{L}(t)/2}{\delta}\right)^{2}-2|X^{\prime}(s)| \delta^{-1}.\] Taking \(\delta=\pi|X^{\prime}(s)|\leq\mathcal{L}(t)/2\) and applying the isoperimetric inequality \(\mathcal{L}(t)\geq 2\pi R_{X}\), we obtain \[\begin{split}\text{p.v.}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{ \prime})|(|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|)}{|X(s^{\prime})-X(s)|^{2}} \,ds^{\prime}\geq\frac{2}{\pi}\left(\ln\frac{R_{X}}{|X^{\prime}(s)|}\right)^ {2}-\frac{2}{\pi}.\end{split} \tag{5.3}\] On the other hand, by Proposition 3.2, Lemma 5.3, and Lemma 5.4, \[\begin{split}&\quad\int_{\mathbb{T}}\frac{|X^{\prime}(s^{ \prime})|}{|X(s^{\prime})-X(s)|^{2}}\left[\cos\Phi(s^{\prime},s)-\cos 2\Phi(s^{ \prime},s)\right]ds^{\prime}\\ \leq& C\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime} )|\sin^{2}\Phi(s^{\prime},s)}{L(s,s^{\prime})^{2}}\,ds^{\prime}\\ \leq& C\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime} )|(\min\left\{\kappa_{*}L(s,s^{\prime}),\,\sin\Phi_{*}\right\})^{2}}{L(s,s^{ \prime})^{2}}\,ds^{\prime}\\ =& 2C\int_{0}^{\mathcal{L}(t)/2}\frac{(\min\left\{ \kappa_{*}x,\,\sin\Phi_{*}\right\})^{2}}{x^{2}}\,dx\leq C\kappa_{*}\sin\Phi_{*},\end{split} \tag{5.4}\] where \(C\) is universal. Combining (2.8), (5.3) and (5.4), at \(s\in\mathbb{T}\), \[\begin{split}&\quad\partial_{t}\left(\ln\frac{|X^{\prime}(s)|}{R_{X}} \right)\\ =&\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\frac{|X^{ \prime}(s^{\prime})|(|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|)}{|X(s^{\prime})- X(s)|^{2}}\cos 2\Phi(s^{\prime},s)\,ds^{\prime}\\ &\quad-|X^{\prime}(s)|\cdot\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{ T}}\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|^{2}}\left[\cos\Phi(s^{ \prime},s)-\cos 2\Phi(s^{\prime},s)\right]ds^{\prime}\\ \geq&\frac{1}{2\pi^{2}}\cos 2\Phi_{*}\left(\left|\ln \frac{|X^{\prime}(s)|}{R_{X}}\right|^{2}-1\right)-C|X^{\prime}(s)|\kappa_{*} \sin\Phi_{*}.\end{split}\] Since \(X^{\prime}(s,t)\) is \(C^{1}\) in the space-time, \(\min_{s}|X^{\prime}(s,t)|\) is a Lipschitz function in \(t\). Let \[\Lambda(t):=\frac{\min_{s}|X^{\prime}(s,t)|}{R_{X}}.\] Recall that Proposition 4.1 implies \[\kappa_{*}(t)R_{X}\leq 1+C_{*}\exp\left[C_{*}\left(\int_{0}^{t}\cos 2\Phi_{*}( \tau)\,d\tau\right)^{-1}-c_{*}t\right]=:K(t), \tag{5.5}\] where \(C_{*},c_{*}>0\) are universal constants. 
Then we argue as in Proposition 3.1 to find that \[\partial_{t}\ln\Lambda(t)\geq\frac{1}{2\pi^{2}}\cos 2\Phi_{*}\left[\big{|}\ln \Lambda(t)\big{|}^{2}-1-C_{\dagger}\Lambda(t)K(t)\cdot\frac{\sin\Phi_{*}}{\cos 2 \Phi_{*}}\right]=:F(\ln\Lambda(t),t) \tag{5.6}\] for almost every \(t\). Here \(C_{\dagger}\) is a universal constant, and \[F(x,t)=\frac{1}{2\pi^{2}}\cos 2\Phi_{*}(t)\left[x^{2}-1-C_{\dagger}e^{x}K(t) \cdot\frac{\sin\Phi_{*}(t)}{\cos 2\Phi_{*}(t)}\right].\] _Step 2._ We claim that, if we define for \(t>0\) \[\Lambda_{*}(t):=\exp\left[-2\coth\left(\frac{\beta}{2}\int_{0}^{t}\cos 2\Phi_ {*}(\tau)\,d\tau\right)\right]\] with a sufficiently small but universal \(\beta\), it holds \[\partial_{t}\ln\Lambda_{*}(t)\leq F(\ln\Lambda_{*}(t),t). \tag{5.7}\] We first show that \[C_{\dagger}\Lambda_{*}(t)K(t)\cdot\sin\Phi_{*}\leq\frac{3}{4}\big{|}\ln \Lambda_{*}(t)\big{|}^{2}\cos 2\Phi_{*}. \tag{5.8}\] Indeed, it is not difficult to verify that \[2\coth x\geq 1+x^{-1}\quad\forall\,x>0.\] Denote \[A(t):=\left(\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right)^{-1}.\] Then by Proposition 3.1 and (5.5), \[C_{\dagger}\Lambda_{*}(t)K(t)\cdot\sin\Phi_{*} \leq C_{\dagger}\exp\left[-2\coth\left(\frac{\beta}{2}A(t)^{-1} \right)\right]\left[1+C_{*}\exp\big{(}C_{*}A(t)\big{)}\right]\cdot Ce^{-t/\pi ^{2}}\] \[\leq C\exp\left[-1-\frac{2}{\beta}A(t)\right]\exp\big{(}C_{*}A(t )\big{)}\cdot Ce^{-t/\pi^{2}}\] \[\leq\tilde{C}_{*}\exp\left[\big{(}C_{*}-2\beta^{-1}\big{)}A(t) \right]\cdot e^{-t/\pi^{2}},\] where \(C\) and \(\tilde{C}_{*}\) are universal constants. Assuming \(2\beta^{-1}\geq C_{*}\) yields \[C_{\dagger}\Lambda_{*}(t)K(t)\cdot\sin\Phi_{*}\leq\tilde{C}_{*}e^{-t/\pi^{2}}.\] On the other hand, since \(\Phi_{*}(t)\) is decreasing in \(t\), \[A(t)\geq\big{[}t\cos 2\Phi_{*}(t)\big{]}^{-1}\geq t^{-1},\] so \[\frac{3}{4}\big{|}\ln\Lambda_{*}(t)\big{|}^{2}\cos 2\Phi_{*}=\frac{3}{4}\left[2 \coth\left(\frac{\beta}{2}A(t)^{-1}\right)\right]^{2}\cos 2\Phi_{*}\geq\frac{3}{4} \left[1+\frac{2}{\beta}A(t)\right]^{2}\cos 2\Phi_{*}\geq 3\beta^{-2}t^{-2}.\] By choosing a smaller (but still universal) \(\beta\) if necessary, we can guarantee \(\tilde{C}_{*}e^{-t/\pi^{2}}\leq 3\beta^{-2}t^{-2}\) for all \(t>0\). Combining these estimates, (5.8) is proved. Without loss of generality, we assume \(\beta\leq 1/(2\pi^{2})\). Observe that \(\Lambda_{*}(t)\) is an increasing continuous function for \(t>0\), satisfying that \[\partial_{t}\ln\Lambda_{*}(t)=\beta\cos 2\Phi_{*}\left[\frac{1}{4}\big{|}\ln \Lambda_{*}(t)\big{|}^{2}-1\right],\quad\lim_{t\to 0^{+}}\ln\Lambda_{*}(t)=-\infty.\] By the assumption \(\beta\leq 1/(2\pi^{2})\) and (5.8), \[\partial_{t}\ln\Lambda_{*}(t) \leq\frac{1}{2\pi^{2}}\cos 2\Phi_{*}\left[\big{|}\ln\Lambda_{*}(t) \big{|}^{2}-1-\frac{3}{4}\big{|}\ln\Lambda_{*}(t)\big{|}^{2}\right]\] \[\leq\frac{1}{2\pi^{2}}\cos 2\Phi_{*}\left[\big{|}\ln\Lambda_{*}(t) \big{|}^{2}-1-C_{\dagger}\Lambda_{*}(t)K(t)\cdot\frac{\sin\Phi_{*}}{\cos 2 \Phi_{*}}\right]=F(\ln\Lambda_{*}(t),t).\] _Step 3_.: Now we prove \(\Lambda(t)\geq\Lambda_{*}(t)\) on \((0,T]\) by following the standard justification of comparison principles for ordinary differential equations. With abuse of notations, denote \[t_{0}:=\sup\{\tau\in(0,T]:\,\Lambda_{*}(t)<\Lambda(t)\text{ for all }t\leq\tau\}.\] By the continuity of \(\Lambda_{*}\) and \(\Lambda\) (see the assumptions (A1)-(A3) on \(X\)), \(t_{0}\in(0,T]\) is well-defined. If \(t_{0}=T\), the desired claim is proved due to the time continuity at \(T\). Suppose \(t_{0}<T\). 
Then \(\ln\Lambda(t)>\ln\Lambda_{*}(t)\) for any \(t<t_{0}\) and \(\ln\Lambda(t_{0})=\ln\Lambda_{*}(t_{0})<0\) by continuity. In a neighborhood of \((\ln\Lambda(t_{0}),t_{0})\), by virtue of the continuity of \(\Phi_{*}(t)\) (see Proposition 3.1), one can show that \(F(x,t)\) is continuous in \((x,t)\), Lipschitz continuous in \(x\), and decreasing in \(x\) since \(\ln\Lambda(t_{0})<0\). We denote the Lipschitz constant to be \(L\). By (5.6), for all \(t<t_{0}\), \[\ln\Lambda(t)\leq\ln\Lambda(t_{0})-\int_{t}^{t_{0}}F(\ln\Lambda(\tau),\tau)\, d\tau.\] Combining this with (5.7) and the fact that \(\ln\Lambda(t_{0})=\ln\Lambda_{*}(t_{0})\), we obtain that, for \(t<t_{0}\) with \(|t-t_{0}|\ll 1\), \[\ln\Lambda(t)-\ln\Lambda_{*}(t) \leq\,-\int_{t}^{t_{0}}\big{(}F(\ln\Lambda(\tau),\tau)-F(\ln \Lambda_{*}(\tau),\tau)\big{)}\,d\tau\] \[\leq L\int_{t}^{t_{0}}\big{(}\ln\Lambda(\tau)-\ln\Lambda_{*}(\tau )\big{)}\,d\tau.\] Then by the Gronwall's inequality, we must have \(\ln\Lambda(t)-\ln\Lambda_{*}(t)\leq 0\) for \(t\) satisfying \(0<t_{0}-t\ll 1\), which is a contradiction. Therefore, \(\Lambda(t)\geq\Lambda_{*}(t)\) for all \(t\in(0,T]\), and this gives (5.2). Lastly, given distinct \(s,s^{\prime}\in\mathbb{T}\), assume that \(s<s^{\prime}<s+2\pi\) and \(L(s,s^{\prime})\) is the length of the arc \(X([s,s^{\prime}],t)\) (otherwise, consider \(X([s^{\prime},s+2\pi],t)\) instead). By Proposition 3.2, \[|X(s,t)-X(s^{\prime},t)| \geq CL(s,s^{\prime})=C\int_{s}^{s^{\prime}}|X^{\prime}(s^{\prime \prime},\tau)|\,ds^{\prime\prime}\] \[\geq(s^{\prime}-s)\cdot C\min_{s}|X^{\prime}(s,t)|\geq|s-s^{ \prime}|_{\mathbb{T}}\cdot C\min_{s}|X^{\prime}(s,t)|.\] ### Higher-order estimates Denote \[\partial_{s}\ln X^{\prime}(s)=\frac{X^{\prime\prime}(s)}{X^{\prime}(s)}=:Z(s)+iW( s), \tag{5.9}\] where (see (4.3)) \[Z(s)=\frac{\partial_{s}|X^{\prime}(s)|}{|X^{\prime}(s)|},\quad W(s)=\kappa(s)|X ^{\prime}(s)|. \tag{5.10}\] In this subsection, we shall first bound \(Z\) in \(L^{2}\), which is motivated by a special estimate in the tangential Peskin problem [12, Lemma 5.3]. This then allows us to bound \(X^{\prime\prime}\). We first derive the equation for \(Z\). 
**Lemma 5.5**.: \(Z(s)=\partial_{s}|X^{\prime}(s)|/|X^{\prime}(s)|\) _solves_ \[\begin{split}&\partial_{t}Z(s)\\ =&\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}}\left[ \mathrm{Re}\,J(s,s^{\prime})\big{(}Z(s^{\prime})-Z(s)\big{)}-\mathrm{Im}\,J(s, s^{\prime})\big{(}W(s^{\prime})-W(s)\big{)}\right]ds^{\prime}\\ &+\frac{1}{2\pi}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime})|^{ 2}\sin 2\Phi(s,s^{\prime})}{|X(s^{\prime})-X(s)|^{2}}\left(3\mathrm{Im}\,I(s,s^{\prime})-\kappa(s)|X^{\prime}(s)|\right)ds^{\prime}.\end{split} \tag{5.11}\] Proof.: By Lemma 2.1 and (2.7), \[\frac{\partial_{t}|X^{\prime}(s)|}{|X^{\prime}(s)|}=\mathcal{I}_{1}+\mathcal{ I}_{2},\] where \[\mathcal{I}_{1} :=\frac{1}{4\pi}\int_{\mathbb{T}}\mathrm{Re}\left[\frac{X^{ \prime}(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}(s))}{(X(s^{\prime})-X(s) )^{2}}-\frac{X^{\prime}(s^{\prime})X^{\prime\prime}(s)}{X^{\prime}(s)(X(s^{ \prime})-X(s))}\right]ds^{\prime}+\frac{1}{4\pi}\mathrm{Re}\left[\frac{X^{ \prime\prime}(s)}{X^{\prime}(s)}\pi i\right],\] \[\mathcal{I}_{2} :=\frac{1}{4\pi}\int_{\mathbb{T}}I_{2}(s,s^{\prime})\,ds^{\prime },\qquad I_{2}(s,s^{\prime}):=\mathrm{Im}\left[\frac{2X^{\prime}(s^{\prime})^ {2}X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\mathrm{Im}\left[\frac{X(s^ {\prime})-X(s)}{X^{\prime}(s)}\right].\] Since \[\partial_{t}Z(s)=\partial_{t}\frac{\partial_{s}|X^{\prime}(s)|}{|X^{\prime}(s) |}=\partial_{s}\frac{\partial_{t}|X^{\prime}(s)|}{|X^{\prime}(s)|}=\partial_ {s}\mathcal{I}_{1}+\partial_{s}\mathcal{I}_{2}.\] we take the derivative of \(\mathcal{I}_{1}\), \[\partial_{s}\mathcal{I}_{1} =\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[ \frac{2X^{\prime}(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}(s))X^{\prime }(s)}{(X(s^{\prime})-X(s))^{3}}-\frac{2X^{\prime}(s^{\prime})X^{\prime\prime}( s)}{(X(s^{\prime})-X(s))^{2}}\right]ds^{\prime}\] \[=\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}}\mathrm{Re}\left[ \frac{X^{\prime\prime}(s^{\prime})X^{\prime}(s)-X^{\prime}(s^{\prime})X^{ \prime\prime}(s)}{(X(s^{\prime})-X(s))^{2}}\right]ds^{\prime}.\] In the second equality, we used \[\partial_{s^{\prime}}\left[\frac{(X^{\prime}(s^{\prime})-X^{\prime }(s))X^{\prime}(s)}{(X(s^{\prime})-X(s))^{2}}-\frac{X^{\prime\prime}(s)}{X(s^ {\prime})-X(s)}\right]\] \[= -\frac{2X^{\prime}(s^{\prime})(X^{\prime}(s^{\prime})-X^{\prime}( s))X^{\prime}(s)}{(X(s^{\prime})-X(s))^{3}}+\frac{X^{\prime\prime}(s^{\prime})X^{ \prime}(s)+X^{\prime}(s^{\prime})X^{\prime\prime}(s)}{(X(s^{\prime})-X(s))^{2}},\] and thanks to the regularity assumptions on \(X\), \[\lim_{s^{\prime}\to s}\left[\frac{(X^{\prime}(s^{\prime})-X^{\prime}(s))X^{ \prime}(s)}{(X(s^{\prime})-X(s))^{2}}-\frac{X^{\prime\prime}(s)}{X(s^{\prime})- X(s)}\right]=\frac{X^{\prime\prime\prime}(s)X^{\prime}(s)-X^{\prime\prime}(s)^{2}}{2X^{ \prime}(s)^{2}}.\] On the other hand, by (4.1), \[\partial_{s}I_{2}\] \[=\operatorname{Im}\left[\frac{6X^{\prime}(s^{\prime})^{2}X^{\prime}( s)^{2}}{(X(s^{\prime})-X(s))^{4}}\right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{ \prime}(s)}\right]\] \[\quad+\operatorname{Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}X^{ \prime}(s)}{(X(s^{\prime})-X(s))^{3}}\cdot\frac{X^{\prime\prime}(s)}{X^{\prime }(s)}\right]\operatorname{Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]\] \[\quad-\operatorname{Im}\left[\frac{2X^{\prime}(s^{\prime})^{2}X^{ \prime}(s)}{(X(s^{\prime})-X(s))^{3}}\right]\operatorname{Im}\left[\frac{X^{ \prime\prime}(s)}{X^{\prime}(s)}\cdot\frac{(X(s^{\prime})-X(s))}{X^{\prime}(s) }\right]\] 
\[=\operatorname{Im}\left[6J(s,s^{\prime})^{2}\right]\operatorname{ Im}\left[\frac{X(s^{\prime})-X(s)}{X^{\prime}(s)}\right]-\kappa(s)|X^{\prime}(s)| \operatorname{Im}\left[2J(s,s^{\prime})^{2}\right]\frac{|X(s^{\prime})-X(s)|^ {2}}{|X^{\prime}(s)|^{2}}.\] Here we used the following identity for arbitrary \(A,B,C\in\mathbb{C}\), \[\operatorname{Im}\left[AB\right]\cdot\operatorname{Im}C- \operatorname{Im}A\cdot\operatorname{Im}\left[BC\right]\] \[= -\operatorname{Im}B\cdot\left[\operatorname{Re}A\cdot \operatorname{Im}\bar{C}+\operatorname{Im}A\cdot\operatorname{Re}\bar{C}\right]\] \[= -\operatorname{Im}B\cdot\operatorname{Im}\,\left[A/C\right]|C|^ {2}.\] Combining the above calculations, we obtain that \[\partial_{t}Z(s) =\frac{1}{4\pi}\mathrm{p.v.}\int_{\mathbb{T}}\operatorname{Re} \left[J(s,s^{\prime})\left(\frac{X^{\prime\prime}(s^{\prime})}{X^{\prime}(s^{ \prime})}-\frac{X^{\prime\prime}(s)}{X^{\prime}(s)}\right)\right]ds^{\prime}\] \[\quad+\frac{1}{4\pi}\int_{\mathbb{T}}\operatorname{Im}\left[2J(s, s^{\prime})^{2}\right]\frac{|X(s^{\prime})-X(s)|^{2}}{|X^{\prime}(s)|^{2}}\left(3 \operatorname{Im}I(s,s^{\prime})-\kappa(s)|X^{\prime}(s)|\right)ds^{\prime}.\] Then (5.11) follows. **Proposition 5.2**.: _Let \(\kappa_{*}(t)\) be defined in Proposition 4.1. Suppose \(\Phi_{*}(0)<\frac{\pi}{4}\). For \(t\geq 0\),_ \[\|Z(\cdot,t)\|_{L^{2}(\mathbb{T})}^{2}+\frac{1}{8\pi}\int_{0}^{t} \int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime},\tau)|\cos\Phi(s,s^{\prime},\tau)\big{|}Z(s^{\prime},\tau)-Z(s,\tau)\big{|}^{2}\,ds^{\prime}\,ds\,d\tau\] \[\leq\|Z(\cdot,0)\|_{L^{2}(\mathbb{T})}^{2}+C\max\big{\{}\kappa_{* }(0),R_{X}^{-1}\big{\}}^{3}R_{X}\mathcal{E}(0), \tag{5.12}\] _where \(C>0\) is universal._ Proof.: Observe that \[\partial_{s^{\prime}}\left(\operatorname{Im}I(s,s^{\prime})-\frac{1}{3}\kappa (s)|X^{\prime}(s)|\right)=\operatorname{Im}J(s,s^{\prime}),\] and (see (4.8)) \[\lim_{s^{\prime}\to s}\left(\operatorname{Im}I(s,s^{\prime})-\frac{1}{3}\kappa (s)|X^{\prime}(s)|\right)=\frac{1}{6}\kappa(s)|X^{\prime}(s)|,\] so \[\int_{\mathbb{T}}\operatorname{Im}J(s,s^{\prime})\left(3 \operatorname{Im}I(s,s^{\prime})-\kappa(s)|X^{\prime}(s)|\right)ds^{\prime}\] \[=\frac{3}{2}\int_{\mathbb{T}}\partial_{s^{\prime}}\left[\left( \operatorname{Im}I(s,s^{\prime})-\frac{1}{3}\kappa(s)|X^{\prime}(s)|\right)^{2 }\right]ds^{\prime}=0.\] Hence, (5.11) can be rewritten as \[\partial_{t}Z(s)\] \[=\frac{1}{4\pi}\text{p.v.}\int_{\mathbb{T}}\big{[}\text{Re}\,J(s,s^ {\prime})\big{(}Z(s^{\prime})-Z(s)\big{)}-\text{Im}\,J(s,s^{\prime})\big{(}W(s^ {\prime})-W(s)\big{)}\big{]}\,ds^{\prime}\] \[\quad+\frac{1}{2\pi}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime })||X^{\prime}(s)|\sin 2\Phi(s,s^{\prime})}{|X(s^{\prime})-X(s)|^{2}}\left(|X^{ \prime}(s^{\prime})|-|X^{\prime}(s)|\right)\left(\frac{3\text{Im}\,I(s,s^{ \prime})}{|X^{\prime}(s)|}-\kappa(s)\right)ds^{\prime}\] \[\quad+\frac{1}{2\pi}\int_{\mathbb{T}}\frac{|X^{\prime}(s^{\prime })||X^{\prime}(s)|^{2}}{|X(s^{\prime})-X(s)|^{2}}\left[\sin 2\Phi(s,s^{\prime})-2 \sin\Phi(s,s^{\prime})\right]\left(\frac{3\text{Im}\,I(s,s^{\prime})}{|X^{ \prime}(s)|}-\kappa(s)\right)ds^{\prime}.\] Taking inner product with \(Z\) yields \[\frac{d}{dt}\int_{\mathbb{T}}|Z(s)|^{2}\,ds=2\int_{\mathbb{T}}Z(s )\partial_{t}Z(s)\,ds\] \[=\frac{1}{2\pi}\int_{\mathbb{T}}\text{p.v.}\int_{\mathbb{T}} \big{[}\text{Re}\,J(s,s^{\prime})\big{(}Z(s^{\prime})-Z(s)\big{)}Z(s)-\text{Im }\,J(s,s^{\prime})\big{(}W(s^{\prime})-W(s)\big{)}Z(s)\big{]}\,ds^{\prime}\,ds\] 
\[\quad+\frac{1}{\pi}\int_{\mathbb{T}}Z(s)\int_{\mathbb{T}}\frac{| X^{\prime}(s^{\prime})||X^{\prime}(s)|\sin 2\Phi(s,s^{\prime})}{|X(s^{\prime})-X(s)|^{2}} \left(|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|\right)\left(\frac{3\text{Im}\, I(s,s^{\prime})}{|X^{\prime}(s)|}-\kappa(s)\right)ds^{\prime}\,ds\] \[\quad+\frac{1}{\pi}\int_{\mathbb{T}}Z(s)\int_{\mathbb{T}}\frac{| X^{\prime}(s^{\prime})||X^{\prime}(s)|^{2}}{|X(s^{\prime})-X(s)|^{2}}\left[\sin 2 \Phi(s,s^{\prime})-2\sin\Phi(s,s^{\prime})\right]\left(\frac{3\text{Im}\,I(s, s^{\prime})}{|X^{\prime}(s)|}-\kappa(s)\right)ds^{\prime}\,ds\] \[=:\mathcal{Z}_{1}+\mathcal{Z}_{2}+\mathcal{Z}_{3}.\] For \(\mathcal{Z}_{1}\), we interchange the \(s\)- and the \(s^{\prime}\)-variables and apply the Young's inequality to obtain \[\mathcal{Z}_{1}= -\frac{1}{4\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}\text{Re}\,J(s,s ^{\prime})\big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds\] \[-\frac{1}{2\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}\text{Im}\,J(s,s ^{\prime})W(s)\big{(}Z(s^{\prime})-Z(s)\big{)}\,ds^{\prime}\,ds\] \[\leq -\frac{1}{6\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime} )|\cos\Phi(s,s^{\prime})\big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds\] \[+C\kappa_{*}^{2}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime} )|\sin^{2}\Phi(s,s^{\prime})|X^{\prime}(s)|^{2}\,ds^{\prime}\,ds,\] where \(C\) is a universal constant given that \(\Phi_{*}<\pi/4\). Lemma 4.2 implies \[\left|\frac{3\text{Im}\,I(s,s^{\prime})}{|X^{\prime}(s)|}-\kappa(s)\right|\leq C \kappa_{*}.\] So for \(\mathcal{Z}_{2}\), by the Cauchy-Schwarz inequality, \[\mathcal{Z}_{2} \leq C\kappa_{*}\int_{\mathbb{T}}\int_{\mathbb{T}}|Z(s)|\frac{|X^ {\prime}(s^{\prime})||X^{\prime}(s)||\sin 2\Phi(s,s^{\prime})|}{|X(s^{\prime})-X(s)|^{2}} \left||X^{\prime}(s^{\prime})|-|X^{\prime}(s)|\right|ds^{\prime}\,ds\] \[\leq C\kappa_{*}\left(\int_{\mathbb{T}}\int_{\mathbb{T}}|Z(s)|^{2} |X^{\prime}(s)|\cdot\frac{|X^{\prime}(s^{\prime})||\sin 2\Phi(s,s^{\prime})|^{2}}{|X(s^{ \prime})-X(s)|^{2}}\,ds^{\prime}\,ds\right)^{1/2}\] \[\quad\cdot\left(\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{|X^{ \prime}(s^{\prime})||X^{\prime}(s)|}{|X(s^{\prime})-X(s)|^{2}}\left||X^{ \prime}(s^{\prime})|-|X^{\prime}(s)|\right|^{2}ds^{\prime}\,ds\right)^{1/2}\] \[\leq C\kappa_{*}\Phi_{*}\left(\int_{\mathbb{T}}\int_{\mathbb{T}}|Z(s) |^{2}|X^{\prime}(s)|\frac{|X^{\prime}(s^{\prime})|}{|X(s^{\prime})-X(s)|^{2}}| \sin\Phi(s,s^{\prime})|^{2}\,ds^{\prime}\,ds\right)^{1/2}\] \[\quad\cdot\left(\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime} )|\,\big{|}|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|\big{|}^{2}\cos\Phi(s,s^{ \prime})\,ds^{\prime}\,ds\right)^{1/2}\] \[\leq C\kappa_{*}^{3/2}\Phi_{*}^{3/2}R_{X}^{1/2}\left(\int_{ \mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})|\big{|}Z(s^{\prime})-Z(s)\big{|} ^{2}\cos\Phi(s,s^{\prime})\,ds^{\prime}\,ds\right)^{1/2}\] \[\quad\cdot\left(\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime} )|\sin^{2}\Phi(s,s^{\prime})|X^{\prime}(s)|^{2}\,ds^{\prime}\,ds\right)^{1/2}.\] Combining the estimates for \(\mathcal{Z}_{j}\) (\(j=1,2,3\)) and applying the Young's inequality, we obtain that \[\begin{split}&\frac{d}{dt}\int_{\mathbb{T}}|Z(s)|^{2}\,ds+\frac{1} {8\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})|\cos\Phi(s,s^{\prime} )\big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds\\ \leq& C\big{(}\kappa_{*}^{2}+\kappa_{*}^{3}\Phi_{*} ^{3}R_{X}\big{)}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})|\sin^{2} \Phi(s,s^{\prime})|X^{\prime}(s)|^{2}\,ds^{\prime}\,ds\\ 
&+C\kappa_{*}^{3}\Phi_{*}R_{X}\int_{\mathbb{T}}\int_{\mathbb{T}}| J(s,s^{\prime})|\,\big{|}|X^{\prime}(s^{\prime})|-|X^{\prime}(s)|\big{|}^{2}\cos \Phi(s,s^{\prime})\,ds^{\prime}\,ds.\end{split} \tag{5.14}\] In view of the energy estimate (5.1), we have that \[\begin{split}&\frac{d}{dt}\int_{\mathbb{T}}\frac{1}{2}|X^{\prime} (s)|^{2}\,ds\\ \leq&-\frac{1}{16\pi}\int_{\mathbb{T}}\int_{\mathbb{T }}|J(s,s^{\prime})|\,\big{(}|X^{\prime}(s)|-|X^{\prime}(s^{\prime})|\big{)}^{2 }\cos\Phi(s,s^{\prime})\,ds^{\prime}\,ds\\ &-\frac{1}{16\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime })|\big{(}|X^{\prime}(s)|+|X^{\prime}(s^{\prime})|\big{)}^{2}\sin^{2}\Phi(s,s^ {\prime})\,ds^{\prime}\,ds\\ =:&-\frac{1}{16\pi}\mathcal{D}(t).\end{split} \tag{5.15}\] Here we used, with \(\Phi=\Phi(s,s^{\prime})\), \(|\Phi|\leq\pi/4\), \(\cos\Phi\geq\cos 2\Phi\geq 0\), and \[\cos\Phi-\cos 2\Phi\geq\cos^{2}\Phi-\cos 2\Phi=\sin^{2}\Phi.\] On the other hand, Proposition 4.1 implies that \(1\leq\kappa_{*}(t)R_{X}\leq\max\{\kappa_{*}(0)R_{X},C_{*}\}\) with \(C_{*}=7+5\sqrt{2}\). Applying these estimates to (5.14) yields \[\begin{split}&\frac{d}{dt}\|Z(\cdot,t)\|_{L^{2}}^{2}+\frac{1}{8 \pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})|\cos\Phi(s,s^{\prime}) \big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds\\ \leq& C_{\dagger}\max\{\kappa_{*}(0)R_{X},C_{*}\}^{3 }R_{X}^{-2}\mathcal{D}(t),\end{split} \tag{5.16}\] where \(C_{\dagger}>0\) is universal. Adding (5.15) and (5.16) with suitable coefficients, we obtain that \[\begin{split}&\frac{d}{dt}\Big{[}\|Z(\cdot,t)\|_{L^{2}}^{2}+16 \pi C_{\dagger}\max\{\kappa_{*}(0)R_{X},C_{*}\}^{3}R_{X}^{-2}\mathcal{E}(t) \Big{]}\\ &+\frac{1}{8\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime })|\cos\Phi(s,s^{\prime})\big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds \leq 0.\end{split}\] Then (5.12) follows. We can further derive a decay estimate for \(\|Z\|_{L^{2}}\). **Proposition 5.3**.: _There exist universal constants \(\gamma,C>0\), such that for all \(t\geq 0\),_ \[\|Z(\cdot,t)\|_{L^{2}(\mathbb{T})}^{2}\leq Ce^{-\gamma t}\left(\|Z(\cdot,0)\|_{ L^{2}(\mathbb{T})}^{2}+\max\big{\{}\kappa_{*}(0)^{3}R_{X},\|X^{\prime}(s,0)\|_{L^{ \infty}}R_{X}^{-3}\big{\}}\mathcal{E}(0)\right).\] Proof.: The case \(t\in[0,1]\) follows from Proposition 5.2, so we assume \(t\geq 1\). We first apply Proposition 3.1 to see that, if \(t\geq 1\), \[\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\geq Ct,\] where \(C>0\) is universal. Hence, for \(t\geq 1\), Proposition 4.1 gives that \[\kappa_{*}(t)R_{X}\leq 1+C\exp\big{[}Ct^{-1}\big{]}\leq C,\] and Proposition 5.1 implies that \[\min_{s}|X^{\prime}(s,t)|\geq R_{X}\exp\big{[}-2\coth(Ct)\big{]}\geq CR_{X},\] where \(C>0\) is universal. 
By Proposition 3.2 and (5.13), for \(t\geq 1\), \[\frac{1}{8\pi}\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})| \cos\Phi(s,s^{\prime})\big{|}Z(s^{\prime})-Z(s)\big{|}^{2}\,ds^{\prime}\,ds\] \[\geq\frac{\cos\Phi_{*}}{8\pi}\cdot\frac{2\mathcal{L}(t)}{d_{*}(t )^{2}}\int_{\mathbb{T}}|X^{\prime}(s)||Z(s)|^{2}\,ds\] \[\geq CR_{X}^{-1}\min_{s}|X^{\prime}(s,t)|\|Z(\cdot,t)\|_{L^{2}}^{2 }\geq C\|Z(\cdot,t)\|_{L^{2}}^{2}.\] On the other hand, arguing as in (5.4) and using Lemma 5.1 and Lemma 5.2, \[\int_{\mathbb{T}}\int_{\mathbb{T}}|J(s,s^{\prime})|\sin^{2}\Phi(s,s^{\prime})|X^{\prime}(s)|^{2}\,ds^{\prime}\,ds \leq C\int_{\mathbb{T}}|X^{\prime}(s)|^{3}\kappa_{*}\sin\Phi_{*} \,ds\] \[\leq CR_{X}^{-1}\mathcal{E}(0)\|X^{\prime}(s,0)\|_{L^{\infty}} \Phi_{*}(t).\] Combining these estimates with (5.14) yields that, for \(t\geq 1\), \[\frac{d}{dt}\|Z(\cdot,t)\|_{L^{2}}^{2}+\gamma\|Z(\cdot,t)\|_{L^{2}}^{2}\leq C \big{(}R_{X}^{-3}\mathcal{E}(0)\|X^{\prime}(s,0)\|_{L^{\infty}}+R_{X}^{-2} \mathcal{D}(t)\big{)}\Phi_{*}(t),\] where \(\gamma,C>0\) are universal constants. For simplicity, we assume \(\gamma\leq(2\pi^{2})^{-1}\), so that \(e^{\gamma\tau}e^{-\tau/\pi^{2}}\leq e^{-\gamma\tau}\). By the Gronwall's inequality and Proposition 3.1, for \(t\geq 1\), \[\|Z(\cdot,t)\|_{L^{2}}^{2} \leq e^{-\gamma(t-1)}\|Z(\cdot,1)\|_{L^{2}}^{2}\] \[\quad+C\int_{1}^{t}e^{-\gamma(t-\tau)}\left(R_{X}^{-3}\mathcal{E }(0)\|X^{\prime}(s,0)\|_{L^{\infty}}+R_{X}^{-2}\mathcal{D}(\tau)\right)\Phi_{* }(0)e^{-\tau/\pi^{2}}\,d\tau\] \[\leq e^{-\gamma(t-1)}\|Z(\cdot,1)\|_{L^{2}}^{2}+CR_{X}^{-2}\int_ {1}^{t}e^{-\gamma(t+\tau)}\mathcal{D}(\tau)\,d\tau\] \[\quad+CR_{X}^{-3}\mathcal{E}(0)\|X^{\prime}(s,0)\|_{L^{\infty}} \int_{1}^{t}e^{-\gamma(t+\tau)}\,d\tau\] \[\leq e^{-\gamma(t-1)}\|Z(\cdot,1)\|_{L^{2}}^{2}+Ce^{-\gamma t}R_{ X}^{-3}\mathcal{E}(0)\|X^{\prime}(s,0)\|_{L^{\infty}}.\] In the last inequality, we used \(R_{X}\leq\frac{1}{2\pi}\mathcal{L}(0)\leq\|X^{\prime}(s,0)\|_{L^{\infty}}\) due the isoperimetric inequality, and \[\int_{1}^{t}\mathcal{D}(\eta)\,d\eta\leq 16\pi\big{(}\mathcal{E}(1)-\mathcal{E}( t)\big{)}\leq C\mathcal{E}(0)\] derived from (5.15). This together with (5.12) implies the desired estimate. Finally, we arrived at the uniform boundedness of \(\|X^{\prime\prime}\|_{L^{2}}\). **Corollary 5.1**.: _Suppose \(\Phi_{*}(0)<\pi/4\), \(\mathcal{E}(0)<+\infty\), \(\kappa_{*}(0)<+\infty\), and \(Z(\cdot,0)\in L^{2}(\mathbb{T})\). Then \(\|X^{\prime\prime}(\cdot,t)\|_{L^{2}(\mathbb{T})}\) is uniformly bounded for \(t\geq 0\), and the bound only depends on \(R_{X}\), \(\mathcal{E}(0)\), \(\kappa_{*}(0)\), \(\|X^{\prime}(\cdot,0)\|_{L^{\infty}}\), and \(\|Z(\cdot,0)\|_{L^{2}}\)._ Proof.: By (5.9) and (5.10), \[|X^{\prime\prime}(s)|^{2}=|X^{\prime}(s)|^{2}\big{(}|Z(s)|^{2}+|X^{\prime}(s)|^{2 }|\kappa(s)|^{2}\big{)},\] so by Proposition 4.1, Lemma 5.1 and Lemma 5.2, \[\|X^{\prime\prime}\|_{L^{2}}^{2}\leq\|X^{\prime}\|_{L^{\infty}}^{2}\big{(}\|Z \|_{L^{2}}^{2}+2\mathcal{E}(t)\kappa_{*}(t)^{2}\big{)}\leq C\|X^{\prime}( \cdot,0)\|_{L^{\infty}}^{2}\big{(}\|Z\|_{L^{2}}^{2}+\mathcal{E}(0)\kappa_{*}(0 )^{2}\big{)}.\] Then the claim follows from Proposition 5.2. ## 6. Proof of the Main Results Now we are ready to prove Theorem 1.1. 
Proof of Theorem 1.1.: Since \(X_{0}(s)\in h^{1,\alpha}(\mathbb{T})\) and \(X_{0}\) satisfies the well-stretched condition, say with constant \(\lambda>0\), by Theorem 1.3 of [4], there exists \(T>0\) and a unique \[X(s,t)\in C([0,T];C^{1,\alpha}(\mathbb{T}))\cap C^{1}([0,T];C^{\alpha}( \mathbb{T}))\] such that \(X(s,t)\) is a strong solution of (1.9) with the initial condition \(X(s,0)=X_{0}(s)\). It holds that 1. \(X(s,t)\) satisfies (1.9) in \(\mathbb{T}\times(0,T]\), and \(X(\cdot,t)\to X_{0}(t)\) in \(C^{1,\alpha}(\mathbb{T})\) as \(t\to 0\); 2. for all \(t\in[0,T]\), \(X(\cdot,t)\) satisfies the well-stretched condition with constant \(\lambda/2\) (see Section 3.3 of [4]); 3. by Theorem 1.4 of [4], for any \(\delta\in(0,T]\) and any \(k\in\mathbb{N}\), \(X\in C^{1}([\delta,T];C^{k}(\mathbb{T}))\). Take \(t_{0}\in(0,T/2]\), \(t_{0}\leq 1\), such that \(\Phi_{*}(t)<\pi/4\) for all \(t\in[0,t_{0}]\). This is achievable thanks to the time continuity of \(X\) in \(C^{1,\alpha}(\mathbb{T})\) and \(\Phi_{*}(0)<\pi/4\). Indeed, the property (ii) above implies that \(\Phi(s_{1},s_{2},t)\) is continuous. Clearly, \(t_{0}\) may be chosen arbitrarily small. By the properties (ii) and (iii), for any fixed \(t\in[t_{0},T]\), \(s\mapsto X(s,t)\) is injective, \(\min_{s}|X^{\prime}(s,t)|\geq\lambda/2\) and \(X(\cdot,t)\in C^{\infty}(\mathbb{T})\). In particular, \(\|X^{\prime}(\cdot,t_{0})\|_{L^{\infty}}\), \(\mathcal{E}(t_{0})\), \(\kappa_{*}(t_{0})\), and \(\|Z(\cdot,t_{0})\|_{L^{2}}\) are all finite, and \(X(s,t)\in C^{1}([t_{0},T];C^{k}(\mathbb{T}))\) for any \(k\in\mathbb{N}\). Therefore, \(X(s,t)\) on \(\mathbb{T}\times[t_{0},T]\) satisfies our assumptions (A1)-(A3) in Section 2.1. We treat \(t_{0}\) as the new initial time, and find that: 1. By Corollary 5.1 and the Sobolev embedding \(H^{2}(\mathbb{T})\hookrightarrow C^{1,1/2}(\mathbb{T})\), \(\|X(\cdot,t)\|_{\dot{C}^{1,1/2}(\mathbb{T})}\) admits a uniform upper bound for \(t\in[t_{0},T]\). The bound only depends on \(X(\cdot,t_{0})\) but not on \(T\). 2. By Proposition 3.1 and Proposition 5.1, there exists a constant \(\tilde{\lambda}>0\) only depending on \(t_{0}\), such that for all \(t\in[2t_{0},T]\), \(X(\cdot,t)\) satisfies the well-stretched condition with constant \(\tilde{\lambda}R_{X}\). In particular, \(\tilde{\lambda}\) does not depend on \(X_{0}\) or \(T\). Then by Theorem 1.3 and Theorem 1.8 of [4], the solution can be uniquely extended to \([0,+\infty)\). For any \(\delta>0\) and any \(k\in\mathbb{N}\), \(X\in C^{1}_{loc}([\delta,+\infty);C^{k}(\mathbb{T}))\). In the sequel, we study long-time behavior of the global solution \(X(s,t)\). For convenience, we introduce the following terminology. Suppose \(Q(t)\) is a time-varying quantity defined in terms of the solution \(X(\cdot,t)\) for \(t\in(0,+\infty)\). By saying \(Q(t)\) converges to \(Q_{*}\in\mathbb{R}\) exponentially (or decays exponentially in the case of \(Q_{*}=0\)), we mean that for any \(\delta>0\), there exists a constant \(C>0\) that may depend on \(\delta\) and \(X_{0}\), and a universal constant \(c>0\) that does not depend on \(\delta\) or \(X_{0}\), such that \(|Q(t)-Q_{*}|\leq Ce^{-ct}\) for all \(t\geq\delta\). If not otherwise stated, we will always adopt this convention in the rest of the proof. By Proposition 3.1 and the arbitrary smallness of \(t_{0}\) above, \(\Phi_{*}(t)<\pi/4\) for all \(t\geq 0\), \(\Phi_{*}(t)\) is continuous and non-increasing in \([0,+\infty)\), and \(\Phi_{*}(t)\) decays exponentially, satisfying the claimed bound. 
By Proposition 3.2, \[\mathcal{L}(t)=2\sup_{s^{\prime}}L(s,s^{\prime})\leq\frac{\pi d_{*}}{1-\sin\Phi _{*}}\leq\frac{2\pi R_{X}}{1-\sin\Phi_{*}}\cdot\tan\left(\frac{\pi}{4}+\frac{ \Phi_{*}}{2}\right).\] On the other hand, the isoperimetric inequality gives \(\mathcal{L}(t)\geq 2\pi R_{X}\). Hence, by Proposition 3.1, as \(t\to+\infty\), \(\mathcal{L}(t)\) converge to \(2\pi R_{X}\) exponentially. Proposition 4.1 implies that \(\|\kappa(s)R_{X}-1\|_{L^{\infty}}\) decays exponentially, and satisfies the desired bound in Theorem 1.1. By Proposition 5.1, for any \(t>0\), and any distinct \(s,s^{\prime}\in\mathbb{T}\), \[|X(s,t)-X(s^{\prime},t)|\geq|s-s^{\prime}|_{\mathbb{T}}\cdot R_{X}\cdot C\exp \left[-2\coth\left(\frac{\beta}{2}\int_{0}^{t}\cos 2\Phi_{*}(\tau)\,d\tau\right) \right],\] where \(C\) and \(\beta\) are universal constants. By Proposition 3.1, the function \[t\mapsto C\exp\left[-2\coth\left(\frac{\beta}{2}\int_{0}^{t}\cos 2\Phi_{*}( \tau)\,d\tau\right)\right]\] has a universal non-negative lower bound \(\lambda_{\circ}(t)\) that is strictly increasing on \([0,+\infty)\) such that \(\lambda_{\circ}(0)=0\) and \(\lambda_{\circ}(t)>0\) for \(t>0\). This proves the desired claim for the well-stretched condition. Lastly, we prove the exponential convergence to an equilibrium. By the Cauchy-Schwarz inequality, \[\left(\int_{\mathbb{T}}|\partial_{s}|X^{\prime}(s)||\,ds\right)^{2}\leq\int_{ \mathbb{T}}\left(\frac{\partial_{s}|X^{\prime}(s)|}{|X^{\prime}(s)|}\right)^{ 2}ds\int_{\mathbb{T}}|X^{\prime}(s)|^{2}\,ds=\|Z\|_{L^{2}}^{2}\|X^{\prime}\|_{L ^{2}}^{2}.\] Using the smoothness of \(X(\cdot,t)\) for \(t>0\), Lemma 5.1, and Proposition 5.3, we find \[\int_{\mathbb{T}}\big{|}\partial_{s}|X^{\prime}(s)||\,ds\text{ decays exponentially.}\] Since \[\big{|}|X^{\prime}(s)|-R_{X}\big{|} \leq\frac{1}{2\pi}\left|\int_{\mathbb{T}}(|X^{\prime}(s^{\prime} )|-R_{X})\,ds^{\prime}\right|+\int_{\mathbb{T}}\big{|}\partial_{s^{\prime}}|X ^{\prime}(s^{\prime})||\,ds^{\prime}\] \[=\frac{1}{2\pi}|\mathcal{L}(t)-2\pi R_{X}|+\int_{\mathbb{T}} \big{|}\partial_{s^{\prime}}|X^{\prime}(s^{\prime})||\,ds^{\prime},\] \(|X^{\prime}(s)|\) converge uniformly to \(R_{X}\) exponentially. According to (5.9) and (5.10), \[X^{\prime\prime}(s)-iX^{\prime}(s)=\big{[}Z(s)+i\big{(}\kappa(s)|X^{\prime}(s )|-1\big{)}\big{]}X^{\prime}(s).\] By Proposition 4.1, Proposition 5.3, and the convergence of \(|X^{\prime}|\) shown above, \[X^{\prime\prime}(s)-iX^{\prime}(s)\text{ converges in }L^{2}(\mathbb{T})\text{ to }0\text{ exponentially.} \tag{6.1}\] We write (1.5) and (1.6) into the complex form \[\begin{split}\partial_{t}X(s)&=\frac{1}{4\pi}\int_ {\mathbb{T}}-\ln|X(s)-X(s^{\prime})|X^{\prime\prime}(s^{\prime})\,ds^{\prime} \\ &\qquad+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{X(s)-X(s^{\prime})}{| X(s)-X(s^{\prime})|^{2}}\text{Re}\left[X^{\prime\prime}(s^{\prime})\overline{(X(s)-X(s ^{\prime}))}\right]ds^{\prime}.\end{split} \tag{6.2}\] This is equivalent to (1.9) given the smoothness of \(X(s,t)\) and the well-stretched condition for \(t\geq\delta\) for any given \(\delta>0\). 
Observe that, by integration by parts, \[\int_{\mathbb{T}}\left(-\ln|X(s)-X(s^{\prime})|iX^{\prime}(s^{ \prime})+\frac{X(s)-X(s^{\prime})}{|X(s)-X(s^{\prime})|^{2}}\mathrm{Re}\left[iX ^{\prime}(s^{\prime})\overline{(X(s)-X(s^{\prime}))}\right]\right)ds^{\prime}\] \[= \int_{\mathbb{T}}\left(i\partial_{s^{\prime}}\ln|X(s)-X(s^{\prime })|(X(s^{\prime})-X(s))+\frac{X(s^{\prime})-X(s)}{|X(s)-X(s^{\prime})|^{2}} \mathrm{Re}\left[iX^{\prime}(s^{\prime})\overline{(X(s^{\prime})-X(s))}\right] \right)ds^{\prime}\] \[= \int_{\mathbb{T}}\frac{X(s^{\prime})-X(s)}{|X(s)-X(s^{\prime})|^{ 2}}\left(i\mathrm{Re}\left[X^{\prime}(s^{\prime})\overline{(X(s^{\prime})-X(s) )}\right]-\mathrm{Im}\left[X^{\prime}(s^{\prime})\overline{(X(s^{\prime})-X(s ))}\right]\right)ds^{\prime}\] \[= i\int_{\mathbb{T}}\frac{X(s^{\prime})-X(s)}{|X(s)-X(s^{\prime}) |^{2}}\cdot X^{\prime}(s^{\prime})\overline{(X(s^{\prime})-X(s))}\,ds^{\prime}\] \[= i\int_{\mathbb{T}}X^{\prime}(s^{\prime})\,ds^{\prime}=0.\] Combining this with (6.2) yields \[\partial_{t}X(s) = \frac{1}{4\pi}\int_{\mathbb{T}}-\ln\left[\frac{|X(s)-X(s^{\prime} )|}{R_{X}}\right]\left(X^{\prime\prime}(s^{\prime})-iX^{\prime}(s^{\prime}) \right)ds^{\prime}\] \[+\frac{1}{4\pi}\int_{\mathbb{T}}\frac{X(s)-X(s^{\prime})}{|X(s)-X( s^{\prime})|^{2}}\mathrm{Re}\left[\left(X^{\prime\prime}(s^{\prime})-iX^{ \prime}(s^{\prime})\right)\overline{(X(s)-X(s^{\prime}))}\right]ds^{\prime}.\] Here we inserted a factor of \(1/R_{X}\) in the logarithm, which does not change the value of the integral. Given arbitrary \(\delta>0\), thanks to the uniform well-stretched condition of \(X(\cdot,t)\) for all \(t\geq\delta\), \[|\partial_{t}X(s)| \leq C\int_{\mathbb{T}}\left(\big{|}\ln|s-s^{\prime}|\big{|}+1 \right)\big{|}X^{\prime\prime}(s^{\prime})-iX^{\prime}(s^{\prime})\big{|}\,ds^ {\prime}\] \[\leq C\Big{\|}\big{|}\ln|s-s^{\prime}|\big{|}+1\Big{\|}_{L^{2}_{s^ {\prime}}}\big{\|}X^{\prime\prime}-iX^{\prime}\big{\|}_{L^{2}}\leq C\big{\|}X^ {\prime\prime}-iX^{\prime}\big{\|}_{L^{2}},\] where \(C\) depends on \(\delta\) only. Therefore, \(\partial_{t}X\) uniformly converges to \(0\) exponentially. This further implies that there exists \(X_{\infty}=X_{\infty}(s)\in L^{\infty}(\mathbb{T})\) such that \[X(\cdot,t)\text{ converges uniformly to }X_{\infty}(s),\text{ with }\|X(\cdot,t)-X_{\infty}(\cdot) \|_{L^{\infty}}\text{ decaying exponentially.} \tag{6.3}\] By the uniform convergence, \(X_{\infty}\) satisfies the well-stretched condition. In view of the uniform bound for \(\|X^{\prime\prime}\|_{L^{2}}\) for all \(t\geq 1\), \(X_{\infty}\in H^{2}(\mathbb{T})\) and \(X(\cdot,t)\) converges to \(X_{\infty}\) weakly in \(H^{2}(\mathbb{T})\) and strongly in \(C^{1}(\mathbb{T})\) as \(t\to+\infty\). By (6.1) we have \(X^{\prime\prime}_{\infty}-iX^{\prime}_{\infty}=0\). As \(|X^{\prime}(s)|\) converges uniformly to \(R_{X}\) exponentially, we have \(|X^{\prime}_{\infty}(s)|=R_{X}\). Therefore, there exists \(x_{\infty}\in\mathbb{C}\) and \(\xi_{\infty}\in\mathbb{T}\), such that \[X_{\infty}(s)=x_{\infty}+R_{X}e^{i(s+\xi_{\infty})}.\] Now we prove strong \(H^{2}\)-convergence to \(X_{\infty}\). 
Since \(X^{\prime\prime}_{\infty}-iX^{\prime}_{\infty}\equiv 0\), \[\|X(s,t)-X_{\infty}(s)\|_{\dot{H}^{2}}\] \[\leq\big{\|}\big{(}X^{\prime\prime}(s,t)-iX^{\prime}(s,t)\big{)} -\big{(}X^{\prime\prime}_{\infty}(s)-iX^{\prime}_{\infty}(s)\big{)}\big{\|}_{L^ {2}}+\|X(s,t)-X_{\infty}(s)\|_{\dot{H}^{1}}\] \[\leq\|X^{\prime\prime}(s,t)-iX^{\prime}(s,t)\|_{L^{2}}+C\|X(s,t) -X_{\infty}(s)\|_{L^{2}}^{1/2}\|X(s,t)-X_{\infty}(s)\|_{\dot{H}^{2}}^{1/2}.\] By Young's inequality, \[\|X(s,t)-X_{\infty}(s)\|_{\dot{H}^{2}}\leq C\|X^{\prime\prime}(s,t)-iX^{ \prime}(s,t)\|_{L^{2}}+C\|X(s,t)-X_{\infty}(s)\|_{L^{2}}.\] Then (6.1) and (6.3) allow us to conclude that \[\|X(\cdot,t)-X_{\infty}(\cdot)\|_{H^{2}}\text{ decays exponentially.}\]
2302.13858
Modular J-PET with Improved o-Ps Detection Efficiency for CPT Tests
J-PET is a photon detector built of plastic scintillators, which already has been commissioned for CPT studies in the decays of positronium. In the first experiment, J-PET has achieved a sensitivity to CPT violation at a level of 10^{-4}, and now it aims to reach a level of 10^{-5}. This will be done by enhancing the three-photon registration efficiency for ortho-positronium decays using a new layer of densely packed plastic scintillators termed Modular J-PET. We present the simulation studies performed for different experimental detection setups to be used for the next CPT test with the Modular J-PET detector.
Neha Chug, Aleksander Gajos
2023-02-27T14:57:16Z
http://arxiv.org/abs/2302.13858v1
# Modular J-PET with Improved o-Ps Detection Efficiency for CPT Tests ###### Abstract J-PET is a photon detector built of plastic scintillators, which has already been commissioned for CPT studies in the decays of positronium. In the first experiment, J-PET achieved a sensitivity to CPT violation at a level of \(10^{-4}\), and now it aims to reach a level of \(10^{-5}\). This will be done by enhancing the three-photon registration efficiency for ortho-positronium decays using a new layer of densely packed plastic scintillators termed Modular J-PET. We present the simulation studies performed for different experimental detection setups to be used for the next CPT test with the Modular J-PET detector. ## 1 Introduction Discrete-symmetry tests in positronium decays can be done by studying certain non-vanishing angular-correlation operators that are odd under particular symmetries [1]. For this test we are interested in the CPT-violation-sensitive operator \(\vec{S}\cdot(\vec{k_{1}}\times\vec{k_{2}})\), which is determined by the angle between the spin of the ortho-positronium (o-Ps) atom and the orientation of its annihilation plane [2]. J-PET, conceived as a tomography device, allows for exclusive registration of a broad range of kinematical configurations of three-photon annihilations with large geometrical acceptance and high angular resolution [3]. ## 2 Towards improving the sensitivity of the CPT test A measurement of the CPT-odd operator \(\vec{S}\cdot(\vec{k_{1}}\times\vec{k_{2}})\) was performed with J-PET, which consists of 192 plastic scintillator strips arranged concentrically in three layers with PMT readouts on both ends of each strip [4]. Data are collected in a trigger-less mode at four different thresholds applied to the photomultiplier signals [5]. The first CPT test with J-PET reached a sensitivity better than the best known previous experimental result by a factor of three [6, 7]. Now, the J-PET detector is being upgraded with an additional layer consisting of 24 modules of densely packed plastic scintillators, with 13 scintillators in each module and silicon photomultiplier readouts. This upgrade will improve the time resolution and enhance the angular acceptance of the detector. Another possible improvement is to increase the formation probability of o-Ps atoms, which is achieved by replacing the cylindrical annihilation chamber (used in the previous experiment) with a spherical vacuum chamber [6, 8]. ## 3 Future CPT test with Modular J-PET Modular J-PET is a compact detector with 24 modules of plastic scintillators, where the modules can be arranged in a single layer or be used as a multi-layer system. We have performed MC simulations of different geometrical configurations of Modular J-PET and the spherical annihilation chamber. These studies aim to choose the best geometrical configuration and to evaluate the measurement time and conditions needed to reach a sensitivity of \(10^{-5}\) to CPT violation, either with the originally proposed combined setup or with a single-layer or multi-layer digital J-PET setup, as shown in Fig. 1. The relative gain in efficiency of the registration of three-photon annihilations of o-Ps in different configurations with Modular J-PET, with respect to the 3-layer J-PET (Fig. 1(a)) used in the previous experiments, as obtained with MC simulations, is given in Table 1. ## 4 Conclusions and perspectives J-PET has already started test measurements with the spherical annihilation chamber and the 3-layer detector setup [9]. 
From our MC simulations, we conclude that the stand-alone digital J-PET detector with the spherical annihilation chamber _(d)_ should be used for the next CPT test. Although its registration efficiency is lower than that of the multi-layer setups _(b)_ and _(c)_, it allows for easier data acquisition, simpler detector systematics, and easier handling of secondary Compton scatterings compared to the combined setup. The high fraction of secondary Compton-scattered events in the multi-layer detector setups, as given in Table 1, would result in an increase in background contributions. With the efficiency shown in Table 1, J-PET would be able to search for the CPT violation at the precision level of \(10^{-5}\), and an efficiency gain by a factor of 17 is sufficient to reach the required precision in four months of data taking. Figure 1: Cross section of the Modular J-PET detector with the spherical annihilation chamber at the center for CPT-symmetry tests, with different configurations consisting of (a) the 3-layer J-PET (already in use), (b) the 3-layer J-PET combined with the 24-module Modular J-PET, (c) a 5-layer setup with 16 and 8 modules of Modular J-PET along with the 3-layer J-PET, and (d) the 24-module digital J-PET used as a stand-alone detector. ## Acknowledgments This work was supported by the Foundation for Polish Science through TEAM/2017-4/39, NCN of Poland through 2019/35/B/ST2/03562, Jagiellonian University MNS grant no. 2021-N17/MNW/000013, and SciMat and qLife Priority Research Areas budget under the program Excellence Initiative - Research University at the Jagiellonian University.
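As a rough cross-check of these numbers (not part of the original analysis), one can assume that the statistical sensitivity to the CPT-odd asymmetry improves as the inverse square root of the number of registered o-Ps \(\to 3\gamma\) events; under that assumption, the required increase in data-taking time follows directly from the quoted efficiency gain:

```python
# Back-of-the-envelope scaling check; assumes sensitivity ~ 1/sqrt(N_events),
# which ignores systematic effects and analysis details.
reached_sensitivity = 1e-4   # first J-PET CPT test
target_sensitivity = 1e-5    # goal of the next campaign
efficiency_gain = 17         # relative 3-gamma registration efficiency (Table 1)

required_event_ratio = (reached_sensitivity / target_sensitivity) ** 2   # 100x events
exposure_ratio = required_event_ratio / efficiency_gain                  # ~5.9x exposure

# Translating the ~5.9x exposure ratio into the quoted "four months" requires the
# effective exposure of the previous campaign, which we do not assume here.
print(f"~{required_event_ratio:.0f}x the events, ~{exposure_ratio:.1f}x the exposure")
```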
2307.11891
Advancements in the GravAD Pipeline: Template Reduction and Testing Simulated Signals for Black Hole Detection
This paper introduces significant improvements to the GravAD pipeline, a Python-based system for gravitational wave detection. These advancements include a reduction in waveform templates, implementation of simulated signals, and optimisation techniques. By integrating these advancements, GravAD exhibits increased performance, efficiency, and accuracy in processing gravitational wave data. This leads to more efficient detection and freeing computational resources for further research. This pipeline also applies adaptive termination procedures for resource optimisation, enhancing gravitational wave detection speed and precision. The paper emphasises the importance of robust, efficient tools in gravitational wave data analysis, particularly given the finite nature of computational resources. Acknowledging system limitations such as dependency on the ripple python library capabilities and suggests future enhancements in waveform generation and differentiation.
William E. Doyle
2023-07-21T20:23:17Z
http://arxiv.org/abs/2307.11891v2
Advancements in the GravAD Pipeline: Template Reduction and Testing Simulated Signals for Black Hole Detection ###### Abstract This paper introduces significant improvements to the GravAD pipeline, a Python-based system for gravitational wave detection. These advancements include a reduction in waveform templates, implementation of simulated signals, and optimisation techniques. By integrating these advancements, GravAD exhibits increased performance, efficiency, and accuracy in processing gravitational wave data. This leads to more efficient detection and freeing computational resources for further research. This pipeline also applies adaptive termination procedures for resource optimisation, enhancing gravitational wave detection speed and precision. The paper emphasises the importance of robust, efficient tools in gravitational wave data analysis, particularly given the finite nature of computational resources. Acknowledging system limitations such as dependency on the ripple python library capabilities and suggests future enhancements in waveform generation and differentiation. ## I Introduction Compact binary coalescences (CBCs), astronomical occurrences marked by the merging of two distinct compact objects such as black holes (BHs) or neutron stars (NSs), present unique opportunities to study gravitational waves (GWs) [1]. Since the advent of the Laser Interferometer Gravitational-wave Observatory (LIGO) and the subsequent detection of the first GW signal in 2015 [2], our understanding of these cosmic phenomena has significantly expanded. The resultant waveforms from CBCs, or transient-modelled waveforms, encapsulate the dynamics of these merging systems and their study enables us to probe the nature of gravity itself. Confirming the predictions of General Relativity in the strong-field regime, such as the inspiral, merger, and ringdown phases of compact object mergers, allows us to test the limits of our current understanding and potentially uncover new physics [3]. In our prior research, we introduced GravAD, a Python-based search pipeline for GW detection utilising automatic differentiation (AD) and JAX [4]. GravAD's approach centres around dynamically generating and refining waveform templates, thereby improving their fit to incoming data with each iteration. This method not only enhances the efficiency of the detection process but also significantly reduces the number of templates required for data analysis [5]. This paper aims to further expand on the advancements made to GravAD since our initial publication. We have implemented significant enhancements to the system, driven by two primary motivations. Firstly, the escalating complexity of waveform templates and the increasing sensitivity of LIGO detectors necessitate continuous improvements in data analysis methods for GWs [6; 7]. The growth in detector sensitivity and the rise in GW detections underscore the need for resilient and efficient analytical tools. Secondly, with the emergence of new detectors capable of observing a larger variety of CBC events, our analysis approach must become more comprehensive [8]. To address these motivations, we have integrated simulated signals into our pipeline, pushing GravAD's boundaries and expanding its range of detectable astrophysical sources. We have also further decreased the number of templates needed for data analysis, thus improving the efficiency of the detection process and limiting the lost accuracy from this technique. 
Acknowledging the trade-off between precision and computational resource requirements [9], we have explored alternative optimisation algorithms. For instance, the Adam optimisation algorithm, a method that adjusts the learning rate based on the estimated moments of gradients [10], has been partially incorporated into GravAD, yielding substantial enhancements in performance and efficiency. In this research, our primary objective is to develop our search pipeline by integrating simulated signals and innovative optimisation strategies. This includes the adoption of a callback mechanism - with the objective of early termination - reminiscent of TensorFlow's callback method [11], which effectively halts optimisation processes once specific criteria have been fulfilled. Subsequently, we will discuss the outcomes and critically evaluate the implications of these modifications. ## II Methods The development and improvement of methodologies to detect GWs is a continually evolving field. Our research focuses on enhancing the accuracy and efficiency of the GravAD pipeline, a process that includes three primary stages: generating simulated signals, refining the optimisation strategy, and reducing the number of templates. Each of these stages has the objective of yielding high Signal-to-Noise Ratios (SNRs). The SNR measures how strong the GW signal is against a background of noise. This indicator alerts us to the presence of a detection [12]. Each iteration in GravAD's search aims to refine templates based on gradient information. This works by updating mass parameters fed into the waveform generator. For more information on how GravAD works, see our previous publication [5]. ### Optimisation Strategy Selection In our prior endeavours, we predominantly employed stochastic gradient descent (SGD) and simulated annealing (SA) in our pursuit of detecting GWs. In a bid to further refine our search capabilities, we incorporated the concept of momentum in our gradient computations, resulting in an enhancement in the form of Adaptive Moment Estimation (Adam). The effectiveness of Adam can be attributed to its ability to combine the benefits of two extensions of SGD, specifically Root Mean Square Propagation (RMSProp) and Momentum. RMSProp employs a moving average of squared gradients to normalise the gradient, facilitating faster convergence and eliminating the risk of vanishing learning rates [13]. On the other hand, Momentum takes into account past gradients to smooth out the update. Therefore, Adam effectively mitigates the challenges of high variance in parameter updates, providing smoother convergence to optimal solutions. Despite our expectations, the implemented Adam method did not prove as effective as alternative approaches. Our findings revealed that utilising solely the momentum aspect of Adam led us to locate the optimal template more swiftly than with the comprehensive Adam application. This adjustment, importantly, maintained a higher level of accuracy than we had previously seen with GravAD. #### ii.1.1 Stochastic Gradient Descent Our gradient is calculated using AD, which simplifies things greatly. We take the derivative of the SNR calculation and combine this with a learning rate in order to climb to the top of the peak, essentially performing gradient ascent. 
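As a rough illustration of this step, the following sketch shows how an automatically differentiated SNR can drive a gradient-ascent update in JAX. The quadratic `snr` surrogate, the learning rate, and the starting mass are illustrative placeholders, not the actual GravAD interface.

```python
import jax
import jax.numpy as jnp

def snr(mass1):
    # Placeholder for a differentiable matched-filter SNR. In GravAD this would
    # build a frequency-domain template for `mass1` (via ripple) and correlate it
    # with the whitened strain; a smooth toy surrogate keeps the sketch runnable.
    return 20.0 - 0.01 * jnp.square(mass1 - 36.0)

snr_grad = jax.grad(snr)  # d(SNR)/d(mass1) from automatic differentiation

def ascent_step(mass1, lr=0.5):
    # Move the template mass uphill in SNR (gradient ascent).
    return mass1 + lr * snr_grad(mass1)

mass1 = 25.0
for _ in range(3):
    mass1 = float(ascent_step(mass1))
```

Iterating updates of this form is what allows a handful of dynamically refined templates to stand in for a fixed template bank.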
We can therefore create an updated mass parameter \(\theta_{i}\): \[\theta_{i}=\alpha\cdot g_{i} \tag{1}\] where \(i\) corresponds to the index of the parameter (in this case \(i=0\), corresponding to the first gradient value), and with \(\alpha\) as the learning rate and \(g_{i}\) as the current gradient. #### ii.1.2 Simulated Annealing Simulated annealing complements SGD by facilitating its escape from local maxima, enhancing the optimisation process. By enabling hill-climbing moves, which may temporarily worsen the objective function value, SA offers a mechanism to explore alternative solutions in pursuit of a global optimum [14]. In GravAD we use this mechanism in the form of a perturbation. The perturbations are generated with a normal distribution and then scaled by the temperature. If \(N(0,1)\) denotes a standard normal distribution (mean 0, variance 1), then the perturbations can be represented as: \[P_{i}=T_{i}\cdot N(0,1) \tag{2}\] with \(T_{i}\) as the temperature. The updated parameters are then calculated by adding the product of the learning rate and gradient (from SGD) to the perturbation (from SA). Given \(\theta_{i}\) as the current mass (update parameter), \(\alpha\) as the learning rate, and \(g_{i}\) as the current gradient, the new mass parameter can be calculated as: \[\theta_{i}=\theta_{i-1}+(\alpha\cdot g_{i})+P_{i} \tag{3}\] We commence with an initial temperature value of 1, which subsequently diminishes with each iteration due to the annealing rate of 0.99. The temperature parameter is updated by a straightforward multiplication of the current temperature with the annealing rate: \[T_{i+1}=T_{i}\cdot\gamma \tag{4}\] Where \(T_{i+1}\)is the new temperature and \(\gamma\) is the annealing rate. #### ii.1.3 Adam Adam's optimisation process can be expressed through the following equations, as outlined in[10]: \[m_{i}=\beta_{1}\cdot m_{i-1}+(1-\beta_{1})\cdot g_{i} \tag{5}\] where \(m_{i}\) is the moving averages of the gradient, \(g_{i}\) is the current gradient, and \(\beta_{1}\) is the decay rate set to 0.9. We then calculate the gradient squared: \[v_{i}=\beta_{2}\cdot v_{i-1}+(1-\beta_{2})\cdot g_{i}^{2} \tag{6}\] where \(\beta_{2}\) is the decay rate set to 0.999. From here we can compute the bias-corrected estimates. Firstly: \[\hat{m_{i}}=\frac{m_{i}}{1-\beta_{1}^{i}} \tag{7}\] then, \[\hat{v_{i}}=\frac{v_{i}}{1-\beta_{2}^{i}} \tag{8}\] resulting in our updated parameter: \[\theta_{i}=\theta_{i-1}-\alpha\cdot\frac{\hat{m_{i}}}{\sqrt{\hat{v_{i}}}+\epsilon} \tag{9}\] with \(\alpha\) as the learning rate, and \(\epsilon\) is a small constant to avoid division by zero. ### Applying the Optimisations When we combine the optimisations of SGD, SA, and Adam we get: \[\theta_{i}=\theta_{i-1}-\alpha\cdot\frac{\hat{m_{i}}}{\sqrt{\hat{v_{i}}}+ \epsilon}+P_{i}. \tag{10}\] In practice, however, this method falls short in performance when compared to the likes of SGD and SA. Prompted by this discovery, we ventured into an alternative approach, using only a segment of the Adam optimiser: the momentum. We apply equation 5 in a straightforward manner. Subsequently, the updated parameter is determined by the sum of the product of the learning rate and the updated momentum (from the preceding step), and the perturbation (derived from SA). 
Given \(\theta_{i}\) as the current parameter (in this context, mass1), \(\alpha\) as the learning rate, and \(P_{i}\) as the current perturbation, the new parameter is computed as follows: \[\theta_{i}=\theta_{i-1}+(\alpha\cdot m_{i})+P_{i} \tag{11}\] The aforementioned equations illustrate the parameter update rules when utilising SGD with momentum, combined with SA. These rules demonstrate how the gradient information, momentum, and perturbations guide the search for optimal solutions within the parameter space. The subsequent inclusion of momentum serves to further fine-tune this process. By factoring in the momentum of the gradient, GravAD facilitates swifter convergence and a more nuanced exploration of the parameter space. Consequently, the combination of these optimisation strategies not only accelerates the attainment of solutions but also boosts the likelihood of these solutions being near-optimal. This, in turn, amplifies our capabilities in GW detection. ### Template Reduction Technique GravAD implements an adaptive termination procedure to fine-tune its exploration of the parameter space, which is analogous to the callback function found in machine learning frameworks such as TensorFlow. This function is engineered to pinpoint suitable peak values in the SNR landscape, and prematurely terminate the search upon their discovery. The mechanism functions by identifying occasions when the SNR surpasses a previously recorded peak. Upon identification, the algorithm records the new peak SNR and its corresponding iteration index. If, for instance, 5 iterations pass without topping the previously recorded peak SNR, the system makes a strategic manoeuvre to conclude the search prematurely. The capability to adaptively terminate manifests as an efficient methodology for navigating the template space. This results in a substantial decrease in the number of templates required for the search. Consequently, it improves GravAD's proficiency to detect GWs rapidly. For subsequent experiments, we selected a cutoff value of 2. This indicates that if the SNR does not improve within two additional iterations, the algorithm will cease the search. This value was chosen because it doesn't significantly degrade the SNR and produces results comparable to a cutoff value of 25. It also leads to a low average number of iterations. We arrived at these values and averages by using events: GW150914, GW151012, GW151226, GW170104, GW170729, GW170809, GW170814, GW170818, GW170823. The outcomes of these tests are visualised in Figure 1. Figure 1: Trends of average SNRs and average iterations for the ’sgd_sa_p’ optimiser, as influenced by different cutoff values. ### Simulated Signals Generation Simulated signals, defined as synthetic data crafted to mimic real GWs, prove instrumental in testing the effectiveness of the GravAD pipeline. They provide valuable insights into the accuracy of the algorithm by facilitating comparisons between known parameters and those estimated. The python library, ripple[6], is utilised to generate these simulated signals according to predetermined parameters, amid the interference of noise. This inclusion of noise is a methodological decision intended to assess the efficacy of GravAD when applied to realistic signals. Each simulated signal is created with a pair of masses. Masses from 20 to 100 (with a step of 10) are used for both the primary and secondary bodies involved in the simulated gravitational event. 
This consequently results in the generation of signals that correspond to the coalescence of two objects, one of which, for example, could possess a mass of 20 solar masses, while the other could potentially be as massive as 80 solar masses. For each pair of masses, the frequency domain waveform of the GW signal is generated using the gen_waveform function from the GravAD library. This function takes as input the masses of the two bodies, a frequency series determined by a delta frequency/step size, and a set of parameters describing the spins, distance, and phase; however, these are all set to the same value, as they have minimal impact on the search. Once the waveform is generated, it is then transformed into a noisy signal in the frequency domain. This is done by adding a noise profile obtained from the Power Spectral Density (PSD) derived from the event 'GW150914' detected by the LIGO Hanford detector (H1). The noisy waveform \(h_{noisy}(f)\) is given by: \[h_{noisy}(f)=h(f)+N(f) \tag{12}\] where \(h(f)\) is the generated waveform and \(N(f)\) is the noise profile from the PSD. This approach to generating and storing a wide range of simulated signals with varying mass parameters allows for robust testing of the GravAD pipeline under different signal scenarios. ## III Results ### Effectiveness of the Optimisations Upon comparing the performance of various optimisation techniques (as depicted in Figure 2), we observe that standalone SGD can be somewhat slow yet effective, resulting in a high SNR. The incorporation of SA navigates the search away from local maxima, guiding it towards regions within the parameter space that are more likely to generate superior solutions. While this approach diminishes the average iterations per run, it also adversely affects the average SNR. The most effective strategy, on the whole, appears to be the combination of SGD, SA, and momentum (P), due to its high average SNR and reasonable average iterations. We can also see the ineffectiveness of Adam in our search with and without SA involvement. Figure 2: A comparison between different optimisers used on GravAD. ### Significant Reduction in Template Usage Our refined methodology yielded a considerable reduction in the number of templates necessary for performing a search. Traditionally, an estimated \(N\sim 500,000\) templates are utilised for such a task [15]. Our prior research, however, was able to cut down this number to a mere \(N\sim 180\) templates. The current implementation advances this further, demanding on average only \(N\sim 8.67\) templates per search, which signifies a remarkable efficiency gain, reducing the requirement by approximately \(N\sim 60,000\) times. ### Achieved SNRs Upon reviewing the updates made to GravAD in this latest iteration, we observe (as depicted in Figure 3) an average \(8\%\) decrease in the magnitude of the SNR. This compromise in detection capabilities is directly linked to the minimal number of templates employed in the search. Nonetheless, the architecture of the GravAD code provides a significant advantage. It allows researchers to harness its inherent flexibility, notably by modifying the early termination process to improve SNR values, albeit at the expense of employing more templates. This adaptability empowers researchers to strike a balance that best serves their specific research needs. ### Performance on Simulated Signals Analysis of the generated signals revealed some variations in the predicted and actual mass parameters. 
For example, the signal simulated with mass1 and mass2 of 90_20 indicated a noticeable overestimation of mass1 while mass2 was underestimated. Despite this discrepancy, the total mass maintained a marginal deviation from the expected values, underscoring the balance between the mass estimations. The precision exhibited across the simulated signals was remarkable, achieving near-perfect alignment within 0.3% of the forecasted values in almost all cases. This achievement validates GravAD's efficacy in accurately processing a diverse range of mass ratios. Notably, the signal with the 50_40 mass pair presented an anomaly, achieving only 35% of the expected value. Despite this outlier, the overall performance reinforces our confidence in GravAD's ability to process real GW signals effectively. Figure 4 illustrates GravAD's competence in processing an array of mass parameters, including high and low mass ratios, and simulated data sets. This proficiency underpins GravAD's robustness and wide-ranging applicability. ## IV Discussion The discussion section aims to provide a comprehensive analysis of the advancements made to the GravAD system, focusing on its rapid search functionality and accuracy in detecting GWs. By combining various techniques, including the integration of simulated signals and optimisation strategies, the pipeline demonstrates its effectiveness in efficiently processing GW data. One significant aspect to consider is the importance of efficient methods in freeing computational resources for other research. As the field of GW detection continues to evolve and the number of detections increases, computational hardware becomes a valuable and limited resource. The GravAD system addresses this challenge by implementing optimisation techniques that reduce the number of templates required in the search, allowing computational resources to be allocated to other research areas. However, it is crucial to acknowledge the limitations of GravAD. The system is limited by the capabilities of the ripple software. This limitation implies that the effectiveness of our algorithm is dependent on the capabilities and advancements of the software it relies on. Therefore, future improvements in waveform generation and differentiation will play a crucial role in enhancing the effectiveness of the GravAD pipeline. By integrating simulated signals and optimisation strategies, the algorithm is more effective at processing GW data. The optimisation techniques, such as the use of a callback method, steer the search away from the unneeded exploration of the parameter space, improving the efficiency of the search process. Moreover, the GravAD system's ability to accurately process a diverse range of mass ratios, as demonstrated in the simulated signals, reinforces its credibility. ## V Conclusion This study underscores the notable advancements in GravAD's functionality and efficiency in detecting gravitational waves. Leveraging a multitude of techniques, the system has improved its capability to process simulated signals, thereby enhancing the accuracy of gravitational wave detection. Despite minor discrepancies in individual mass predictions, GravAD adeptly preserved total mass values, showcasing its practicality across diverse mass ratios. Our research also highlights the vital role of optimisation strategies in augmenting the efficacy and speed of GravAD's search process. The blend of SGD, SA, and momentum has proven effective, offering a balance between high average SNR and reasonable average iterations of \(N\sim 8.67\) for each search. 
Figure 3: A comparison between the latest development of GravAD (\(N\sim 8.67\) templates) and its predecessor (\(N\sim 180\) templates); the primary differences are due to the early-termination template reduction technique as well as the momentum optimiser. The mean SNR with bounds is taken from the GWTC [16]. A pivotal achievement lies in GravAD's substantial reduction in the number of templates required for search processes, improving computational efficiency remarkably without a substantial reduction in result quality. This major stride forward paves the way for the allocation of resources to other vital research areas, proving instrumental in the continual expansion of gravitational wave detection. The success of GravAD remains tethered to the progression of the ripple library. Future enhancements in the generation and differentiation of waveforms could further boost GravAD's effectiveness. Therefore, the continual evolution of our system necessitates parallel advancements in the underlying technology.
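To make the Methods-section recipe concrete, here is a minimal, self-contained sketch of the search loop that combines the momentum accumulator (Eq. 5), the temperature-scaled perturbation and annealing schedule (Eqs. 2 and 4), the update rule (Eq. 11), and the patience-of-two adaptive termination. The SNR surrogate, the finite-difference gradient, and every name below are illustrative stand-ins rather than the GravAD implementation, which differentiates the SNR with JAX.

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(mass1):
    # Smooth stand-in for the matched-filter SNR of a template with primary
    # mass `mass1`; GravAD would compute this from a ripple waveform and the
    # whitened detector strain instead.
    return 20.0 - 0.01 * (mass1 - 36.0) ** 2

def snr_gradient(mass1, eps=1e-3):
    # Central finite difference; GravAD obtains this gradient via JAX autodiff.
    return (snr(mass1 + eps) - snr(mass1 - eps)) / (2.0 * eps)

def search(mass1=25.0, lr=0.5, beta1=0.9, temp=1.0, anneal=0.99,
           patience=2, max_iters=200):
    m = 0.0                                   # momentum accumulator, Eq. (5)
    best_snr, best_iter, templates = -np.inf, 0, 0
    for i in range(1, max_iters + 1):
        templates += 1                        # one template evaluated per iteration
        current = snr(mass1)
        if current > best_snr:
            best_snr, best_iter = current, i  # record the new peak
        elif i - best_iter >= patience:
            break                             # adaptive early termination (cutoff = 2)
        g = snr_gradient(mass1)
        m = beta1 * m + (1.0 - beta1) * g              # Eq. (5): running momentum
        perturbation = temp * rng.standard_normal()    # Eq. (2): SA perturbation
        mass1 = mass1 + lr * m + perturbation          # Eq. (11): parameter update
        temp *= anneal                                 # Eq. (4): annealing schedule
    return mass1, best_snr, templates

best_mass, peak_snr, n_templates = search()
```

Each pass through the loop corresponds to one dynamically generated template, which is why the average template count per search can stay in the single digits.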
2306.17111
Equal Pay for Similar Work
Equal pay laws increasingly require that workers doing "similar" work are paid equal wages within firm. We study such "equal pay for similar work" (EPSW) policies theoretically and test our model's predictions empirically using evidence from a 2009 Chilean EPSW. When EPSW only binds across protected class (e.g., no woman can be paid less than any similar man, and vice versa), firms segregate their workforce by gender. When there are more men than women in a labor market, EPSW increases the gender wage gap. By contrast, EPSW that is not based on protected class can decrease the gender wage gap.
Diego Gentile Passaro, Fuhito Kojima, Bobak Pakzad-Hurson
2023-06-29T17:14:47Z
http://arxiv.org/abs/2306.17111v1
# Equal Pay for _Similar_ Work+ ###### Abstract Equal pay laws increasingly require that workers doing "similar" work are paid equal wages within firm. We study such "equal pay for similar work" (EPSW) policies theoretically and test our model's predictions empirically using evidence from a 2009 Chilean EPSW. When EPSW only binds across protected class (e.g., no woman can be paid less than any similar man, and vice versa), firms segregate their workforce by gender. When there are more men than women in a labor market, EPSW increases the gender wage gap. By contrast, EPSW that is not based on protected class can decrease the gender wage gap. ## 1 Introduction "No employee with status within one or more protected class or classes shall be paid a wage at a rate less than the rate at which an employee without status within the same protected class or classes in the same establishment is paid for... **similar work** [emphasis added]" -New York Labor Code, Section 194 Firms have some degree of wage-setting power in many labor markets (see, e.g., Card, 2022; Manning, 2005). Because of this, they may pay workers different relative salaries in ways that are repugnant to society at large. In particular, wage gaps between groups of workers, often men and women, are frequent rallying points for governmental action. A popular form of legislation seeks to prohibit firms from paying disparate wages to different workers, guided by the principle of "equal pay for equal work" (EPEW). In the United States, 49 states had EPEW laws in effect in 2015, requiring each firm to pay equal wages to all of its workers doing equal work. However, EPEW may be difficult to enforce; "equal" pay is straightforward to define, but it is likely that no two workers are exactly "equal" within a firm. Firms can avoid the intent of these laws by pointing out differences between workers or making other maneuvers such as job title proliferation to marginally heterogenize their workforce (Baron and Bielby, 1986; Goldin, 1990).1 To combat this enforceability issue, many EPEW laws have been updated to include a measure of coarseness-they require a firm to set "equal pay for similar work" (EPSW).2 California was the first state in the US that moved from EPEW to EPSW in 2015, and as of January 2023, more of the US workforce is under the jurisdiction of a state EPSW law than a state EPEW law.3 The equal pay provision in EPSW is frequently group based in that it binds only across groups of workers, prohibiting for example, that a man is paid more than a "similar" woman (and vice versa). Footnote 1: For example, a manufacturer told the _Washington Post_ (1964) that his firm would “downgrade some job classifications for women and reassign higher-level, higher-paying duties to men” in response to EPEW. We are grateful to Martha Bailey for bringing this article to our attention. Footnote 2: See Guppy and Vincent (2021) for a discussion of the transition in Canadian law—and the differences in allowable pay discrepancies—from EPEW to EPSW. Footnote 3: The percent of the US workforce that is under the jurisdiction of EPSW and EPEW, respectively, are 45.9 and 45.6, respectively. 
These figures are calculated from 1) finding all states covered by each of these policies (see [https://www.dol.gov/agencies/wb/equal-pay-protections](https://www.dol.gov/agencies/wb/equal-pay-protections)), and 2) the share of the US workforce employed in each state (see [https://www.statista.com/statistics/223669/state-employment-in-the-us/](https://www.statista.com/statistics/223669/state-employment-in-the-us/)). Despite the rapid growth of EPSW laws, little is known about their effects on labor market outcomes. Since EPSW is more constraining on firms than EPEW, EPSW may lead to a larger direct effect on wages. But how will firms adjust their employment policies to adapt in equilibrium? How will potential employment changes affect the goal of ensuring fairer pay? We theoretically and empirically study the labor market effects of EPSW. Our findings suggest that the equilibrium effects of group-based EPSW overwhelm the direct effects, leading to increased occupational segregation and a shift in the wage gap in favor of the majority group of workers in a labor market. Therefore, these policies may counterintuitively exacerbate the problem they were intended to solve. We show that modifying EPSW to remove protected classes may have more positive effects. We develop a theoretical framework to elucidate key economic forces at play. We begin by describing the simplest version of our model to derive key intuitions, and later extend our results to more general settings. In the basic model, there exist two homogeneous firms competing for a continuum of heterogeneous workers who all perform "similar" work in the eyes of the law. Each worker belongs to one of two groups, \(A\) or \(B\) (e.g., men or women), and is endowed with a "productivity" drawn from a distribution that potentially differs by group identity. Each firm has a constant-returns-to-scale production function and faces a wage monotonicity constraint to limit worker incentives to shift: wages paid must be non-decreasing in productivity within group. Several remarks are in order regarding our modeling decisions. First, we are agnostic about the underlying fundamentals of worker productivity, and therefore the differences in distributions between groups; we interpret productivity as a measure of firm willingness to pay for an individual worker net of any unmodeled, discriminatory factors. For example, in consumer-facing industries where consumers have discriminatory preferences in favor of a particular group of workers (Bar and Zussman, 2017; Holzer and Ilhanfeldt, 1998; Kelley et al., 2023; Kline et al., 2022), we may expect that group's productivity distribution to first-order stochastically dominate the other group's distribution. Similarly, taste-based bias by (managers of) firms can be incorporated. Second, even though our model assumes complete information in which each worker's productivity is common knowledge, a worker's productivity can instead be interpreted as the expected productivity at the time the hiring decision is made. Therefore, we can similarly include statistical discrimination into our framework. Third, our model assumes that all workers perform "similar" work in the eyes of the law.4 Thus, our theoretical predictions should be viewed as applying within "job" in a particular labor market, and should not be used to predict differential effects on, for example, custodians and lawyers working within the same firm. 
Our model analysis reveals important effects of EPSW on worker sorting across firms. Without EPSW, each worker can be hired by either firm in equilibrium, regardless of group identity or productivity. Similarly to the classic Bertrand model, firms compete fiercely for each worker, and as a result, the average gap in pay across groups \(A\) and \(B\) is equal to the difference in average productivity between the groups. Thus, any discriminatory factors affecting firms' willingness to pay are exactly reflected in the wage gap. This result is obtained in the presence of rigidities imposed by the wage monotonicity constraint, suggesting that the "Bertrand" prediction may be more likely in unregulated labor markets than previously considered. With EPSW, we show that firms segregate the workforce, with one firm hiring all \(A-\)group workers and the other hiring all \(B-\)group workers. To understand why this is the case, note that because each firm hires from only one group of workers, no firm is exposed to the constraint of equal pay in equilibrium. By contrast, EPSW makes poaching workers from its competitor costly: EPSW requires equal pay to any two workers from different groups and, by transitivity, this implies that equal wages must be paid to _all_ workers it hires. Thus, EPSW serves as the enforcement mechanism for segregation, similarly to location choices in Hotelling's competition model. The aforementioned analogy to the Bertrand and Hotelling models helps explain why EPSW leads to workforce segregation, but it leaves unanswered the question of the effect of EPSW on the wage gap. To address this question, we develop new analytical tools. First, we characterize the set of equilibrium outcomes under EPSW, which reveals novel economic forces and provides implications for policy. We show that (in any outcome involving worker segregation) three conditions on the wage schedules of the firms hold if and only if the wages are supported in an equilibrium outcome: (i) individual rationality--no firm pays a worker more than her productivity, (ii) equal profit across the firms, and (iii) a novel no desegregation condition--it is not profitable for a firm to pay any common wage and hire workers from both groups, possibly by poaching from the other firm. Second, given a wage function of a firm, say wage \(w_{2}(\cdot)\) of firm 2, we define a wage function of the other firm, say function \(\hat{w}_{1}(\cdot)\), that is lowest among all those that make it unprofitable to desegregate, i.e., \(\hat{w}_{1}(\cdot)\) is the cheapest wage function that hires all \(A-\)group workers and satisfies the no desegregation condition. We establish that an equilibrium outcome with \(w_{2}(\cdot)\) exists if and only if the profit of firm 1 under \(\hat{w}_{1}(\cdot)\) is no smaller than the profit of firm 2 under \(w_{2}(\cdot)\). This result facilitates our analysis by turning the hard problem of finding an equilibrium outcome into the simpler problem of evaluating an inequality involving firm profit under these two wage functions. We use these tools to show that EPSW moves the wage gap in favor of the majority group of workers. Specifically, there exists a continuum of equilibria under EPSW: in one equilibrium, the wage gap is equal to that in the "Bertrand" outcome without EPSW, but in all other equilibria, the wage gap is strictly larger (i.e. more in favor of the majority group). 
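To preview the arithmetic behind this claim, consider a segregated equilibrium in which one firm hires the mass \(\beta\geq 1\) of \(A-\)group workers at average wage \(\bar{w}_{A}\) and the other hires the unit mass of \(B-\)group workers at average wage \(\bar{w}_{B}\); the notation \(\bar{w}_{A},\bar{w}_{B}\) and the common profit level \(\Pi\geq 0\) are ours, while \(\mathbb{E}_{A}(v),\mathbb{E}_{B}(v)\) denote average productivities as defined in Section 2. The equal profit condition then reads \[\beta\,\big(\mathbb{E}_{A}(v)-\bar{w}_{A}\big)=\mathbb{E}_{B}(v)-\bar{w}_{B}=\Pi,\qquad\text{so}\qquad\bar{w}_{A}-\bar{w}_{B}=\mathbb{E}_{A}(v)-\mathbb{E}_{B}(v)+\Pi\left(1-\frac{1}{\beta}\right).\] Since \(\Pi\geq 0\) and \(\beta\geq 1\), the average wage gap in favor of the majority group is at least the productivity gap \(\mathbb{E}_{A}(v)-\mathbb{E}_{B}(v)\) that obtains without EPSW, is strictly larger whenever firms earn positive profit and \(\beta>1\), and grows with \(\Pi\). This heuristic calculation previews, but does not replace, the formal analysis described next.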
This result follows from the equal profit condition between firms that must be satisfied in equilibrium. More specifically, if there are more \(A-\)group workers than \(B-\)group workers, the firm that hires these workers must receive smaller average profit from each worker than the other firm receives from the average \(B-\)group worker, so \(A-\)group workers' average wage is relatively higher. Notably, the wage gap widens under EPSW simply because the majority group has a larger population and, in particular, this conclusion holds regardless of the distributions of productivities of the two groups. Moreover, we also show that firm profit and the magnitude of increase in the wage gap co-move, implying that firms would benefit from selecting equilibria with larger wage gaps. One might claim that our model stacks the deck against EPSW because the outcome without EPSW is already "fair" in the sense that workers from both groups are paid their productivity. We disagree. Recall that our model does not make strong assumptions on discriminatory factors, so "productivity" could incorporate firms' bias. Given this observation, our model does _not_ take a stance on whether or not the outcome without EPSW is fair. What we do show, by contrast, is that EPSW is relatively more advantageous to the majority group. And this conclusion is robust in that it does not depend on whether the outcome without EPSW is fair or not. Of course, if the majority group is favored without EPSW for discriminatory reasons, then our result implies that EPSW makes the labor market even less fair. The effects of EPSW on the wage gap may be particularly bleak in some markets: with sufficient imbalance in the size of the groups, _any_ individually rational wage schedule for workers in the minority group (including zero wages for all minority group workers) can be supported in equilibrium, while arbitrarily high wages _must_ simultaneously obtain for workers in the majority group. This result casts doubt on folk wisdom that equal pay policies help minority workers enter fields dominated by majority workers.5 While our model holds fixed relative group sizes and does not formally consider dynamic changes in responses to equilibrium conditions, Le Chatelier's Principle suggests that EPSW may cause or exacerbate "occupational tipping" patterns in which, for example, women select away from an industry if it becomes too male dominated (Pan, 2015). Footnote 5: For example, the Obama administration claimed in 2013 that The Equal Pay Act of 1963 (an EPEW) “had a profound effect on job opportunities and earnings for women... and laid the foundation for the movement of women into the paid labor force at unprecedented levels... Since passage of the Equal Pay Act,... [w]omen have integrated many previously exclusively male job fields.” See [https://obamahitehouse.archives.gov/sites/default/files/equalpay/equal_pay_task_force_progress_report_june_2013_new.pdf](https://obamahitehouse.archives.gov/sites/default/files/equalpay/equal_pay_task_force_progress_report_june_2013_new.pdf). We test our model by studying the effect of the enactment of EPSW in Chile in 2009. This EPSW was the first equal pay law in Chile, and it constrained how a firm could pay its workers across gender: no firm is permitted to pay a woman less than it pays a man for similar work, and vice versa. The law subjects firms in violation to substantial fines, and through a public-records request, we find direct evidence of policy enforcement. 
Importantly, EPSW binds only for firms above a particular size threshold. This allows for a straightforward event-study (difference-in-differences) design to estimate the causal effects of EPSW, wherein we compare differences in outcomes of firms above ("treated") and below ("control") the threshold following EPSW. Following Bennedsen et al. (2022); Boheim and Gust (2021); Duchini et al. (2022); Gulyas et al. (2023), we restrict our sample to firms just above and below this size threshold to limit size-based wage dynamics. That is, our identifying assumption is that parallel trends hold for similarly-sized firms. Using matched employer-employee data from 2005-2013 we identify the following effects of EPSW consistent with our model predictions. First, EPSW increases gender segregation across firms. The share of firms with workers of only one gender increases by 4.6 percentage points, off a baseline of 31.2% of gender-segregated firms prior to EPSW enactment. While our theoretical results predict full gender segregation across all firms, there are clearly unmodeled factors that prevent such a stark empirical prediction. We show that EPSW leads to a "missing mass" of firms that are nearly-but-not-fully gender segregated, and moreover, that the share of "missing," nearly segregated firms is of a similar magnitude as the increased share of fully segregated firms. These findings suggest that firms on the margin of full segregation (e.g. those which can become fully segregated by firing the last worker of the "wrong" gender) are those most likely to fully segregate after EPSW. Second, we show that EPSW moves the gender wage gap in favor of the majority group of workers in a local labor market. The average within-firm gender wage gap was 35.8% prior to EPSW. By including worker-firm fixed effects in our specifications, we are able to compare the impacts of EPSW on the same worker at the same firm. In local labor markets--defined by firm industry and county--where a majority of workers are men, we find that EPSW _increases_ (in favor of men) the gender wage gap by 3.8 percentage points, while in local labor markets where a majority of workers are women, we find that EPSW _decreases_ (in favor of women) the gender wage gap by 5.2 percentage points. These findings exactly match our prediction that EPSW benefits the majority group of workers in a labor market. Because men in Chile dominate the overall labor market (5/6 of all workers are employed in majority male local labor markets), the overall effect of EPSW is to increase the gender wage gap (in favor of men) by 2.6 percentage points. The results thus far paint a bleak picture of the impacts of EPSW. But, we theoretically analyze a simple design choice--removing protected classes--and show how this change can reverse the unintended equilibrium consequences of EPSW. On its face, requiring equal pay across classes makes a measure of sense; due to the "coarseness" of EPSW, if a policy maker is concerned only about gender-based discrimination by firms, it may seem reasonable to allow firms to pay different wages to different men (who are presumably heterogeneous). As we have shown, however, this asymmetry allows firms to segregate the workforce by gender to avoid the implied requirement that all similar workers are paid equally. By restricting that all similar workers be paid equal wages, regardless of group identity, such segregation is no longer an equilibrium feature. 
Instead, we show that under a non-group-based EPSW, firms segregate the workforce by productivity, with one firm hiring the most productive workers in the market and the other firm hiring less productive workers. We show that such a policy can decrease the gender wage gap in the market and reduce within-firm wage inequality.6 However, we caution that such non-group-based EPSW can reduce overall employment. Footnote 6: We note that complementary policies can be added to group-based EPSW to ensure the same equilibrium outcome as non-group-based EPSW. The key is that firms must be disincentivized from segregating occupations within job. For example, group quotas (Bertrand et al., 2019) or the proliferation of a sufficiently large number of protected classes prevent group-based segregation outcomes. ### Related Literature While we are the first, we believe, to analyze the novel equilibrium effects of EPSW, there are rich theoretical and empirical literatures related to EPEW. Theoretical studies of EPEW have typically focused on its unintended effects. This focus can be traced back to Milton Friedman who once famously said, "What you are doing, not intentionally, but by misunderstanding, when you try to get equal pay for equal work law... is reducing to zero the cost imposed on people who are discriminating for irrelevant reasons."7 More recent work studies EPEW in Salop's classic location model; the first such paper is Bhaskar et al. (2002) and is succeeded by Berson (2016); Kaas (2009); Lagerlof (2020); Lanning (2014). These papers must contend with the very motivation that led to EPSW: what is"equal work? Doing so results in at least two difficulties. First, the authors interpret "equal work" literally and assume workers are equally productive, while in reality there may be very few workers whose productivities are exactly equal. Second, their analyses predict that EPEW can either increase or decrease differences in outcomes across groups of workers, often within the same paper. The lack of clear policy-relevant predictions is reflected in the empirical literature on EPEW, which we discuss shortly. By contrast, we find that EPSW has clear, if unintended, effects: our theoretical analysis unambiguously predicts both job segregation and widening wage gaps, and our empirical analysis of Chilean data confirms both predictions. Footnote 7: See [https://www.aei.org/carpe-diem/milton-friedman-makes-the-case-against-equal-pay-for-equ](https://www.aei.org/carpe-diem/milton-friedman-makes-the-case-against-equal-pay-for-equ) al-work-laws/. The empirical literature investigating equal-labor-rights legislation primarily considers US policies in the 1960s and 1970s. As with the theoretical literature we detail above, this empirical literature draws mixed conclusions about whether such legislation improves the employment rate or wages of protected classes of workers (see Altonji and Blank (1999); Bailey et al. (2022); Blau and Kahn (1992); Donohue III and Heckman (1991); Hyland et al. (2020); Neumark and Stock (2006) for detailed discussions).8 Our paper adds to the equal-labor-rights literature in several ways. First, we solely observe the impact of an equal pay law. One difficulty in much of the literature is assessing the impacts of individual policies, as many related labor policies are often enacted in quick succession.9 Donohue III and Heckman (1991) argue that it is difficult to attribute observed effects to any one of the contemporaneous policies, as there may be complementarities between them. 
Our empirical setting of Chile is notable as no existing equal pay laws were on the books at the time EPSW was enacted in 2009, and no significant related policies were enacted in quick succession. Footnote 8: The findings of Bailey et al. (2022) suggest that the conclusions from this literature may be sensitive to the econometric methods used. Footnote 9: The Equal Pay Act of 1963 requires equal pay for men and women for equal work, while Title VII of the Civil Rights Act of 1964 prohibits discrimination in hiring, layoffs, and promotions. There were also other federal equal pay policies—Executive order 11246 in 1965 banning discrimination in hiring by federal contractors against minority candidates, and an extension to include women in 1967; the Equal Employment Opportunity Act in 1972 to increase enforcement; and many individual state policies. Methodologically, our paper is more related to the theoretical literature on "best-price" guarantees, which commit firms to rebating past consumers if prices fall in the future. These policies have the direct effect of equalizing payments across heterogeneous buyers, but have the unintended equilibrium effect of raising firm market power (Butz, 1990; Cooper and Fries, 1991; Scott Morton, 1997a,b). In our paper, EPSW plays the role of a best-wage guarantee, but it binds only off the equilibrium path where firms fail to segregate. Nevertheless, this off-path restriction is key in driving the unintended wage effects of EPSW; as a result, firms in our model have an ex-ante identical willingness to pay for each particular worker, but the costs of hiring workers of the "wrong" type are differentiated in equilibrium. This force is similar to "artificial" costs that heterogenize ex-ante identical products in consumer markets, which can lead to local market power for firms (Klemperer, 1987). Therefore, a key force in our model is firms' equilibrium behavior to segregate their workforce by group identity. Indeed, we show empirically that the Chilean EPSW leads to an increase in gender segregation across firms. One may suspect that such segregation is less likely to occur in other localities that enact EPSW.10 Speaking to this point, however, group-based segregation across firms has been noted in the US (Blau, 1977; Goldin, 1990; Hellerstein and Neumark, 2008; Neumark et al., 1996), and recent research (Ferguson and Koning, 2018) argues that this segregation has increased over time. Therefore, it seems plausible that EPSW could further affect segregation in a wide variety of labor markets. Footnote 10: For example, recent research shows that gender-based occupational segregation may be especially likely when the local language has gendered nouns, as firms can target their hiring to workers of specific genders (Card et al., 2021; Kuhn and Shen, 2022; Kuhn et al., 2020). This may explain the high baseline level of gender segregation in Chile, where Spanish is the official language. Gendered nouns and targeted hiring may also play a role in the ability of Chilean firms to further segregate once EPSW is enacted. ## 2 Model There are two firms, 1 and 2, and a continuum of workers. Each worker is endowed with a type \(e=(g,v)\in\{A,B\}\times[0,1]\), where \(g\in\{A,B\}\) is the worker's group identity (say, men and women) and \(v\in[0,1]\) is the worker's productivity. There is a \(\beta\geq 1\) measure of \(A-\)group workers and 1 measure of \(B-\)group workers. 
\(F_{A}\) and \(F_{B}\) are cumulative distribution functions governing the productivities of workers in groups \(A\) and \(B\), respectively. \(F_{A}\) and \(F_{B}\) are absolutely continuous and thus admit density functions \(f_{A}\) and \(f_{B}\), respectively. We assume that \(0<\underline{f}_{A}\leq\bar{f}_{A}<+\infty\) and \(0<\underline{f}_{B}\leq\bar{f}_{B}<+\infty\), where \(\underline{f}_{A}=\inf\{f_{A}(v)|v\in[0,1]\}\), \(\bar{f}_{A}=\sup\{f_{A}(v)|v\in[0,1]\}\), \(\underline{f}_{B}=\inf\{f_{B}(v)|v\in[0,1]\}\), and \(\bar{f}_{B}=\sup\{f_{B}(v)|v\in[0,1]\}\). A _(labor) market_ is a tuple (\(\beta\),\(F_{A}\),\(F_{B}\)). Note that the distribution of \(A-\)group workers may be different from that of \(B-\)group workers, allowing us to model situations in which firms discriminate against one of the groups of workers (i.e. the outputs of \(B-\)group workers are drawn from the same distribution as those of \(A-\)group workers, but the firms' willingness to pay for them is lower because firms have a taste-based preference for \(A-\)group workers). For each \(g\!\in\!\{A\),\(B\}\) we define \(\mathbb{E}_{g}(v)\!:=\!\int_{0}^{1}\!vf_{g}(v)dv\).

For expositional ease, we study this environment via a cooperative game (all of our model predictions are unchanged if we instead consider a non-cooperative game; see Remark 4). Informally, we consider the following situation: An outcome specifies, for each worker, the firm she works for (or the outside option of staying unemployed) and the wage received (if employed). Each worker only cares about her wage and works for whichever firm offers her a higher wage (in case both firms offer the same wage to her, she may work for either of the firms), or stays unemployed if no firm makes a job offer (in case all wage offers made to the worker are zero, she may be employed or stay unemployed). A firm generates per-unit profit \(v\!-\!w\) if it hires a worker of productivity \(v\) and pays her wage \(w\). The firm does not have any capacity constraint (i.e., the firm can hire any measure of workers), and its payoff is the integral of the profits generated by the workers it employs.

Formally, an outcome for firm \(i\) is \(O_{i}\!:=\!\{(f_{g}^{i}(v)\),\(w_{i}^{g}(v))\}_{v\in[0,1],g=A,B}\), where:

1. \(f_{g}^{i}(v)\!\in\![0\),\(f_{g}(v)]\) is the density of workers of type \(e\!=\!(g\),\(v\)) hired by firm \(i\),
2. \(w_{i}^{g}(v)\!\in\![0\),\(\infty)\) is the wage firm \(i\) pays to workers of type \(e\!=\!(g\),\(v)\) it hires. If \(f_{g}^{i}(v)\!=\!0\), then we fix \(w_{i}^{g}(v)\!=\!0\).

An outcome is a tuple \(O\!:=\!(O_{1}\),\(O_{2})\) where \(O_{i}\) is the outcome for firm \(i\) such that \(f_{g}^{1}(v)\!+\!f_{g}^{2}(v)\!\leq\!f_{g}(v)\) for each \(v\) and \(g\). That is, the (overall) outcome specifies the outcome for both firms such that the total hiring does not exceed the supply of workers (a feasibility requirement). We assume that \(f_{g}^{i}\) and \(w_{i}^{g}\) are measurable functions for each \(i\) and \(g\). We also assume that wages must be monotone non-decreasing in worker productivity within each firm and each group. Formally, for each \(i\!\in\!\{1\),\(2\}\), \(g\!\in\!\{A\),\(B\}\), and any \(v\),\(v^{\prime}\!\in\![0\),\(1]\), \(w_{i}^{g}(v)\!\geq\!w_{i}^{g}(v^{\prime})\) if \(v\!\geq\!v^{\prime}\) and \(f_{g}^{i}(v)\!>\!0\).

**Remark 1**.: Throughout, we maintain the assumption that the wage function is weakly monotone in the aforementioned sense. The motivation behind this assumption is the following.
In many situations, although it may be difficult or impossible for a worker to convince the firm that they have a higher productivity than their true value, it is often easy for a worker to pretend to have a lower productivity than their true value. For example, a worker who is fluent in a foreign language can pretend to be otherwise simply by not speaking that language, while misrepresentation in the opposite direction may be impossible. Thus, if the wage function fails monotonicity, then such a misrepresentation may be both feasible and profitable for the worker. In other words, our monotone wage assumption within a firm and within a group is the condition we impose for robustness against a worker who considers destroying productivity or pretending to have lower productivity than their true value. Under an outcome for \(i\), \(O_{i}\!:=\!\{(f_{g}^{i}(v)\),\(w_{i}^{g}(v))\}_{v\in[0,1],g=A,B}\), firm \(i\) receives profit \[\pi_{i}^{O_{i}}\!:=\!\beta\!\int_{0}^{1}\![v\!-\!w_{i}^{A}(v)]f_{A}^{i}(v)dv\! +\!\int_{0}^{1}\![v\!-\!w_{i}^{B}(v)]f_{B}^{i}(v)dv.\] Given an outcome \(O\!=\!(O_{1}\),\(O_{2})\) and firm \(i\), we denote \(\pi_{i}^{O}\!:=\!\pi_{i}^{O_{i}}\). Given an outcome \(O\), denote by \(AW^{O}_{g}\) the average wages for group \(g\!\in\!\{A,\!B\}\), i.e.,11 Footnote 11: Note that each unemployed worker from group \(g\) contributes a wage of \(0\) to the calculation of the average wage for group \(g\). \[AW^{O}_{g}\!:=\!\!\int\limits_{0}^{1}\!w^{g}_{1}(v)f^{1}_{g}(v)dv\!+\!\int \limits_{0}^{1}\!w^{g}_{2}(v)f^{2}_{g}(v)dv.\] We refer to \(AW^{O}_{A}\!-\!AW^{O}_{B}\) as the _wage gap_ in outcome \(O\). We view two outcomes for firm \(i\), \(O_{i}\!:=\!\{(f^{i}_{g}(v),\!w^{g}_{i}(v))\}_{v\in[0,1],g=A,B}\) and \(\tilde{O}_{i}\!:=\!\{(\tilde{f}^{i}_{g}(v),\!\tilde{w}^{g}_{i}(v))\}_{v\in[0,1],g=A,B}\) as equivalent if, for each \(g\!\in\!\{A,\!B\}\), \(f^{i}_{g}(v)\!=\!\tilde{f}^{i}_{g}(v)\) and \(w^{g}_{i}(v)\!=\!\tilde{w}^{g}_{i}(v)\) for almost all \(v\). We view two outcomes \(O\) and \(\tilde{O}\) as equivalent if either: 1. for every \(i\!\in\!\{1,\!2\}\), \(O_{i}\) is equivalent to \(\tilde{O}_{i}\), or 2. \(O_{1}\) is equivalent to \(\tilde{O}_{2}\), and \(O_{2}\) is equivalent to \(\tilde{O}_{1}\). The first condition captures the usual notion that the outcomes are regarded as equivalent if both the employment patterns and wages are identical between them except for a measure-zero set. The second condition captures the case in which the employment patterns and wages are identical almost everywhere once the names of the firms are relabeled--recall that firms are homogeneous in the present model. Remark **2**.: We refer to the model presented so far as the model without EPSW. In the case with group-based EPSW we add a restriction that, for any outcome and any firm, no positive measures of workers from different groups receive different wages. The formal definition is given in Section 3.2. In the case with non-group-based EPSW, we add a restriction that, for any outcome and any firm, almost all workers at that firm receive the same wages. The formal definition is given in Section 3.3. An outcome is said to be a _core outcome_ if there is no firm and an alternative wage schedule for a subset of workers such that they are made better off being matched to each other, that is, both the firm and each worker in the hired subset obtain a higher payoff than the present outcome. 
Formally, we say that an outcome \(O\!:=\!\{(f^{i}_{g}(v),\!w^{g}_{i}(v))\}_{v\in[0,1],i=1,2,g=A,B}\) is _blocked_ by firm \(j\) via an alternative outcome (for \(j\)) \(\tilde{O}_{j}\!:=\!\{(\tilde{f}^{j}_{g}(v),\!\tilde{w}^{g}_{j}(v))\}_{v\in[0,1],g=A,B}\) if \(\pi^{\tilde{O}_{j}}_{j}\!>\!\pi^{O_{j}}_{j}\) and, for each \(g\!\in\!\{A,\!B\}\) and almost all \(v\!\in\![0,\!1]\), one of the following conditions holds. Note that, because we define \(\tilde{O}_{j}\) to be an outcome, it must satisfy all restrictions imposed on an outcome in addition to those listed below:

1. \(\tilde{w}^{g}_{j}(v)\!\geq\!w^{g}_{j}(v)\) and \(\tilde{w}^{g}_{j}(v)\!>\!w^{g}_{-j}(v)\),
2. \(\tilde{w}^{g}_{j}(v)\!\geq\!w^{g}_{j}(v)\) and \(\tilde{f}^{j}_{g}(v)\!+\!f^{-j}_{g}(v)\!\leq\!f_{g}(v)\),
3. \(\tilde{w}^{g}_{j}(v)\!>\!w^{g}_{-j}(v)\) and \(\tilde{f}^{j}_{g}(v)\!+\!f^{j}_{g}(v)\!\leq\!f_{g}(v)\), or
4. \(\tilde{f}^{j}_{g}(v)\!+\!f^{j}_{g}(v)\!+\!f^{-j}_{g}(v)\!\leq\!f_{g}(v)\).

These cases enumerate all possibilities for the formation of a blocking coalition. Condition 1 states a "no wage cuts" requirement; if firm \(j\) weakly raises the wages of all workers involved, and strictly raises wages for workers employed by the other firm \(-j\), then these workers are all willing to join the blocking coalition. Condition 2 considers the case in which firm \(j\) does not need to poach workers from firm \(-j\) to construct the blocking outcome, so the only constraint on wages is that existing workers' wages are not reduced. Condition 3 considers the case in which firm \(j\) does not need to keep any existing workers to construct the blocking outcome, so the only restriction on wages is that the wage paid to poached workers is higher than those paid by \(-j\) to the same workers. Condition 4 considers the case in which firm \(j\) can hire from unemployed workers to construct the blocking outcome, so there is no additional restriction on the wages of these workers. We say that an outcome \(O\) is a _core outcome_ if there exists no firm and alternative outcome that block it.

Remark **3**.: The definition of block implies two restrictions that any core outcome must satisfy. First, Condition 3 of the definition of block immediately implies the following **Equal Profit Condition**: In any core outcome \(O\), \(\pi_{1}^{O}\!=\!\pi_{2}^{O}\). This is because otherwise the firm earning strictly lower profit could fire all of its existing workers and hire all of the workers employed by the other firm with an arbitrarily small wage increase. Second, the definition of the core implies the following **Individual Rationality Condition** for firms: In any core outcome, there is no set \(V\!\subset\![0,\!1]\) of positive Lebesgue measure, a group \(g\!\in\!\{A\!,\!B\}\), and a firm \(i\!\in\!\{1\!,\!2\}\) such that \(w_{i}^{g}(v)\!>\!v\) for all \(v\!\in\!V\). Intuitively, this is because, if there were, firm \(i\) could simply fire all of the workers in question (i.e. set \(\widetilde{f}_{g}^{i}(v)\!=\!0\) for all \(v\!\in\!V\)) and increase its profit. A formal argument for the case without EPSW is given in the proof of Proposition 1, and an essentially identical argument extends this observation to the case with (group-based or non-group-based) EPSW as well.

Remark **4**.: Our setting and the solution concept of the core are of a cooperative nature. An alternative approach would be to set up a non-cooperative game and analyze its equilibria.
In Appendix C, we present a non-cooperative game wherein the firms simultaneously make wage offers to workers, and each worker accepts at most one of the offers she receives (after observing all offers). The subgame perfect Nash equilibrium outcomes of this game exactly coincide with the set of core outcomes of the cooperative game we describe above. We choose to present the cooperative framework in the main text because its exposition is simpler, and the equivalence mentioned here provides a noncooperative foundation for doing so.

## 3 Results

In this section, we present theoretical results from our model. Throughout, we fix an arbitrary labor market (\(\beta\),\(F_{A}\),\(F_{B}\)) and present results within this labor market, except where explicitly stated otherwise.

### Core without EPSW

We begin our analysis by studying core outcomes without EPSW. We establish that our model leads to very strong predictions both on employment patterns and wages.

Proposition **1**.: _Without EPSW, there exist a continuum of (non-equivalent) core outcomes. In any core outcome, almost every worker is employed and earns a wage equal to her productivity (formally, for all \(i\in\{1,2\}\), all \(g\in\{A,B\}\), and almost all \(v\in[0,1]\): \(f_{g}^{1}(v)+f_{g}^{2}(v)=f_{g}(v)\) and \(w_{i}^{g}(v)=v\) if \(f_{g}^{i}(v)>0\))._

Proposition 1 establishes that, while there are multiple core outcomes, they all feature full employment and result in wages to each worker equal to their productivity. We use this result as a benchmark and proceed to study how the employment patterns and wages are affected by EPSW in the following subsections. Note that, by Proposition 1, in any core outcome \(O\) without EPSW, the wage gap is

\[AW_{A}^{O}-AW_{B}^{O}:= \int\limits_{0}^{1}w_{1}^{A}(v)f_{A}^{1}(v)dv+\int\limits_{0}^{1 }w_{2}^{A}(v)f_{A}^{2}(v)dv-\int\limits_{0}^{1}w_{1}^{B}(v)f_{B}^{1}(v)dv-\int \limits_{0}^{1}w_{2}^{B}(v)f_{B}^{2}(v)dv\] \[= \int\limits_{0}^{1}vf_{A}(v)dv-\int\limits_{0}^{1}vf_{B}(v)dv\] \[= \mathbb{E}_{A}(v)-\mathbb{E}_{B}(v). \tag{1}\]

At first glance, Proposition 1 may seem quite intuitive and perhaps even straightforward: In the absence of EPSW, firms compete in a "Bertrand"-like manner, i.e., compete for each worker in isolation without any reference to wages paid to other workers, so the only core outcomes must feature wages equal to the worker's productivity. While this "Bertrand" intuition is reasonable, the actual proof is considerably more involved. The reason for this complexity is that throughout we assume that the wage function must be weakly non-decreasing within any given firm and each group of workers--recall that monotonicity is assumed to remove incentives for a worker to destroy productivity or to pretend to have lower productivity (see Remark 1). Because of the monotonicity condition on wages, the wage for a particular worker cannot be freely chosen even without EPSW. The main content of the formal proof is that, even with this restriction, any outcome that does not satisfy the "Bertrand"-like features, i.e., that almost all workers are hired at their productivities, admits a blocking outcome which itself satisfies the monotonicity of the wage. In that sense, one interpretation of our analysis is that the sharp prediction obtained under Bertrand competition is in fact robust to a certain kind of wage rigidity, suggesting that the prediction may be more likely in applications than previously considered.
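To make Proposition 1 and the wage-gap expression in (1) concrete, the short numerical sketch below discretizes an example market and evaluates firm profits and average wages at the competitive core outcome. The choice of \(\beta\) and of the productivity densities is an illustrative assumption of this sketch, not a calibration from the paper.

```python
import numpy as np

# Numerical illustration of Proposition 1 and equation (1) on a discretized
# market. Primitives are illustrative assumptions: beta = 2, F_A uniform on
# [0, 1], and F_B with density f_B(v) = 2(1 - v).
beta = 2.0
n = 200_000
v = (np.arange(n) + 0.5) / n        # midpoint grid for productivities in [0, 1]
dv = 1.0 / n
f_A = np.ones_like(v)               # density of A-group productivities
f_B = 2.0 * (1.0 - v)               # density of B-group productivities

# Core outcome without EPSW: almost every worker is employed at wage w(v) = v;
# how workers are split between the two firms is irrelevant for profits/wages.
w_A, w_B = v.copy(), v.copy()

# Total firm profit: the integrand (v - w) is identically zero.
total_profit = beta * np.sum((v - w_A) * f_A) * dv + np.sum((v - w_B) * f_B) * dv

# Average wages by group and the wage gap of equation (1).
AW_A = np.sum(w_A * f_A) * dv       # = E_A(v) = 1/2
AW_B = np.sum(w_B * f_B) * dv       # = E_B(v) = 1/3
print(f"total firm profit    = {total_profit:.4f}")   # ~ 0
print(f"wage gap AW_A - AW_B = {AW_A - AW_B:.4f}")    # ~ 1/6
```

Because wages equal productivities in every core outcome without EPSW, total profit is numerically zero and the wage gap reduces to \(\mathbb{E}_{A}(v)-\mathbb{E}_{B}(v)\), here \(1/2-1/3=1/6\).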
In the next subsections, we analyze whether EPSW affects wages and hiring in any substantive manner. As we will demonstrate, versions of EPSW in fact change employment patterns, such as the unemployment rate and job segregation, as well as the wages paid to different groups of workers.

### Core with Group-Based EPSW

Now we study core outcomes of our game under a group-based EPSW. Informally, this restriction requires that a firm pay the same wage to almost all workers it hires whenever it hires a positive measure of workers from both groups; it does not constrain a firm that hires from only one group. Formally, we modify the definition of outcome \(O_{i}=\{(f_{g}^{i}(v),w_{i}^{g}(v))\}_{v\in[0,1],g=A,B}\) for all \(i\in\{1,2\}\) to include the following restriction: There exist no sets \(V_{g}\subset[0,1]\) and \(V_{-g}\subset[0,1]\) with positive Lebesgue measure such that:

1. \(f_{g}^{i}(v)>0\) for all \(v\in V_{g}\),
2. \(f_{-g}^{i}(v)>0\) for all \(v\in V_{-g}\), and
3. \(\inf_{v\in V_{-g}}w_{i}^{-g}(v)\!>\!\!\sup_{v\in V_{g}}w_{i}^{g}(v)\).

Informally, the preceding restriction prevents a firm from employing sets of workers from both groups with positive measure (points 1 and 2) such that all workers in one set receive strictly higher pay than all workers in the other (point 3).12 Given the symmetry of the above definition across groups, group-based EPSW implies, by transitivity, that if a firm hires a positive measure of workers from both groups, it must pay almost all workers the same wages.

Footnote 12: Note that because the above restriction must hold for every set \(V_{g}\) and \(V_{-g}\) of positive measure, we could equivalently state point 3 using the essential infimum and essential supremum of the wages, respectively.

The next result shows that generically firms must fully segregate by group in any core outcome under group-based EPSW.13

Footnote 13: We consider the space of distributions \(F_{A}\) and \(F_{B}\) describing the distributions of productivities of \(A-\) and \(B-\)group workers, respectively, that admit respective density functions \(f_{A}\) and \(f_{B}\) with respective lower bounds \(\underline{f}_{A}\!,\underline{f}_{B}\!>\!0\) where \(\underline{f}_{A}\!=\!\inf\{f_{A}(v)|v\!\in\![0,\!1]\}\), \(\underline{f}_{B}\!=\!\inf\{f_{B}(v)|v\!\in\![0,\!1]\}\). We endow the space of distributions \(F_{g}\), \(g\!\in\!\{A,\!B\}\) with the weak-\({}^{*}\) topology and consider genericity with respect to the product topology over the product set of distributions, where we say that a property holds generically if the property holds in an open and dense subset.

Proposition **2**.: _Generically, in any core outcome under group-based EPSW, firms completely segregate. Specifically, one firm hires almost all \(A-\)group workers, and the other hires almost all \(B-\)group workers (formally, for some \(i\!\in\!\{1,\!2\}\), \(f_{A}^{i}(v)\!=\!f_{A}(v)\) for almost all \(v\!\in\![0,\!1]\) and \(f_{B}^{-i}(v)\!=\!f_{B}(v)\) for almost all \(v\!\in\![0,\!1]\))._

Remark **5**.: The conclusion of this proposition holds only generically. An example of a non-generic case in which the conclusion fails features \(\beta\!=\!1\) and \(F_{A}(v)\!=\!F_{B}(v)\!=\!v\) for all \(v\!\in\![0,\!1]\). For this parameterization, it is straightforward to verify that there exists a core outcome where firm 1 hires all workers from both groups with \(v\!\in\![0,\frac{1}{2}]\) at wage zero while firm 2 hires all other workers at wage \(\frac{1}{2}\).
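The profit arithmetic behind this non-generic example can be checked directly; a minimal sketch on a discretized grid is below. It only reproduces the Equal Profit computation; the remaining step of ruling out blocking outcomes is the verification referred to in the remark.

```python
import numpy as np

# Check of the profit arithmetic in the Remark 5 example: beta = 1 and
# F_A = F_B = U[0, 1]; a sketch on a discretized grid, not the authors' code.
n = 200_000
v = (np.arange(n) + 0.5) / n
dv = 1.0 / n
f_A = np.ones_like(v)               # uniform densities for both groups
f_B = np.ones_like(v)

# Firm 1 hires every worker (from both groups) with v <= 1/2 at wage zero;
# firm 2 hires every worker with v > 1/2 at wage 1/2.
low, high = v <= 0.5, v > 0.5
profit_1 = np.sum(v[low] * f_A[low]) * dv + np.sum(v[low] * f_B[low]) * dv
profit_2 = (np.sum((v[high] - 0.5) * f_A[high]) * dv
            + np.sum((v[high] - 0.5) * f_B[high]) * dv)

print(f"firm 1 profit = {profit_1:.4f}")   # ~ 0.25
print(f"firm 2 profit = {profit_2:.4f}")   # ~ 0.25 -> Equal Profit Condition
# Each firm pays a single wage to all of its workers, so the group-based EPSW
# restriction is satisfied, and no wage exceeds productivity (Individual
# Rationality).
```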
Following the previous result, we assume throughout that any core outcome under group-based EPSW exhibits full segregation by group. Therefore, we assume in any core outcome \(O\), without loss of generality, that firm 1 hires all \(A-\)group workers (\(f_{A}^{1}(v)\!=\!f_{A}(v)\) for all \(v\)) and firm 2 hires all \(B-\)group workers (\(f_{B}^{2}(v)\!=\!f_{B}(v)\) for all \(v\)). Consider a core outcome \(O\) where \(w_{1}(\cdot)\) specifies firm 1's wages to \(A-\)group workers and \(w_{2}(\cdot)\) specifies firm 2's wages to \(B-\)group workers. By Individual Rationality for the firms (see Remark 3), it suffices to consider \(w_{i}(\cdot)\!:\![0,\!1]\!\to\![0,\!1]\) for each \(i\!\in\!\{1,\!2\}\) in any core outcome. Note that we can therefore represent the wage gap in a core outcome \(O\) under group-based EPSW as \[AW_{A}^{O}\!-\!AW_{B}^{O}=\!\!\!\int\limits_{0}^{1}w_{1}(v)f_{A}(v)dv-\int \limits_{0}^{1}w_{2}(v)f_{B}(v)dv.\] To understand the existence and properties of core outcomes under group-based EPSW, we introduce new machinery. Suppose firm 2 attempts to block an outcome \(O\) that involves segregation as detailed in the preceding paragraph. One potential blocking outcome that firm 2 could undertake is to desegregate and hire positive measures of workers from both groups. By the restrictions of group-based EPSW, this would require paying a common wage \(\epsilon\!\in\![0,\!1]\) to (almost) all workers it hires in the proposed blocking outcome. To denote the set of workers potentially available to be "poached," we define \[w_{i}^{-1}(\epsilon)\!:=\!\begin{cases}\sup\{v|w_{i}(v)\!\leq\!\epsilon\}& \text{if }\{v|w_{i}(v)\!\!\leq\!\epsilon\}\!\neq\!\emptyset,\\ 0&\text{otherwise},\end{cases}\] for each \(i\in\{1,2\}\) and \(\epsilon\in[0,1]\), which characterizes the highest productivity worker hired by firm \(i\) who is paid no more than \(\epsilon\). \(w_{i}^{-1}(\cdot)\) is a generalization of a traditional inverse function of \(w_{i}(\cdot)\), in that we allow either or both of these functions to be weakly increasing instead of strictly increasing, which is necessary to study the class of wage functions that satisfy our monotonicity condition. Figure 1 plots a particular wage function \(w_{2}(\cdot)\) and its inverse \(w_{2}^{-1}(\cdot)\). Consider the following inequalities which we refer to as the _No Desegregation Condition_: \[\pi_{2}^{O}:=\!\!\int\limits_{0}^{1}(v\!-\!w_{2}(v))f_{B}(v)dv\!\geq\!\beta\int \limits_{\epsilon}^{w_{1}^{-1}(\epsilon)}(v\!-\!\epsilon)f_{A}(v)dv+\int \limits_{\epsilon}^{w_{2}^{-1}(\epsilon)}(v\!-\!\epsilon)f_{B}(v)dv\qquad \mbox{ for all }\epsilon\in[0,1] \tag{2}\] It states that firm 2 does not wish to desegregate, pay a common wage of \(\epsilon\) to all workers it employs, and hire only the workers with productivities above \(\epsilon\) who are also paid less than \(\epsilon\) according to \(w_{1}(\cdot)\) and \(w_{2}(\cdot)\). If the Equal Profit Condition also holds, then the No Desegregation Condition implies that firm 1 does not wish to block the outcome by desegregating either. The following result finds that, in addition to Individual Rationality and the Equal Profit Condition which we have previously introduced, the No Desegregation Condition exactly characterizes the set of core outcomes. **Lemma 1**.: _Consider an outcome \(O\) in which \(f_{A}^{1}(v)=f_{A}(v)\) for all \(v\) and \(f_{B}^{2}(v)=f_{B}(v)\) for all \(v\). Then \(O\) is a core outcome if and only if:_ 1. 
\(w_{i}(v)\leq v\) _for all_ \(v<1\) _and all_ \(i\in\{1,2\}\) _(Individual Rationality),_ 2. \(\pi_{1}^{O}\!=\!\pi_{2}^{O}\) _(Equal Profit Condition), and_ 3. \((2)\) _is satisfied (No Desegregation Condition)._

Figure 1: Relationship between a wage function and its inverse

The existence of a core outcome holds generally under group-based EPSW; consider the zero-profit outcome in which firm \(1\) hires all \(A-\)group workers and firm \(2\) hires all \(B-\)group workers, and all workers are paid wages equal to their productivities. Note that \(w_{i}^{-1}(\epsilon)\!=\!\epsilon\) for any \(i\!\in\!\{\!1,\!2\}\) and any \(\epsilon\!\in\![0,\!1]\), implying that the No Desegregation Condition is trivially satisfied, as the right-hand side is identically \(0\) for all \(\epsilon\). This outcome is similar to core outcomes without EPSW, except that the workforce is now necessarily segregated. However, there are also other core outcomes under group-based EPSW that result in positive firm profits and a gap in average pay between the two groups. Indeed, there always exist a continuum of (non-equivalent) core outcomes. Moreover, if the measure of \(A-\)group workers is strictly larger than the measure of \(B-\)group workers, all but one core outcome exhibits a larger wage gap than in the absence of group-based EPSW, and firm profits are higher in core outcomes with larger wage gaps.

Proposition **3**.: _Suppose there is a group-based EPSW._

1. _There exist a continuum of non-equivalent core outcomes._
2. _Let_ \(\beta\!>\!1\)_. There exists one core outcome (and its equivalent outcomes) that yields the same wage gap as in the (essentially unique) core outcome without EPSW. In all other core outcomes under group-based EPSW, the wage gap is strictly larger._
3. _Let_ \(\beta\!>\!1\)_. Consider any two core outcomes. The wage gap is larger in the first outcome if and only if firm profit is higher in the first outcome._

As demonstrated by Part 2 of Proposition 3, group-based EPSW widens the wage gap between the two groups (and the widening of the wage gap is strict except for one core outcome among a continuum). Moreover, Part 3 shows that larger wage gaps under group-based EPSW are associated with higher firm profits. An implication of this last result is that firms prefer core outcomes that result in larger wage gaps, suggesting that a core outcome with a larger wage gap may be more likely to occur if firms can coordinate to select an outcome from the core. When interpreting this result, we emphasize that we have made no assumptions about the relative productivities of the two groups of workers. Specifically, Proposition 3 predicts that if the wage gap is negative in the core outcomes without EPSW (which can occur if, for example, there are discriminatory factors against workers in the majority group), it will increase in a core outcome with group-based EPSW in the sense that it will either become less negative or change signs entirely. This consideration will become important in our empirical analysis of a Chilean EPSW in Section 5, where we find that the wage gap widened (i.e. in favor of men) after the introduction of EPSW in male-majority labor markets while the wage gap closed (i.e. in favor of women) in female-majority ones. Both findings are as predicted by Proposition 3.

#### 3.2.1 How large can the wage gap be?

We have demonstrated that group-based EPSW can lead to an increase in the wage gap.
In this section, we analyze how large this wage gap can become and how it is related to the relative size of the two groups of workers. We show that when the proportion of \(A-\)group workers in the market grows sufficiently large, nearly maximal wage inequality (subject to individual rationality) between \(A-\) and \(B-\)group workers can be supported in a core outcome: all \(A-\)group workers are paid nearly their marginal products, and all \(B-\)group workers receive exactly zero pay, regardless of productivity. This core outcome in which \(B-\)group workers receive zero pay maximizes firm profits (by Proposition 2 and the Equal Profit Condition) and therefore also maximizes the wage gap (Proposition 3, Part 3). As discussed in the introduction, our results suggest that EPSW may cause or exacerbate "occupational tipping" patterns in which, for example, women select away from an industry if it becomes too male dominated (Pan, 2015). We present two results that formalize those claims. First we show that for any sufficiently large \(\beta\), all \(A-\)group workers receive a wage nearly equal to their productivity. **Proposition 4**.: _Suppose there is a group-based EPSW and fix \(F_{A}\) and \(F_{B}\) arbitrarily. For any \(\delta\!>\!0\) there exists \(\beta^{*}\!\in\![1,\!\infty)\) such that in the market \((\beta,\!F_{A},\!F_{B})\) with any \(\beta\!>\!\beta^{*}\), \(w_{1}(v)\!>\!v-\delta\) for all \(v\) in any core outcome._ The intuition for this result is simple and closely related to the rise in segregation under group-based EPSW (Proposition 2). First, because the two firms hire from different groups, the profit for each firm is bounded by the social surplus created by workers of the group from which the firm hires. Second, Proposition 2 also shows that almost all workers are employed, so the maximum social surplus is created by both groups. Third, the Equal Profit Condition implies that the firms must obtain the same profit. Therefore, when the proportion of \(A-\)group workers is large, most of the social surplus created by them must be paid to them as a wage. In other words, if some \(A-\)group workers receive wages bounded away from their productivity even for large \(\beta\), then the firm hiring the \(A-\)group workers would receive higher profit than the firm hiring the \(B-\)group workers, a violation of the Equal Profit Condition. Next, we proceed to explore the lowest wages we can sustain for \(B-\)group workers. To do so, we provide a necessary and sufficient condition for a wage schedule for \(B-\)group workers to be part of some core outcome. In light of Lemma 1, for any given \(w_{2}(\cdot)\) we proceed by first characterizing the wage functions \(w_{1}(\cdot)\) that satisfy the No Desegregation Condition stated in (2) (while addressing Individual Rationality and the Equal Profit Condition later). Consider the following construction. For each \(\epsilon\), define \(\phi(\epsilon)\) implicitly as the supremum of \(\tilde{v}\) such that \[\int\limits_{0}^{1}(v\!-\!w_{2}(v))f_{B}(v)dv\!\geq\!\beta\!\int \limits_{\epsilon}^{\tilde{v}}(v\!-\!\epsilon)f_{A}(v)dv+\int\limits_{\epsilon }^{w_{2}^{-1}(\epsilon)}(v\!-\!\epsilon)f_{B}(v)dv. \tag{3}\] Then, we define \[\hat{w}_{1}(v)\!:=\!\begin{cases}\sup\{\epsilon|\phi(\epsilon)\!\leq\!v\}& \text{if }\{\epsilon|\phi(\epsilon)\!\leq\!v\}\!\neq\!\emptyset,\\ 0&\text{otherwise.}\end{cases}\] and \[\hat{w}_{1}^{-1}(\epsilon)\!:=\!\begin{cases}\sup\{v|\hat{w}_{1}(v)\!\leq\! 
\epsilon\}&\text{if }\{v|\hat{w}_{1}(v)\!\leq\!\epsilon\}\!\neq\!\emptyset,\\ 0&\text{otherwise.}\end{cases}\] Note that \(\hat{w}_{1}(v)\) is weakly increasing in \(v\). This is because the supremand \(\{\epsilon|\phi(\epsilon)\!\leq\!v\}\) weakly expands (in the set inclusion sense) when we increase \(v\). Figure 2 illustrates the construction of \(\phi(\cdot)\) and \(\hat{w}_{1}(\cdot)\) for a certain specification of the model primitives. Figure 3 presents the relationship between \(\phi(\cdot)\) and \(\hat{w}_{1}^{-1}(\cdot)\). As can be seen--and as we show formally in Remark 8 in the appendix--\(\hat{w}_{1}^{-1}(\cdot)\) is the largest monotone nondecreasing function that is pointwise no larger than \(\phi(\cdot)\). Considering the differences between \(\phi(\cdot)\) and \(\hat{w}_{1}^{-1}(\cdot)\) is illuminating in order to understand the purpose of the additional machinery we have introduced in this section. \(\phi(\cdot)\) and \(\hat{w}_{1}^{-1}(\cdot)\) differ over the interval \((\epsilon^{*},\frac{1}{2})\). By construction, \(\phi(\epsilon)\) is the highest productivity \(A-\)group worker that can be hired at a desegregation attempt \(\epsilon\) while still deterring desegregation. But \(\phi(\epsilon)\!>\!\phi(\epsilon^{*})\!=\!\phi(\frac{1}{2})\) for any \(\epsilon\!\in\!(\epsilon^{*},\frac{1}{2})\), meaning that the highest productivity \(A-\)group worker that can be hired while still deterring desegregation is higher at \(\epsilon\) than at \(\epsilon^{*}\) and at \(\frac{1}{2}\). We construct \(\hat{w}_{1}(\cdot)\) to deter desegregation at any wage in \([0,\!1]\); therefore, satisfying (3) at \(\frac{1}{2}\) implies that the No Desegregation Condition is slack for \(\epsilon\!\in\!(\epsilon^{*},\frac{1}{2})\), which is reflected in \(\hat{w}_{1}^{-1}(\cdot)\) being constant over \([\epsilon^{*},\frac{1}{2}]\). Not only is \(\hat{w}_{1}(\cdot)\) constructed to deter desegregation; we show that it is also the cheapest monotonic wage function for firm 1 that does so. Define the profit firm 1 receives if it were to hire all \(A-\)group workers and pay wage \(\hat{w}_{1}(v)\) to each \(A-\)group worker of productivity \(v\):

\[\hat{\pi}_{1}\!:=\!\beta\int_{0}^{1}(v\!-\!\hat{w}_{1}(v))f_{A}(v)dv.\]

The following lemma provides a necessary and sufficient condition under which \(w_{2}(\cdot)\) can be supported in a core outcome.

**Lemma 2**.: _There exists a core outcome \(O\!=\!(O_{1}\!,\!O_{2})\) in which firm 2 pays wages \(w_{2}(\cdot)\) to the workers it employs if and only if \(\hat{\pi}_{1}\!\geq\!\pi_{2}^{O_{2}}\)._

Figure 4 illustrates the "if" part of Lemma 2 for the environment described in Figure 2. In Panel (a), \(\hat{\pi}_{1}\) corresponds to the blue region multiplied by \(\beta\!=\!4\), which is larger than \(\pi_{2}^{O_{2}}\) depicted by the starred area. Panel (b) illustrates an example of a wage function \(w_{1}(\cdot)\) that supports a core outcome. To see that \(w_{1}(\cdot)\) indeed supports a core outcome, observe that (i) clearly \(0\!\leq\!w_{1}(v)\!\leq\!v\) holds for all \(v\), so Individual Rationality is satisfied; (ii) the green area (multiplied by \(\beta\)), which represents firm 1's profit under \(w_{1}(\cdot)\), is equal to \(\pi_{2}^{O_{2}}\), so that the Equal Profit Condition is satisfied; and (iii) by construction, \(w_{1}(v)\!\geq\!\hat{w}_{1}(v)\) for all \(v\!\in\![0,\!1]\), and by inspection of (3) we can see that \(w_{1}(\cdot)\) satisfies the No Desegregation Condition because \(\hat{w}_{1}(\cdot)\) does.
Thus, by Lemma 1, the wage profiles are part of a core outcome. Note that the property required in Lemma 2 that \(\hat{\pi}_{1}\!\geq\!\pi_{2}^{O_{2}}\) is satisfied by this example, and this fact guarantees the existence of an appropriate \(w_{1}(\cdot)\) satisfying the three conditions characterizing a core outcome.

Figure 4: Profit, and the existence of a core outcome with \(w_{2}(\cdot)\)

With this machinery, we are now ready to state and prove the following result:

Proposition **5**.: _Suppose there is a group-based EPSW and fix \(F_{A}\) and \(F_{B}\) arbitrarily. Let \(w_{2}(\cdot)\) be an arbitrary wage function such that \(w_{2}(v)\!\leq\!v\) for all \(v\!\in\![0,\!1]\). There exists \(\beta^{*}\!\in\![0,\!\infty)\) such that in the market \((\beta,\!F_{A},\!F_{B})\) with any \(\beta\!>\!\beta^{*}\), there exists a core outcome in which the wage schedule of \(B-\)group workers is given by \(w_{2}(\cdot)\)._

Remark **6**.: For a subclass of wage functions, one can strengthen the conclusion of Proposition 5 to show that there exists a "threshold" value of \(\beta\) for which a given wage function \(w_{2}(\cdot)\) can be supported in a core outcome. We state and prove this result in the appendix for a general class of functions \(w_{2}(\cdot)\) while describing this result for a special case here, namely for the lowest-possible wage function, \(w_{2}(v)\!=\!0\) for all \(v\): Suppose there is a group-based EPSW and fix \(F_{A}\) and \(F_{B}\) arbitrarily. There exists \(\beta^{*}\!\in\!(0,\infty)\) such that in the market (\(\beta\),\(F_{A}\),\(F_{B}\)) with any \(\beta\!>\!\beta^{*}\) there exists a core outcome in which \(w_{2}(v)\!=\!0\) for all \(v\) (i.e. the wage of each \(B-\)group worker is \(0\)) while in the market (\(\beta\),\(F_{A}\),\(F_{B}\)) with any \(\beta\!<\!\beta^{*}\) there exists no core outcome in which \(w_{2}(v)\!=\!0\) for all \(v\).

Propositions 4 and 5, together with Remark 6, imply that the wage gap between \(A-\) and \(B-\)group workers can be large when there are many more \(A-\)group workers than \(B-\)group workers. In particular, if all \(B-\)group workers receive zero wages in a core outcome, then as \(\beta\) grows large, it follows from the Equal Profit Condition that there is nearly maximal inequality between \(A-\) and \(B-\)groups: \(A-\)group workers are nearly paid their marginal products. While there exists a multiplicity of core outcomes, recall that our Proposition 3 implies that this core outcome with maximal inequality in pay between the groups is the firm-optimal core outcome.

### Core with Non-Group-Based EPSW

Now we study core outcomes of our game under a non-group-based EPSW. Informally, this restriction requires that each firm pays the same wages to almost all workers it hires. Formally, we modify the definition of outcome \(O_{i}=\{(f_{g}^{i}(v),w_{i}^{g}(v))\}_{v\in[0,1],g=A,B}\) for all \(i\in\{1,2\}\) to include the following restriction: There exists \(w_{i}\in[0,1]\) such that \(w_{i}^{g}(v)=w_{i}\) for all \(g\in\{A,B\}\) and almost all \(v\in[0,1]\) such that \(f_{g}^{i}(v)>0\). Observe that this restriction makes no reference to the group identity of a worker, so it is convenient for us to proceed with the following distribution of productivities of the entire population, with its cumulative distribution function \(F\) given by

\[F(v):=\frac{\beta}{1+\beta}F_{A}(v)+\frac{1}{1+\beta}F_{B}(v), \tag{4}\]

and denote the associated density function as \(f(v)\).
Denote \(\bar{f}:=\sup\{f(v)|v\in[0,1]\}\) and \(\underline{f}:=\inf\{f(v)|v\in[0,1]\}\) where \(0<\underline{f}<\bar{f}<+\infty\) by our previous assumptions on \(\underline{f}_{A},\underline{f}_{B},\bar{f}_{A}\), and \(\bar{f}_{B}\). In any outcome we denote the density of workers hired by each firm \(i\in\{1,2\}\) as \(f^{i}(v):=\frac{\beta}{1+\beta}f_{A}^{i}(v)+\frac{1}{1+\beta}f_{B}^{i}(v)\). Given the non-group-based EPSW, it is without loss of generality to specify only \(f^{i}(v)\) instead of both \(f_{A}^{i}(v)\) and \(f_{B}^{i}(v)\), \(i\in\{1,2\}\). Furthermore, an outcome of the game specifies wages paid by firms \(1\) and \(2\) to (almost) all of its workers, \(w_{1}\in\mathbb{R}_{+}\) and \(w_{2}\in\mathbb{R}_{+}\), respectively. Without loss of generality, assume \(w_{1}\leq w_{2}\). **Proposition 6**.: _Suppose there is a non-group-based EPSW._ 1. _There exist a continuum of non-equivalent core outcomes. In any core outcome,_ \(w_{1}<w_{2}\)_._ 2. _There exists one core outcome (and its equivalent outcomes) in which almost all workers are employed. In all other core outcomes, a strictly positive measure of workers are unemployed._ 3. _Consider any two core outcomes. The measure of unemployed workers is higher in the first outcome if and only if firm profit is lower in the first outcome._ Several observations are in order under a non-group-based EPSW. First, segregation of workers across firms occurs by productivity. More specifically, one firm hires almost every worker (from both groups \(A\) and \(B\)) whose productivity is above a threshold, the other firm hires almost every worker with productivity below that threshold but above another threshold, and the lowest-productivity workers remain unemployed. Second, workers of the same productivity receive the same wage, irrespective of their group identity. Third, there is no wage gap between workers within the same firm. Remark 7.: The effect of non-group-based EPSW on wage gaps between the groups is indeterminate. First we show that the difference in wages paid by the two firms can be arbitrarily large or arbitrarily small in different core outcomes _in the same market_. Second, we show that again in different core outcomes in the same market, a non-group-based EPSW can either increase or decrease the gap in average pay between \(A-\) and \(B-\)group workers compared to the core outcome wage gap without EPSW (which is equal to \(\mathbb{E}_{A}(v)-\mathbb{E}_{B}(v)\) by (1)). More formally, under non-group-based EPSW: 1. For any \(\epsilon\!>\!0\) there exists a market (\(\beta\),\(F_{A}\),\(F_{B}\)) such that there exist core outcomes \(O\) and \(O^{\prime}\) (both with \(w_{2}\!>\!w_{1}\)) such that: \(w_{2}\!-\!w_{1}\!>\!1\!-\!\epsilon\) in \(O\) and \(w_{2}\!-\!w_{1}\!<\!\epsilon\) in \(O^{\prime}\). 2. There exists a market (\(\beta\),\(F_{A}\),\(F_{B}\)) and two core outcomes \(O\) and \(O^{\prime}\) (both with \(w_{2}\!>\!w_{1}\)) such that: \(AW_{A}^{O}-AW_{B}^{O}\!>\!\mathbb{E}_{A}(v)\!-\!\mathbb{E}_{B}(v)\) and \(AW_{A}^{O^{\prime}}-AW_{B}^{O^{\prime}}\!<\!\mathbb{E}_{A}(v)\!-\!\mathbb{E}_ {B}(v)\). We provide a constructive proof of this remark in the appendix. ## 4 Discussion Our analysis thus far has centered around a model with two homogeneous firms and two groups of workers, and moreover, EPSW is applied equally to both firms, so that the effect of EPSW can be meaningfully analyzed in the simplest setup possible. 
In this section, we discuss how relaxing these simplifying assumptions affects our theoretical predictions. Formal results are relegated to Appendix B.

### Multiple Groups and/or Firms

In Appendix B.1 we analyze more general cases in which the number of firms or groups (or both) is larger than two. Most of our findings generalize to these cases, while we also find some subtleties under group-based EPSW. Without EPSW, there is a continuum of core outcomes and, in any core outcome, almost every worker is employed and receives a wage equal to her productivity (a generalization of Proposition 1). With a non-group-based EPSW, each firm "segregates by productivity" by setting a uniform wage in any core outcome (a generalization of Proposition 6). With a group-based EPSW, if the number of firms is larger than the number of groups, the outcome becomes similar to the case without EPSW: Specifically, firms completely segregate by group, each firm earns zero profit, and almost every worker is employed and receives a wage equal to her productivity. By contrast, if the number of groups is sufficiently large, then the core outcomes under group-based EPSW become equivalent to those under non-group-based EPSW. Overall, we find that our analysis readily generalizes if there is no EPSW or the EPSW is not based on groups. By contrast, we find that the effect of group-based EPSW depends on some characteristics of the specific applications. More specifically, our analysis suggests that the key parameters to be aware of are the number of groups to be protected by the law, as well as the competitive environment in the sense of the firms competing for workers from the same segment of the labor market.

### Heterogeneous Treatment EPSW

One form of homogeneity imposed in our base model is that both firms are constrained by EPSW. In Appendix B.2 we discuss the implications of our model in which a group-based EPSW applies to only one of the two firms, a situation we refer to as _heterogeneous treatment_. Cases of heterogeneous treatment have been documented and studied by economists; a U.S. federal EPEW policy restricted only federal contractors in the 1970s (see Donohue III and Heckman (1991)). As we discuss in our empirical application below, this model offers important predictions in the study of EPSW in Chile. To fix ideas, consider again a model with two firms, but now let us assume that firm 1 is subject to the group-based EPSW while firm 2 is not. Proposition 11 demonstrates that the job segregation effect of homogeneous EPSW largely carries over to the case with heterogeneous EPSW, while the effect on the wage gap disappears completely. Specifically, in any core outcome, firm 1 is fully segregated in the sense that it hires from only (at most) one group, while firm 2 can hire workers from both groups; meanwhile, almost every worker receives a wage equal to her productivity. Therefore, the introduction of heterogeneous-treatment EPSW leads to no change in wages (and therefore no change in the wage gap) compared to the case without EPSW.

### Taste-Based Bias

Our base model does not explicitly incorporate (taste-based) bias against the minority group, nor does it consider differences in bias across firms. It is worth noting, however, that the base model allows for bias against the minority group, as long as there is no heterogeneity across the two firms in terms of their bias. Specifically, we allow the distributions of productivity of \(A-\)group workers to be different from those for \(B-\)group workers.
By interpreting productivity of \(B-\)group workers as net of the disutility that (a manager of) the firm incurs when hiring a \(B-\)group worker, the model becomes one without any explicit disutility term associated with \(B-\)group workers, while their productivity distributions are shifted to reflect the effect of the disutility for the firms. In Appendix B.3, we consider the case in which one firm (referred to as the biased firm) has biased preferences while the other firm is purely profit motivated.14 Specifically, firm 1 incurs a constant per-worker disutility for hiring workers from \(B-\)group, while firm 2 does not incur any disutility from hiring a \(B-\)group worker. We show that the main predictions of our base models remain largely unchanged, though with some subtle changes. More specifically, without EPSW, firms completely segregate in any core outcome, just as in the case without a biased firm. With group-based EPSW, larger wage gaps arise in core outcomes compared to the case without EPSW.15 Finally, the wage gap can be arbitrarily large if the share of \(A-\)group workers is sufficiently large (i.e. \(\beta\) is large). In these senses, the theoretical predictions for the case with a biased firm are largely unchanged from the case without a biased firm. We briefly discuss other forms of heterogeneity in Appendix B.4. Footnote 14: Per the discussion of the last paragraph, one can also interpret the model as cases in which both firms have bias, but one of the firms incurs larger disutility from hiring a \(B-\)group worker than the other. Footnote 15: Specifically, with group-based EPSW, the set of possible wage gaps across core outcomes expands in the set inclusion sense compared with the case without EPSW, and the “new” wage gaps that can be supported in a core outcome feature a larger wage gap than those without EPSW. ## 5 Empirical Analysis In this section, we present an empirical test of our model findings by analyzing a 2009 EPSW in Chile. ### Institutional Background Chile is an OECD country with nearly 20 million inhabitants. The Chilean labor market is relatively concentrated in the formal employment sector; the informal labor market share around the time of the policy was 25%, the lowest in Latin America (Gasparini and Tornarolli, 2007). Only 10% of the (formal) workforce is unionized, and only union members are covered by collective bargaining agreements, implying that Chilean firms plausibly have a high degree of wage-setting power. Workers can be fired without cause and without notice at the cost of one month's wages. The gender wage gap in Chile is similar to that in the United States. Chilean female workers earn 18-23% less than their male counterparts (Petricara and Bueno, 2009). Female labor force participation was roughly 30% in 2009, which is lower than in many OECD and other Latin American countries (Verick, 2014). ### EPSW implementation In June 2009, Law 20.348 was signed as an amendment to Chile's labor code with the General Secretary of the Chilean Senate declaring, "The main objective of the initiative is to establish the right to equal remuneration between men and women for the provision of services of similar value." We refer throughout to June 2009 as the time of announcement. The law took effect in November 2009, which we refer to as the time of enactment. An important part of the discussion and debate surrounding the law was providing a definition of "similar work." 
The law specifies that a firm cannot pay a man and a woman different wages for "arbitrary reasons"; pay differences across genders are allowable only if workers fall into different coarse categories based on skills, qualifications, suitability, responsibility, or productivity. Firms that do not comply are subject to sizable monetary fines per offense, as we discuss below. The law also establishes a 10% discount for any other labor fines a firm is subject to if it pays men and women the same wages for "similar jobs and responsibilities." We therefore classify the law for our purposes as (a gender-based) EPSW.16 Footnote 16: Guideline 1187/018 published on April 2010 by the Directorate of Labor clarifies that 1) the law does not bind within gender group, and 2) that a firm paying even a single man more than a single woman (or vice versa) despite both performing similar work is in violation of the law. The law has different consequences for firms of different sizes, based on the number of a firm's workers with long-term employment contracts.17 Firms with 10 or more long-term workers are required to explicitly have a grievance procedure for gender-based pay discrimination. Workers in firms above this threshold who allege the firm has violated Law 20.348 must receive a sufficient response from the firm within 30 days. If no such response is received, the worker can file a complaint at the Labor Inspection Office or can directly raise the issue with a labor court. Financial penalties also differ per infraction by firm size. Firms with 10-49 long-term workers found to violate the law are subject to a fine of 69-1,384 USD per worker-month of violation, while firms with fewer than 10 long-term workers are not subject to a financial penalty.18 Footnote 17: The vast majority of workers in Chile have either long-term contracts (no end date is specified ex ante) or fixed-term contracts (an end date is specified ex ante, although such contracts are automatically transitioned into long-term contracts if the worker continues to be employed beyond the contract end date). EPSW protections for workers of different contract types within the same firm are identical. Nearly half of firm-months in our dataset contain no workers with fixed-term contracts. Footnote 18: Cruz and Rau (2022) further discuss how the law imposed mandatory transparency guidelines on worker roles in the firm, and additional fines for violations, for firms with at least 200 long-term workers. They show that the disclosure policy reduced the gender wage gap through a bargaining channel. Our analysis avoids firms treated by this additional policy, due to the potential confounding equilibrium effects of transparency policies (for further discussion on potential equilibrium effects, see Cullen and Pakzad-Hurson, 2023). Initial evidence suggests that the law was both widely known to workers, and enforced. In a 2013 governmental survey,19 11% of respondents stated that they know someone that has complained using the law. Through a public-records request, we found that 9,577 complaints were filed by workers alleging violations of Law 20.348, 9,723 inspections were carried out by the government, and that 489 individual firms were punished. The average fine amount was 1,167 USD per violation (each worker-month of unfair pay is a separate violation of the law). 
Footnote 19: See [https://www.evaluaciondelalley.cl/wp-content/uploads/2019/07/ley_20348_igualdad_remuneraci_ones.pdf](https://www.evaluaciondelalley.cl/wp-content/uploads/2019/07/ley_20348_igualdad_remuneraci_ones.pdf). ### Data We study the effects of EPSW using matched worker-firm administrative data from the Chilean unemployment insurance system from January 2005 to December 2013. We observe a random sample of firms, stratified by size, totaling roughly 4% of all firms. In our data, an observation is a worker firm-month. For each observation, the data include worker pay20 and demographic information including gender, education level, contract status, age, and marital status; additionally, we observe the firm's geographic location and industry code. We observe the entire employment history of each worker ever employed at a sampled firm. We discuss further details of our dataset in Appendix D.5. Footnote 20: Worker monthly pay at each firm is top coded in our data. The threshold for top coding varies over time; in June 2009, the top-code threshold was roughly 83,550. The share of observations in our data that are top coded is 1.7%. Firms in our sample are typically small, with a median of 9 concurrent workers. However, there are outliers. Following Bennedsen et al. (2022); Boheim and Gust (2021); Duchini et al. (2022); Gulyas et al. (2023) we only consider firms of similar sizes at the time of policy announcement in order to limit size-based wage dynamics. In our main specifications, we consider firms with at least 6 and no more than 13 total workers at announcement, which includes roughly 40% of firms in our data. In order to limit the impacts of firms that close after EPSW announcement (who potentially fail due to the policy) or those that are founded after announcement (who came into being in a labor environment with wage constraints), we restrict our analysis to a balanced sample of firms that are in operation during the entire window of our analysis. As we discuss in the following pages and in Appendix D.3, our findings are qualitatively similar under alternative size and balance restrictions. In Table 1, we present descriptive statistics for (column I) the set of firms prior to the size restriction, (column II) the set of firms following the size restriction, and (column III) the set of firms following balancing which constitute our baseline sample. ### Empirical Strategy Based on our theoretical findings, we investigate the effect of EPSW on gender segregation within firms and on the differential pay of men and women. To obtain the causal effect of EPSW on our outcomes of interest, we consider an event-study analysis wherein firms are considered "treated" if they were subject to EPSW at announcement (i.e. the firm in question employed at least 10 long-term workers in June 2009) and "control" otherwise.21 We present and discuss supportive evidence for our designation of treatment status in Appendix D.1. Appendix D.1 also presents "placebo" tests which do not find statistically or economically meaningful effects at alternative firm size thresholds, supporting that the observed effects of EPSW around the size threshold specified in the law are plausibly causal. Footnote 21: Following the discussion in Section 4, the magnitude of the effects of EPSW on a firm are predicted to depend on the number and treatment status of its competitors. We do not observe all firms in a local labor market, nor do we separately observe any firm’s direct competitors. 
Our event-study analysis assumes that only the treatment status of the own firm is relevant for determining EPSW's directional effect on the outcomes we study, and the outcome variables we analyze comport with this assumption. By contrast, a discontinuity-based identification strategy around the size threshold of 10 would implicitly require additional assumptions about the share of a firm's competitors that are treated by EPSW. As we have little support in making such assumptions, we do not proceed with this alternative approach.

Our theoretical analysis, and extensions in Section 4, predict an increase in segregation for all firms that are treated by EPSW. One conceptual hurdle is that we do not observe which workers perform "similar" work within a firm. Therefore, we consider complete firm-level gender segregation, implying the gender segregation of every set of similar workers within the firm. We also predict a shift in the wage gap in favor of the majority gender group of workers in the local labor market in which a firm treated by EPSW operates. An important question is, therefore, how we define local labor markets. There are 21 industry codes and 321 geographic counties in our data. We define the local labor market a firm operates in by the firm's geographic county and industry code pair.

### Effect of EPSW on Segregation

To study the effect of EPSW on gender segregation within the firm, we consider a panel in which an observation is a firm-month. We let \(j\) index a firm and let \(t\) index a month. We construct a full segregation indicator, \(full_{jt}\), that equals 1 in time \(t\) if all workers employed by firm \(j\) are of the same gender at time \(t\), and 0 otherwise. We also construct an indicator for whether the firm employs at least 10 long-term workers in June 2009 (policy announcement date), which we call \(above10_{j}\). Finally, we construct a post-treatment indicator \(post_{t}\) that is 1 if the time \(t\) of the observation is from June 2009 or later, and zero otherwise. We estimate difference-in-differences models of the following form:

\[full_{jt}\!=\!\alpha_{j}\!+\!\alpha_{k(j)t}\!+\!\beta^{seg}(above10_{j}\!\times \!post_{t})\!+\!X_{jt}\Lambda\!+\!\epsilon_{jt} \tag{5}\]

where \(X_{jt}\) is a vector of controls indicating the share of workers (strictly) younger than the median age in the industry-region, the share of workers with tertiary education, and the share of workers that have long-term contracts. \(\alpha_{j}\) is a fixed effect for firm \(j\), and \(\alpha_{k(j)t}\) are time-varying fixed effects at the level of firm \(j\)'s industry-county (we provide results from additional empirical specifications which consider time trends at more aggregated levels in Appendix D.2). Our coefficient of interest is \(\beta^{seg}\), and we interpret it as the effect of the policy on the share of gender-segregated firms.
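For concreteness, the sketch below shows how a specification in the spirit of (5) could be estimated as a dummy-variable regression with firm-clustered standard errors. The synthetic panel, the variable names, and the use of calendar-month fixed effects in place of the month-by-industry-by-county effects \(\alpha_{k(j)t}\) are simplifying assumptions of this illustration; it is not the authors' data or estimation code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative estimation of a specification in the spirit of (5) on a small
# synthetic firm-month panel. Variable names, the data-generating process, and
# the simplified fixed-effects structure are assumptions of this sketch.
rng = np.random.default_rng(0)
n_firms, n_months = 300, 24
firm = np.repeat(np.arange(n_firms), n_months)
month = np.tile(np.arange(n_months), n_firms)
above10 = (firm % 2 == 0).astype(int)          # stand-in treatment indicator
post = (month >= 12).astype(int)               # post-announcement indicator

# Synthetic outcome: a binary full-segregation indicator driven by a latent
# index with a positive post-policy shift for treated firms.
firm_effect = rng.normal(0.0, 0.1, n_firms)[firm]
latent = 0.31 + firm_effect + 0.05 * above10 * post + rng.normal(0, 0.35, firm.size)
df = pd.DataFrame({
    "full": (latent > 0.5).astype(int),
    "firm": firm, "month": month, "above10": above10, "post": post,
    "share_young": rng.uniform(size=firm.size),   # one stand-in control
})

# full_jt = alpha_j + alpha_t + beta_seg * (above10_j x post_t) + X Lambda + e,
# with standard errors clustered at the firm level.
res = smf.ols("full ~ above10:post + share_young + C(firm) + C(month)",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(res.params["above10:post"], res.bse["above10:post"])
```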
To understand more about the dynamic effects of EPSW, we consider the following difference \begin{table} \begin{tabular}{l c c c} \hline \hline & (I) & (II) & (III) \\ & All firms & 6 to 13 & Balanced \\ & & workers at announcement & 6 to 13 \\ \multicolumn{4}{l}{_Panel (a): Workers_} \\ \multicolumn{4}{l}{Average age at announcement} \\ \multicolumn{4}{l}{Share with tertiary education} \\ \multicolumn{4}{l}{Share male} \\ \multicolumn{4}{l}{Share married} \\ \multicolumn{4}{l}{Share with residence in Santiago Region} \\ \multicolumn{4}{l}{Share in female majority industry-county at announcement} \\ \multicolumn{4}{l}{Number of workers} \\ \multicolumn{4}{l}{_Panel (b): Firms_} \\ \multicolumn{4}{l}{Average number of workers at announcement} \\ \multicolumn{4}{l}{Share in Santiago Region} \\ \multicolumn{4}{l}{Share in Agriculture} \\ \multicolumn{4}{l}{Share in Manufacturing} \\ \multicolumn{4}{l}{Share in female majority industry-county at announcement} \\ \multicolumn{4}{l}{Number of firms} \\ \multicolumn{4}{l}{23,449} \\ \multicolumn{4}{l}{6,124} \\ \multicolumn{4}{l}{3,201} \\ \hline \hline \end{tabular} Notes: This table displays summary statistics for the different samples used in the paper. The unit of our panels is the worker-firm-month. In Panel (a), we display figures about the workers present in our data. In Panel (b), we display figures about the firms in our data. In column I, we display figures for the data without firm size restrictions or balancing. In column II, we display figures for the data in column I after restricting for firms that employed between 6 and 13 workers in June 2009. In column III, we display figures for the data in column II after further restricting for firms that are in our data in every month between January 2005 and December 2013. Column III is the sample we use in the majority of our analysis. The rows within each panel label the displayed variables. \end{table} Table 1: Descriptive Statistics in-differences model year by year, where we omit the year before policy announcement as the reference period, so that the set of years included is \(\mathcal{T}\!=\!\{2005,2006,2007,2009,...,2013\}\). By construction, each year (indexed by \(\tau\)) corresponds to twelve time periods (indexed by \(t\)). \[full_{jt}\!=\!\alpha_{j}\!+\!\alpha_{k(j)t}\!+\!\sum_{\tau\in\mathcal{T}}\!\beta _{\tau}^{seg}D_{jt}\!+\!X_{jt}\Lambda\!+\!\epsilon_{jt} \tag{6}\] where \(D_{jt}\) is an indicator that equals 1 in time period \(t\) if firm \(j\) employs at least 10 long-term workers at policy announcement, and zero otherwise. \(\beta_{\tau}^{seg}\) is the average difference in segregation between treated and control firms in year \(\tau\) (relative to 2008). Our identifying assumption is that parallel trends hold between treated and control firms. That is, \(\mathbb{E}[\epsilon_{jt}\!\cdot\!D_{jt}]\!=\!0\) for all \(t\). This strategy builds in a partial falsification test, in that we expect coefficient estimates of \(\beta_{\tau}^{seg}\) to be zero for all \(\tau\!<\!2009\). Table 2 presents estimates on the effect of EPSW on segregation. Column I presents our baseline results from (5). We find a 4.6 percentage point increase in segregation following EPSW, from a baseline of 31% of firms that were fully segregated at EPSW announcement. Columns II-VII present results on segregation from alternative empirical specifications and alternative sample selections. We discuss these specifications further in Appendix D.3. 
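Returning to the dynamic specification (6), the sketch below interacts the treatment indicator with year dummies, omitting 2008 as the reference year, and runs the joint pre-trend test reported in the table. It continues the illustrative `firm_month` frame from the previous sketch and again simplifies the clustering, so it is a sketch of the design rather than the authors' code.

```python
import statsmodels.formula.api as smf

YEARS = [2005, 2006, 2007, 2009, 2010, 2011, 2012, 2013]   # 2008 omitted

firm_month["year"] = firm_month["month"].dt.year
terms = []
for tau in YEARS:
    col = f"treat_x_{tau}"
    firm_month[col] = firm_month["above10"] * (firm_month["year"] == tau).astype(int)
    terms.append(col)

formula = ("full ~ " + " + ".join(terms)
           + " + share_young + share_tertiary + share_longterm"
           + " + C(firm_id) + C(cell_month)")
res6 = smf.ols(formula, data=firm_month).fit(
    cov_type="cluster", cov_kwds={"groups": firm_month["firm_id"]})

# beta^seg_tau by year, and the "no pre-trend" joint test on the 2005-2007 coefficients.
beta_tau = {tau: res6.params[f"treat_x_{tau}"] for tau in YEARS}
pretrend = res6.f_test("treat_x_2005 = 0, treat_x_2006 = 0, treat_x_2007 = 0")
print(beta_tau, pretrend.pvalue)
```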
Across all specifications, the increase in segregation due to EPSW is statistically significant at conventional levels. Additionally, we present estimates from (6) to support our parallel trends assumption. We show that the mean pre-treatment estimates are small and statistically insignificant, and we cannot reject an F-test that all pre-treatment coefficients are jointly zero. Figure 5 displays the estimated coefficients of interest from (6). Prior to EPSW, the coefficient of interest is statistically indistinguishable from 0 in all years \(\tau\!<\!2009\). In the first year of EPSW, segregation rises by 1.99 percentage points in the treated group compared to the control group (p-value = 0.135) and rises to 5.11 percentage points (p-value= 0.038) by year five of EPSW. To better understand how EPSW affects firm incentives to segregate, we consider its effect on the share of firms that are mostly-but-not-fully segregated. Our model makes strong predictions that firms bound by EPSW will have incentives to fully segregate in equilibrium, but not that there are particular incentives to partially segregate. Of course, our model does not capture every complexity present in the labor market, and there may be other barriers preventing some firms from fully segregating. Nevertheless, even with such complexities, the equilibrium channels we study in this paper suggest that treated firms are likely to fully segregate if doing so would present a small change to their workforce. That is, firms that would have otherwise had only a small number of workers of the "wrong" gender that prevent full segregation may be particularly likely to end their relationship with these workers to achieve full segregation. Therefore, the economic forces present in our model likely indicate a decrease in the share of treated firms that are almost-but-not-fully segregated after EPSW. We reanalyze our firm-based analysis with a different dependent variable: "near" segregation. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & (I) & (II) & (III) & (IV) & (V) & (VI) & (VII) \\ & Baseline & No firm & No & Unbalanced & Doughnut & Narrower & Wider \\ & & FEs & controls & sample & hole & band & band \\ \hline \((\hat{\beta}^{seg})\) Post \(\times\) Treated & 0.0455\({}^{***}\) & 0.0420\({}^{**}\) & 0.0368\({}^{**}\) & 0.0471\({}^{***}\) & 0.0431\({}^{**}\) & 0.0432\({}^{**}\) & 0.0651\({}^{***}\) \\ & (0.0171) & (0.0168) & (0.0171) & (0.0144) & (0.0199) & (0.0203) & (0.0157) \\ \hline Mean Pre-Treatment & -0.0044 & -0.0033 & -0.0004 & 0.0032 & -0.0002 & -0.0066 & -0.0051 \\ & (0.0140) & (0.0134) & (0.0141) & (0.0125) & (0.0189) & (0.0162) & (0.0129) \\ No Pre-Trend p-value & 0.3237 & 0.2556 & 0.6262 & 0.2009 & 0.7872 & 0.1485 & 0.3475 \\ \hline Number of Firms & 2,638 & 2,638 & 2,638 & 4,275 & 2,244 & 1,920 & 3,325 \\ Number of Observations & 284,904 & 284,904 & 284,904 & 418,049 & 242,352 & 207,360 & 359,100 \\ \hline \multicolumn{7}{l}{_Fixed effects_} \\ \multicolumn{7}{l}{Firm} & Yes & No & Yes & Yes & Yes & Yes & Yes \\ \multicolumn{7}{l}{Month \(\times\) industry \(\times\) county} & Yes & Yes & Yes & Yes & Yes & Yes \\ \multicolumn{7}{l}{Firm-month level controls} & Yes & Yes & No & Yes & Yes & Yes & Yes \\ \hline \hline \end{tabular} Notes: The unit of analysis is the firm-month and the dependent variable is a binary variable that indicates whether all workers at the firm in question are of a single gender in a given month. 
Column I presents estimated coefficient \(\hat{\beta}^{seg}\) for our baseline difference-in-differences regression presented in (5). Firm-month controls included are: the share of workers younger than the median age in the industry-region, the share of workers with tertiary education, and the share of workers that have long-term contracts. The mean pre-treatment effect is the mean of \(\hat{\beta}^{seg}_{r}\) for \(\tau\) (\(\in\) [2005,2006,2007]) calculated from (6), and the no pre-trend p-value is derived from a joint \(F\)-test that \(\beta^{seg}_{r}\) = 0 for all \(\tau\) (\(\in\) [2005,2006,2007]). Columns II-VII present the analogous information for alternative sample selections and empirical specifications as described in Appendix D.3: column II removes firm fixed effects, column III removes the vector of controls, column IV considers the sample of all firms that exist at announcement, column V drops firms with 9 or 10 workers at announcement from our baseline sample, column VI drops firms with 6 or 13 workers at announcement from our baseline sample, and column VII adds firms with 5 or 14 workers at announcement to our baseline sample. Throughout, standard errors in parentheses are two-way clustered at the firm and month levels. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively. \end{table} Table 2: Effect of EPSW on Segregation Figure 5: Dynamic Path of EPSW’s Effect on Segregation Specifically, we define a firm \(j\) to be nearly segregated at time \(t\) if the share of workers in the majority gender of its workforce is in the interval [0.8,1). Note that \(j\) is classified as "nearly" segregated at time \(t\) only if it is not fully segregated at time \(t\). We select 0.8 as the lower end of the interval for the definition of this outcome variable due to our size restrictions; firms in our sample typically are nearly segregated only in time periods in which they employ either 1 or 2 workers of the non-majority gender; however, our findings are robust to other selections of the lower end of the range. We re-estimate (5) and (6) with near segregation as the outcome variable and present these results in Table 3 and Figure 6. We refer to the associated coefficients of interest as \(\beta^{nearseg}\) and \(\beta^{nearseg}_{\tau}\), respectively. Table 3 presents evidence to support our hypothesis on the effects of EPSW on near segregation. Each column presents a specification corresponding to the same column in Table 2. Our baseline specification in column I reveals that EPSW lowers the share of nearly segregated firms 5.1 percentage points following EPSW. Columns II-VII present results on near segregation from alternative empirical specifications and alternative sample selections. We discuss these specifications further in Appendix D.3. Across all specifications, our findings of a decrease in near segregation due to EPSW are statistically significant at conventional levels. Note that these estimates are similar in magnitude to the increase in the share of fully segregated, treated firms following EPSW (see Table 2). 
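The "near segregation" outcome is straightforward to construct explicitly: the majority-gender share lies in \([0.8,1)\), so a fully segregated firm-month is excluded by definition. The sketch below continues the illustrative `firm_month` frame and simplified clustering of the earlier sketches.

```python
import numpy as np
import statsmodels.formula.api as smf

firm_month["majority_share"] = np.maximum(firm_month["share_male"],
                                          1 - firm_month["share_male"])
firm_month["near_seg"] = ((firm_month["majority_share"] >= 0.8)
                          & (firm_month["majority_share"] < 1.0)).astype(int)

# Equations (5) and (6) are then re-estimated with near_seg in place of full,
# e.g. for the static specification:
res_near = smf.ols(
    "near_seg ~ treat_post + share_young + share_tertiary + share_longterm"
    " + C(firm_id) + C(cell_month)",
    data=firm_month,
).fit(cov_type="cluster", cov_kwds={"groups": firm_month["firm_id"]})
```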
This "missing mass" of firms with near-but-not-full segregation suggests that firms that face relatively lower costs of fully segregating are those whose behavior most closely matches \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & (I) & (II) & (III) & (IV) & (V) & (VI) & (VII) \\ & Baseline & No firm & No & Unbalanced & Doughnut & Narrower & Wider \\ & & FEs & controls & sample & hole & band & band \\ \hline \((\hat{\beta}^{nearseg})\) Post \(\times\) Treated & -0.0505\({}^{***}\) & -0.0496\({}^{***}\) & -0.0418\({}^{**}\) & -0.0453\({}^{***}\) & -0.0414\({}^{*}\) & -0.0526\({}^{**}\) & -0.0796\({}^{***}\) \\ & (0.0179) & (0.0179) & (0.0180) & (0.0149) & (0.0230) & (0.0207) & (0.0169) \\ \hline Mean Pre-Treatment & -0.0155 & -0.0155 & -0.0197 & -0.0108 & -0.0538\({}^{***}\) & -0.0052 & -0.0105 \\ & (0.0148) & (0.0143) & (0.0147) & (0.0123) & (0.0199) & (0.0168) & (0.0137) \\ No Pre-Trend p-value & 0.2335 & 0.2641 & 0.2502 & 0.5456 & 0.0654 & 0.2477 & 0.3840 \\ \hline Number of Firms & 2,638 & 2,638 & 2,638 & 4,275 & 2,244 & 1,920 & 3,325 \\ Number of Observations & 284,904 & 284,904 & 284,904 & 418,049 & 242,352 & 207,360 & 359,100 \\ \hline \multicolumn{7}{l}{_Fixed effects_} \\ \multicolumn{7}{l}{Firm} & Yes & No & Yes & Yes & Yes & Yes & Yes \\ Month \(\times\) industry \(\times\) county & Yes & Yes & Yes & Yes & Yes & Yes \\ \multicolumn{7}{l}{Firm-month level controls} & Yes & Yes & No & Yes & Yes & Yes \\ \hline \hline \end{tabular} Notes: The unit of analysis is the firm-month and the dependent variable is “near” segregation, i.e. the dependent variable equals 1 if and only if the share of the majority gender of workers at firm \(j\) at time \(t\) is an element of [.8,1). Column I presents estimated coefficient \(\hat{\beta}^{nearseg}\) for our baseline difference-in-differences regression presented in (5). Firm-month controls included are the share of workers younger than the median age in the industry-region, the share of workers with tertiary education, and the share of workers that have long-term contacts. The mean pre-treatment effect is the mean of \(\hat{\beta}^{nearseg}_{\tau}\) for \(\tau\) \(\in\) [2005,2006,2007] calculated from (6), and the no pretends p-value is derived from a joint \(F\)-test that \(\hat{\beta}^{nearseg}_{\tau}\) = 0 for all \(\tau\) \(\in\) [2005,2006,2007]. Columns II-VII present the analogous information for alternative sample selections and empirical specifications as described in Appendix D.3: column II removes firm fixed effects, column III removes the vector of controls, column IV considers the sample of all firms that exist at announcement, column V drops firms with 9 or 10 workers at announcement from our baseline sample, column VI drops firms with 6 or 13 workers at announcement from our baseline sample, and column VII adds firms with 5 or 14 workers at announcement to our baseline sample. Throughout, standard errors in parentheses are two-way clustered at the firm and month levels. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively. \end{table} Table 3: Effect of EPSW on Near Segregation our theoretical predictions. Figure 6 shows the dynamic path of near segregation following EPSW. 
The pre-treatment coefficients are not statistically different from 0 in any year \(\tau\!<\!2009\); we note that pre-period point estimates are negative (the estimate in 2005, the first year of our panel, is -0.04 with a p-value of 0.11) which mitigates the magnitude of our coefficient estimate in Table 3. ### Effect of EPSW on the Gender Wage Gap Guided by our theoretical model, we are interested in studying the effect of EPSW on the wage gap between male and female workers. Our model predicts a relative benefit to men in male majority local labor markets and a relative benefit to women in female majority local labor markets. We define a local labor market as being female majority at time \(t\) if the share of female workers across all firms in a particular industry-county pair is at least 0.5, and otherwise, we define it as a male majority local labor market. To estimate the effects of EPSW on the gender wage gap across local labor markets, we consider a panel in which an observation is a worker-firm-month. In order to remove potential confounds from workers who are simultaneously enrolled in formal schooling and those beyond Chile's official retirement age, we only consider workers aged 22-65. We let \(i\) index a worker, \(j\) index a firm, and \(t\) index a month. Let \(w_{ijt}\) be the earnings of worker \(i\) at firm \(j\) in month \(t\). We construct an indicator, \(male_{i}\), that equals 1 if the worker is a male, and 0 otherwise. We construct indicator \(femalemaj_{jt}\) that equals 1 if firm \(j\) is in a female majority local labor market at time \(t\), and 0 otherwise. We estimate difference models of the following form: \[\begin{split}\text{ln}w_{ijt}&=\alpha_{i}+\omega_{ it}+\alpha_{j}+\alpha_{k(ij)t}+\gamma_{1}(above10_{j}\times post_{t})+\psi_{1}( above10_{j}\times post_{t}\times femalemaj_{jt})\\ &\quad\quad+\gamma_{2}(male_{i}\times post_{t})+\psi_{2}(male_{i }\times post_{t}\times femalemaj_{jt})\\ &\quad\quad+\gamma_{3}(above10_{j}\times male_{i})+\psi_{3}(above1 0_{j}\times male_{i}\times femalemaj_{jt})\\ &\quad\quad+\beta^{MgO}(above10_{j}\times male_{i}\times post_{t} )+\beta^{Fgap}(above10_{j}\times male_{i}\times post_{t}\times femalemaj_{jt}) \\ &\quad\quad+X_{ijt}\Lambda\!+\!\epsilon_{ijt}\end{split} \tag{7}\] Figure 6: Dynamic Path of EPSW’s Effect on Near Segregation where \(\alpha_{i}\) is a fixed effect for worker \(i\), \(\omega_{it}\) is worker \(i\)'s age at time \(t\), and \(X_{ijt}\) is a vector of firm-month level controls and worker-firm-month level controls. The firm-month level controls are the share of workers younger than the median age at the industry-region, share of workers with tertiary education, and the share of workers that have long-term contracts. The worker-firm-month level controls are the number of months at the firm and an indicator for whether the worker's earnings reach the top-coding threshold. \(\alpha_{k(ij)t}\) are time fixed effects for workers in set \(k(ij)\), where \(k(ij)\) is a comparison group of workers to \(i\) employed at a comparison group of firms to \(j\). Worker comparison groups are defined by equivalence across three binary dimensions at time \(t\) at firm \(j\): an indicator for tertiary education, an indicator for long-term versus fixed-term contract, and an indicator for being above median age in the particular industry-region in which firm \(j\) operates. 
Firm comparison groups are defined by firms in the same industry-county in which firm \(j\) operates (we provide results from additional empirical specifications which consider time trends at more aggregated levels in Appendix D.2). Our coefficients of interest are \(\beta^{Mgap}\) and \(\beta^{Fgap}\). We interpret \(\beta^{Mgap}\) as the effect of the policy on the (percentage) wage gap between male and female workers in male majority labor markets. We interpret \(\beta^{Mgap}+\beta^{Fgap}\) as the effect of the policy on the (percentage) wage gap between male and female workers in female majority labor markets. To understand more about the dynamic effects of EPSW, we estimate the following triple difference model year by year, where we omit the year before policy announcement as the reference period, so that the set of years included is \(\mathcal{T}\!=\!\{2005,2006,2007,2009,...,2013\}\). By construction, each year (indexed by \(\tau\)) corresponds to twelve time periods (indexed by \(t\)). Let \(year_{\tau}\) be an indicator that equals 1 in year \(\tau\), and zero otherwise. \[\begin{split}\mathrm{ln}w_{ijt}&=\alpha_{i}+\omega_{it}+\alpha_{j}+\alpha_{k(ij)t}+\sum_{\tau\in\mathcal{T}}\gamma_{1\tau}(above10_{j}\times year_{\tau})+\sum_{\tau\in\mathcal{T}}\psi_{1\tau}(above10_{j}\times femalemaj_{jt}\times year_{\tau})\\ &\quad\quad+\sum_{\tau\in\mathcal{T}}\gamma_{2\tau}(male_{i}\times year_{\tau})+\sum_{\tau\in\mathcal{T}}\psi_{2\tau}(male_{i}\times femalemaj_{jt}\times year_{\tau})\\ &\quad\quad+\gamma_{3}(above10_{j}\times male_{i})+\psi_{3}(above10_{j}\times male_{i}\times femalemaj_{jt})\\ &\quad\quad+\sum_{\tau\in\mathcal{T}}\beta^{Mgap}_{\tau}D^{M}_{ijt}+\sum_{\tau\in\mathcal{T}}\beta^{Fgap}_{\tau}D^{F}_{ijt}\\ &\quad\quad+X_{ijt}\Lambda+\epsilon_{ijt}\end{split} \tag{8}\] where \(D^{M}_{ijt}\) is an indicator that equals 1 in time period \(t\) if firm \(j\) employs at least 10 long-term workers at the time of policy announcement, \(j\)'s local labor market is coded as male majority in time \(t\), and \(i\) is male, and zero otherwise. \(\beta^{Mgap}_{\tau}\) is the average difference in log wages between men and women in treated versus control firms in year \(\tau\) (relative to 2008) in male majority local labor markets. Similarly, \(D^{F}_{ijt}\) is an indicator that equals 1 in time period \(t\) if firm \(j\) employs at least 10 long-term workers at the time of policy announcement, \(j\)'s local labor market is coded as female majority in time \(t\), and \(i\) is male, and zero otherwise. \(\beta^{Fgap}_{\tau}\) is the average difference in log wages between men and women in treated versus control firms in year \(\tau\) (relative to 2008) in female majority labor markets _relative to male majority labor markets_. Therefore, \(\beta^{Mgap}_{\tau}+\beta^{Fgap}_{\tau}\) is the average difference in log wages between men and women in treated versus control firms in year \(\tau\) (relative to 2008) in female majority labor markets. Our identifying assumption is that parallel trends hold between treated and control firms, that is, \(\mathbb{E}[\epsilon_{ijt}\cdot D^{M}_{ijt}]=0\) and \(\mathbb{E}[\epsilon_{ijt}\cdot(D^{M}_{ijt}+D^{F}_{ijt})]=0\) for all \(t\) (Olden and Moen, 2022). This strategy builds in a partial falsification test, in that we expect coefficient estimates of \(\beta^{Mgap}_{\tau}\) and \(\beta^{Mgap}_{\tau}+\beta^{Fgap}_{\tau}\) to be zero for \(\tau<2009\). Table 4 presents our estimates on the effect of EPSW on the gender wage gap. Column I presents our baseline results from (7). 
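Before turning to the estimates, a sketch of how the regressors in (7) and (8) can be constructed on the worker-firm-month panel. All names (`worker_panel`, `femalemaj`, and so on) are illustrative placeholders; the worker, firm, age, and comparison-group-by-month fixed effects and the two-way clustered standard errors would be handled by a high-dimensional fixed-effects estimator, and reading the year-by-year terms in (8) as interactions of \(D^{M}_{ijt}\) and \(D^{F}_{ijt}\) with year dummies is our interpretation of the notation.

```python
import numpy as np
import pandas as pd

ANNOUNCEMENT = pd.Period("2009-06", freq="M")
YEARS = [2005, 2006, 2007, 2009, 2010, 2011, 2012, 2013]   # 2008 omitted

w = worker_panel.copy()                      # placeholder worker-firm-month frame
w = w[(w["age"] >= 22) & (w["age"] <= 65)]   # drop schooling / retirement ages
w["log_wage"] = np.log(w["wage"])
w["post"] = (w["month"] >= ANNOUNCEMENT).astype(int)

# Interactions appearing in (7); femalemaj = 1 if the firm's industry-county cell
# has a female worker share of at least 0.5 in month t.
w["treat_post"] = w["above10"] * w["post"]
w["treat_post_femmaj"] = w["treat_post"] * w["femalemaj"]
w["male_post"] = w["male"] * w["post"]
w["male_post_femmaj"] = w["male_post"] * w["femalemaj"]
w["treat_male"] = w["above10"] * w["male"]
w["treat_male_femmaj"] = w["treat_male"] * w["femalemaj"]
w["treat_male_post"] = w["above10"] * w["male"] * w["post"]          # beta^Mgap
w["treat_male_post_femmaj"] = w["treat_male_post"] * w["femalemaj"]  # beta^Fgap

# Indicators for the dynamic specification (8), interacted with year dummies.
w["D_M"] = w["above10"] * (1 - w["femalemaj"]) * w["male"]   # male-majority market
w["D_F"] = w["above10"] * w["femalemaj"] * w["male"]         # female-majority market
for tau in YEARS:
    in_year = (w["month"].dt.year == tau).astype(int)
    w[f"D_M_{tau}"] = w["D_M"] * in_year
    w[f"D_F_{tau}"] = w["D_F"] * in_year
# Pre-trend checks are joint F-tests that the 2005-2007 coefficients on the D_M terms
# (and on D_M + D_F) are zero; the effect in female-majority markets is read off
# as beta^Mgap + beta^Fgap.
```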
We find that EPSW increases the gender wage gap (in favor of men) by 3.8 percentage points in male majority labor markets, but decreases the gender wage gap (in favor of women) by 5.2 percentage points in female majority labor markets. For reference, the within-firm wage gap in favor of men, averaged across firms, is 35.8% at EPSW announcement among firms in our sample that employ both male and female workers. Columns II-VII present results on the gender wage gap from alternative empirical specifications and alternative sample selections. We discuss these specifications further in Appendix D.3. Across all specifications, our findings of an increase in the gender wage gap in male majority labor markets, and a decrease in the gender wage gap in female majority labor markets, are statistically significant at conventional levels. Additionally, we present estimates from (8) to support our parallel trends assumption. The mean pre-treatment estimates are small and statistically insignificant, and a joint F-test cannot reject that all pre-treatment coefficients are zero. Figure 7 displays the estimated coefficients of interest from (8). Panel (a) presents estimates for male majority labor markets. The coefficient of interest is statistically indistinguishable from 0 in all years \(\tau\!<\!2009\). In the first year of EPSW, the wage gap rises (in favor of men) by 2.02 percentage points in the treated group compared to the control group (p-value = 0.123) and rises to 4.45 percentage points (p-value = 0.030) by year five of EPSW. Panel (b) presents estimates for female majority labor markets. The coefficient of interest is statistically indistinguishable from 0 in all years \(\tau\!<\!2009\). In the first year of EPSW, the wage gap falls (in favor of women) by 1.18 percentage points in the treated group compared to the control group (p-value = 0.583) and falls to 6.59 percentage points (p-value = 0.033) by year five of EPSW. One potentially interesting policy-related question is the overall effect of EPSW on the gender wage gap across all industries. Given our findings that EPSW relatively benefits the majority group of workers in a local labor market, and the fact that the vast majority of workers are employed in male majority local labor markets, we expect EPSW to increase the wage gap in favor of men (although by a smaller magnitude than the increase in male majority labor markets). In Appendix D.4, we formally test this hypothesis using a triple difference model. 
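As an aside on the descriptive benchmark, the 35.8% within-firm gap quoted above can be computed as follows: at announcement, take the male-female difference in mean log wages within each firm that employs both genders and average across those firms. The sketch continues the illustrative frame from the previous sketch, and the log-wage difference is only an approximation to a percentage gap.

```python
at_ann = w[w["month"] == ANNOUNCEMENT]
by_gender = (at_ann.groupby(["firm_id", "male"])["log_wage"].mean()
                   .unstack("male")
                   .dropna())            # keeps firms employing both men and women
within_firm_gap = (by_gender[1] - by_gender[0]).mean()
print(within_firm_gap)
```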
We find that Figure 7: Dynamic Path of EPSW’s Effect on Gender Wage Gap, by Majority Worker Group \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline & (I) & (II) & (III) & (IV) & (V) & (VI) & (VII) \\ & Baseline & No firm & No & Unbalanced & Doughnut & Narrower & Wider \\ & & FEs & controls & sample & hole & band & band \\ \hline \((\hat{\beta}^{Mgap})\) Treated \(\times\) Male \(\times\) Post & 0.0378\({}^{***}\) & 0.0340\({}^{**}\) & 0.0387\({}^{***}\) & 0.0273\({}^{**}\) & 0.0494\({}^{**}\) & 0.0266\({}^{*}\) & 0.0171 \\ & (0.0143) & (0.0138) & (0.0144) & (0.0113) & (0.0188) & (0.0152) & (0.0125) \\ \((\hat{\beta}^{Fgap})\) Treated \(\times\) Male \(\times\) Post & -0.0902\({}^{***}\) & -0.0852\({}^{***}\) & -0.0934\({}^{***}\) & -0.0890\({}^{***}\) & -0.109\({}^{***}\) & -0.0750\({}^{***}\) & -0.0552\({}^{**}\) \\ \(\times\) Female Majority Labor Market & (0.0233) & (0.0233) & (0.0234) & (0.0197) & (0.0298) & (0.0282) & (0.0226) \\ \((\hat{\beta}^{Mgap}+\hat{\beta}^{Fgap})\) Effect in Female & -0.0524\({}^{**}\) & -0.0512\({}^{**}\) & -0.0547\({}^{**}\) & -0.0618\({}^{***}\) & -0.0600\({}^{**}\) & -0.0484\({}^{*}\) & -0.0381\({}^{*}\) \\ Majority Labor Market & (0.0217) & (0.0215) & (0.0216) & (0.0185) & (0.0286) & (0.0265) & (0.0210) \\ \hline Mean Pre-Treatment & -0.0069 & -0.0039 & -0.0058 & -0.0177\({}^{*}\) & -0.0017 & -0.0253\({}^{*}\) & -0.0133 \\ (Male Majority Labor Market) & (0.0126) & (0.0118) & (0.0129) & (0.0107) & (0.0170) & (0.0145) & (0.0122) \\ No Pre-Trend p-value & 0.8285 & 0.8874 & 0.8361 & 0.1460 & 0.9369 & 0.2988 & 0.6646 \\ (Male Majority Labor Market) & (0.0125) & 0.0189 & 0.0104 & 0.0129 & -0.0065 & -0.0023 & 0.0009 \\ (Female Majority Labor Market) & (0.0253) & (0.0257) & (0.0254) & (0.0226) & (0.0276) & (0.0320) & (0.0215) \\ No Pre-Trend p-value & 0.8427 & 0.6893 & 0.8480 & 0.8645 & 0.9228 & 0.3435 & 0.4014 \\ (Female Majority Labor Market) & & & & & & & \\ \hline Number of Firms & 3,168 & 3,168 & 3,168 & 6,060 & 2,760 & 2,398 & 3,867 \\ Number of Observations & 3,333,272 & 3,333,272 & 3,333,272 & 5,321,733 & 2,839,244 & 2,634,618 & 3,982,352 \\ \hline \hline \multicolumn{7}{l}{_Fixed effects_} \\ \hline Firm & Yes & No & Yes & Yes & Yes & Yes & Yes \\ Worker & Yes & Yes & Yes & Yes & Yes & Yes \\ Worker age & Yes & Yes & Yes & Yes & Yes & Yes \\ Month \(\times\) industry \(\times\) county & Yes & Yes & Yes & Yes & Yes & Yes \\ \(\times\) tertiary education & & & & & & \\ \(\times\) contract type \(\times\) worker age & & & & & & \\ Firm-month level controls & Yes & Yes & No & Yes & Yes & Yes & Yes \\ Worker-month level controls & Yes & Yes & No & Yes & Yes & Yes & Yes \\ \hline \hline \end{tabular} Notes: The unit of analysis is the worker-firm-month and the dependent variable is the natural logarithm of the worker’s wage at the firm in a given month. Column I presents estimated coefficients \(\hat{\beta}^{Mgap}\) and \(\hat{\beta}^{Fgap}\) for our baseline regression specification presented in (7). Time-varying fixed effects are defined as the intersection of: the firm’s industry, the firm’s county, an indicator for worker tertiary education, an indicator for worker contract type, an indicator for a worker being above median age in the industry-region. Firm-month controls included are: the share of workers younger than the median age in the industry-region, the share of workers with tertiary education, and the share of workers that have long-term contracts. 
Worker-firm-month levels controls included are the number of months at the firm and an indicator for reaching the earnings truncation threshold. The mean pre-treatment effects are the mean of \(\hat{\beta}^{Mgap}_{r}\) and \(\hat{\beta}^{Mgap}_{r}+\hat{\beta}^{Fgap}_{r}\), respectively, for \(r\)\(\in\) {2005,206,2007} calculated from (8), and the no pret pretends p-values are derived from joint \(F\)-tests that \(\beta^{Mgap}_{r}\)\(=\) 0 and \(\beta^{Mgap}_{r}\)\(+\hat{\beta}^{Fgap}_{r}\)\(=\) 0, respectively, for all \(r\)\(\in\) {2005,2006,2007}. Columns II-VII present the analogous information for alternative sample selections and empirical specifications as described in Appendix D.3: column II removes firm fixed effects, column III removes the vector of controls, column IV considers the sample of all firms that exist at announcement, column V drops firms with 9 or 10 workers at announcement from our baseline sample, column VI drops firms with 6 or 13 workers at announcement from our baseline sample, and column VII adds firms with 5 or 14 workers at announcement to our baseline sample. Throughout, standard errors in parentheses are two-way clustered at the firm and month levels. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% levels, respectively. \end{table} Table 4: Effect of EPSW on Gender Wage Gap, by Majority Worker Group EPSW increases the overall gender wage gap (in favor of men) by 2.6 percentage points, which is statistically significant at conventional levels. ## 6 Conclusion We find that the equilibrium effects of Equal Pay for Similar Work (EPSW) policies dominate their direct effects. Therefore, imposing these policies may lead to unintended outcomes. Our model demonstrates that EPSW targeted specifically to equalize pay across protected classes of workers leads to firms segregating their workforce in equilibrium to avoid the bite of the policy. Although discriminatory forces may lead to pay gaps across groups without EPSW, segregation caused by EPSW results in the minority group of workers in a labor market receiving even lower relative wages. Our empirical evaluation of Chile's 2009 EPSW--which prohibited unequal pay for similar work across genders--supports these predictions. The policy caused gender segregation within firm to rise, and the gender wage gap to rise. Importantly, the rise in the wage gap only occurred in male-majority local labor markets; in local labor markets with majority female workers, the wage gap closed. Both of these findings are as predicted by our theoretical analysis. However, our model reveals how a change to EPSW can close wage gaps: removing clauses about protected classes of workers. Once they are removed, firms must pay all "similar" workers the same wage, regardless of group identity. Therefore, firms no longer have incentives to segregate their workforce by group identity in equilibrium. Such non-group-based EPSW can close wage gaps across groups. Additional design choices, such as imposing group employment quotas (Bertrand et al., 2019) or drastically _increasing_ the number of protected classes can serve a similar purpose.22 Footnote 22: Many current US state EPSW policies define group identity as the intersection of: race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, disability, or genetic information. 
Even taking a lower bound and assuming each of these is a binary characteristic leads to \(2^{11}\!=\!2048\) distinct protected classes; because the number of workers in any single class is likely quite small within a given local labor market, this removes the incentive for firms to specialize to one class. Many important questions remain in understanding the equilibrium impacts of EPSW, and equity-related labor market policies in general. One difficulty is understanding the role of complementary policies; a benefit of studying the Chilean labor market is the relative dearth of alternative anti-discrimination policies prior to and contemporaneously with the enactment of EPSW. For this reason, further theoretical study of labor-market policies may be particularly fruitful. One particular avenue for further research is understanding firm incentives to heterogenize on-the-job responsibilities and duties in order to evade EPSW.
2306.14418
Context-Encoded Code Change Representation for Automated Commit Message Generation
Changes in source code are an inevitable part of software development. They are the results of indispensable activities such as fixing bugs or improving functionality. Descriptions for code changes (commit messages) help people better understand the changes. However, due to a lack of motivation and time pressure, writing high-quality commit messages remains reluctantly considered. Several methods have been proposed with the aim of automated commit message generation. However, the existing methods are still limited because they only utilise either the changed code or the changed code combined with surrounding statements. This paper proposes a method to represent code changes by combining the changed code and the unchanged code which have program dependence on the changed code. This method overcomes the limitations of current representations while improving the performance of 5/6 of state-of-the-art commit message generation methods by up to 15% in METEOR, 14% in ROUGE-L, and 10% in BLEU-4.
Thanh Trong Vu, Thanh-Dat Do, Hieu Dinh Vo
2023-06-26T04:48:14Z
http://arxiv.org/abs/2306.14418v1
# Context-Encoded Code Change Representation for ###### Abstract Changes in source code are an inevitable part of software development. They are the results of indispensable activities such as fixing bugs or improving functionality. Descriptions for code changes (commit messages) help people better understand the changes. However, due to a lack of motivation and time pressure, writing high-quality commit messages remains reluctantly considered. Several methods have been proposed with the aim of automated commit message generation. However, the existing methods are still limited because they only utilise either the changed code or the changed code combined with surrounding statements. This paper proposes a method to represent code changes by combining the changed code and the unchanged code which have program dependence on the changed code. This method overcomes the limitations of current representations while improving the performance of 5/6 of state-of-the-art commit message generation methods by up to 15% in METEOR, 14% in ROUGE-L, and 10% in BLEU-4. code changes representation, automated commit message generation, program dependence, program slices. ## 1 Introduction Changes in source code are inevitable activities during the life cycle of software applications. Changes are made in order to fix bugs and improve functionality. Along with changes in source code, writing commit messages, which describe the changes, is an essential part of the software development process [1, 2]. Good commit messages help the code reviewers and maintainers easily understand the changes and the rationales behind them [3, 4, 5, 6]. Although commit messages bring many benefits, in many cases, they are overlooked. This issue is affirmed by a survey conducted by Tian _et. al._[5] on 1,600 randomly selected commit messages from five big open-source software projects. The results indicated that up to 44% of the commit messages lacked crucial information regarding the nature of the code changes and the reasons for their implementation. This means that the current commit messages contain insufficient necessary information to understand the code changes. Also according to Tian _et. al._, a high-quality commit message must have two complete parts describing what has changed and why that change has occurred. Recently, several automated commit message generation methods have been proposed. Among these works, several studies rely on pre-defined rules or patterns [7, 8, 9]. Some studies apply information retrieval techniques to reuse commit messages for similar source code changes [10, 11, 12, 13]. Recently, Seq2Seq-based neural network models have been introduced to understand code changes and create high-quality commit messages [14, 15, 16, 17, 18, 19]. Although these approaches show promising results, they still have limitations which originate from the understanding and the representation of the code changes. Some of these methods represent code changes using only the changed code, which includes the added and removed statements. Because statements in changed code usually interact with the statements in unchanged code, it is unlikely to have high-quality commit messages if only changed code is used for message generation. Some other methods try to overcome this limitation by exploiting both the changed code and its surrounding statements. However, a statement positioned right before/after the changed code does not necessarily have a semantic relationship with the changed code. 
Using surrounding statements which have no semantic relationship with the changed code to generate the commit message should result in a low-quality one. To have high-quality commit messages, we need to consider the changed code and the part of the code that has a semantic relationship with it. This paper proposes a context-based representation for code changes. The proposed representation aims at providing information about changes in the program through statements that impact or are impacted by the changed code. Specifically, this representation is created by combining changed codes and unchanged codes that have program dependences on the changed code. In addition, the proposed representation method also can be easily integrated with the current generation commit message methods. Our experiments on a dataset with 31,517 commits collected from 160 popular open source projects show that the proposed representation method can improve 5 out of 6 state-of-the-art methods for automated commit message generation. In particular, with the new change representation, the performance of these methods can be improved by up to 15% in METEOR, 14% in ROUGE-L, and 10% in BLEU-4. In brief, this paper makes the following contributions: 1. _Context-Encoded Representation_: A novel context-based code changes representation with substantial information about the changes. 2. An extensive experimental evaluation showing the advantages of _Context-Encoded Representation_ over the state-of-the-art code change representations. 3. A public dataset of 30K+ commits collected from 160 real-world projects, which can be used as a benchmark for evaluating related works. The rest of this paper is organized as follows. Section 2 provides the foundational knowledge about automated commit messages generation including an ex ample illustrating our motivation and approach. Section 3 delineates the steps for the representation of code changes. Section 4 describes our evaluation methodology for the proposed code change representation. Section 5 presents and analyses the experiment results. We also discuss threats to validity of our work. The related works are in Section 6, before the conclusion. ## 2 Background and Motivation This section presents the background knowledge of the automated commit message generation process. Subsequently, an exemplification of code changes accompanied by a corresponding commit message is provided. This example is used to illustrate our motivation and main ideals for representation of code changes. ### Automated Commit Message Generation Figure 1 presents the main steps in the process of automated commit message generation. First, for each change, two source code versions exist, including the versions before and after the change. Not all parts of source code is related to the change. Therefore, these two source code versions will be processed and transformed to create an intermediate change representation. At this step, existing researches may use either only changed code [12, 15, 17, 20] or both changed code and surrounding statements [11, 13, 14, 16, 21, 22]. Subsequently, this intermediate change representation is transformed into an embedded vector representing the change's semantics. Various embedding approaches have been applied in existing researches. Specifically, while Liu _et. al._ utilized the "Bags of words" model [23] in NNGen [11], recent studies have employed neural networks [12, 14, 16, 17, 21] and even pre-trained models [15, 22]. 
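As a concrete illustration of this embedding step combined with the retrieval-based generation step described next, the sketch below embeds change representations with a bag-of-words model and reuses the message of the most similar training change, in the spirit of NNGen. It is a simplified illustration with placeholder toy data, not the authors' implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder training data: textual change representations and their messages.
train_diffs = ["- if (mTextState != null) { ... }", "+ return lineCount ;"]
train_msgs = ["Move null check before event handler", "Return cached line count"]

vectorizer = CountVectorizer()
train_vecs = vectorizer.fit_transform(train_diffs)

def retrieve_message(query_diff: str) -> str:
    """Reuse the commit message of the most similar training change."""
    sims = cosine_similarity(vectorizer.transform([query_diff]), train_vecs)[0]
    return train_msgs[int(sims.argmax())]
```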
At the last step, commit messages are generated by using information retrieval techniques [11, 12] or, recently, neural networks [14, 15, 16, 17, 21, 22]. Given a code change whose commit message needs to be generated, the approaches based on information retrieval techniques manage to find the most similar code change in the database. The generated commit message is the commit message of the most similar code change [11, 12]. Meanwhile, approaches using neural networks [14, 15, 16, 17, 21, 22] take the embedded vectors as input for a machine translation process which returns the target commit message. Figure 1: Typical process for automated commit message generation ### Motivating Example Figure 2 presents an example of a commit message for a simple change in the source code of the open-source project Litho a. With this code change, if we only consider the changed code (as the existing studies [12, 15, 17, 20, 22]), what we obtain is the information that the added code is identical to the removed code. In fact, the changed statements may interact with unchanged parts of the program. Therefore, using only the changed code may not fully represent the change and may lead to difficulties in understanding the actual meaning of the change. Moreover, the changed code may be similar between commits, but their purpose and meaning are entirely different because they are combined with different unchanged codes [24]. Therefore, to accurately represent a code change, considering only the changed code seems not enough. To address the issue of insufficient information about changes when relying solely on changed code, people may additionally use the statements surrounding the changed code [11, 13, 14, 16, 21, 22]. Specifically, this approach takes a pre-defined number n of statements before and after the changed code. In the example, if n is 3, the surrounding statements help to understand that the change moves the block if (mTextState!= null) from behind to the front of the block if (mTextChangedEventHandler!= null). However, surrounding statements also may contain undesirable ones. For example, the assignment int lineCount = getLineCount() and statement if (mLineCount!= UWMEASURED_LINE_COUNT have nothing to do with the changed code but still included as its surrounding. These statements may significantly reduce the performance of commit message generation methods. Therefore, to aim at generating high-quality commit messages, it is necessary Figure 2: An example of a code change and its commit message. to accurately understand the meaning of the change in the source code. This can be achieved by considering the changed code and the unchanged code which is the context of the changed code. ## 3 Context-Encoded Code Change Representation In this section, we present the process of building code change representation named _Context-Encoded Representation_, which combines the changed code and statements that are dependent on the changed code. Figure 3 shows the main steps of our method including the construction of program dependence graph, program slice extraction, and context-encoded representation construction. ### Program Dependence Graph Construction In this study, to represent the dependencies between statements in the source code, we constructed a Program Dependence Graph (PDG). In a PDG, each node represents a statement while edges show relationships between statements. As depicted in Figure 4, the graph effectively visualizes both data and control relationships between statements. 
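The sketch below mocks such a graph with networkx and shows the two operations described in the next subsections: a backward/forward slice around the changed statements and the merged representation with '+'/'-' markers. The node ids and edge kinds are mocked for illustration (the paper obtains real graphs from Joern), and the merge uses difflib as a rough approximation of the construction in Section 3.3.

```python
import difflib
import networkx as nx

# Mock PDG: nodes are statement ids, edge attribute 'kind' is 'data' or 'control'.
pdg = nx.DiGraph()
pdg.add_edge(0, 9, kind="data")       # statement 9 uses a variable defined at 0
pdg.add_edge(9, 13, kind="data")
pdg.add_edge(10, 13, kind="data")
pdg.add_edge(13, 14, kind="control")  # statement 13 guards the execution of 14

def program_slice(graph: nx.DiGraph, changed: set, depth: int = 3) -> set:
    """Backward + forward slice: statements within `depth` dependence edges
    of any changed statement, ignoring edge direction."""
    undirected = graph.to_undirected(as_view=True)
    keep = set(changed)
    for node in changed:
        keep.update(nx.single_source_shortest_path_length(undirected, node, cutoff=depth))
    return keep

print(sorted(program_slice(pdg, {13}, depth=1)))   # [9, 10, 13, 14]
print(sorted(program_slice(pdg, {13}, depth=2)))   # [0, 9, 10, 13, 14]

def context_encoded(before_slice_lines, after_slice_lines):
    """Merge the sliced statements of both versions, marking removed lines with '-'
    and added lines with '+' while keeping shared context lines."""
    return [line for line in difflib.ndiff(before_slice_lines, after_slice_lines)
            if not line.startswith("?")]
```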
Control dependence exists between two statements if one potentially prevents the execution of the other. Data dependence occurs when two statements declare, use, or reference the same variable. ### Program Slice Extraction From the program dependence graph, statements that have program dependences on the changed statements are extracted. Specifically, the proposed method extracts statements that have data dependences and control dependences on the changed code. In particular, we apply both backward and forward slicing. Furthermore, the inter-procedural slicing technique [25] is also applied to ensure that statements outside the function will also be used to represent changes in the source code. Figure 3: Main steps in constructing a context-encoded representation for a code change. ### Context-Encoded Representation Construction Figure 5 illustrates an example of combining program slices of the source code versions before and after the change to build the corresponding context-encoded representation of the change. Specifically, the context-encoded representation is created by combining added statements, removed statements, and unchanged codes that have program dependences on the changed code. In particular, unchanged statements that are the same between the two program slices will be merged. When combining the before and after versions of a change, the order of statements in the source code is preserved. Furthermore, changed statements are also marked to distinguish them from unchanged codes. Added statements are preceded by a '+' character while the corresponding character for removed statements is '-'. In particular, context-encoded representations are formatted as a sequence of statements that have program dependences on each other. This format is compatible with the representation of the existing methods for automated commit message generation (but different in content). This allows _Context-Encoded Representation_ easily be integrated with the existing methods. ## 4 Evaluation Methodology To evaluate our approach, we seek to answer the following research questions: **RQ1: Performance Improvement.** How does _Context-Encoded Representation_ improve the performance of the state-of-the-art methods for automated commit message generation [11, 12, 26, 27, 15, 22]? **RQ2: Context Extraction Analysis.** How important are the dependences in _Context-Encoded Representation_ regarding its performance? **RQ3: Dependency Depth Analysis.** How does the depth of dependences in Figure 4: An example of a program dependence graph _Context-Encoded Representation_ impact its performance? **RQ4: Changes Complexity Analysis.** How does changed code's complexity affect _Context-Encoded Representation_? ### Metrics The metrics used to evaluate the proposed method include BLEU-4, METEOR, and ROUGE-L, which are commonly used in machine translation researches [13, 17, 20]. In addition, each metric represents a different aspect of evaluating text quality. BLEU-4 evaluates text quality up to 4 grams using uniform weights. Meanwhile, ROUGE-L evaluates text quality based on the longest common subsequence. Finally, METEOR evaluates the quality of generated text based on the harmonic mean of Precision and Recall of the unigram and their stemming and synonymy. ### Dataset The existing datasets [11] only provide information about the commit message along with the changed code and surrounding statements. This makes these datasets unsuitable for our evaluation. 
Therefore, we built a dataset by collecting all open-source Java projects which have at least 1,000 stars on Github (160 projects). Commits of these projects were then processed and filtered to meet the following criteria: having a sufficiently large change commit message length, being grammatically correct, and being well-evaluated by Yingchen's model [5]. Finally, 31,517 quality commits are retained (Table 1). Similar to previously published datasets [11, 12, 14], for each commit, we only consider the first sentence which usually summarises the content of the whole message. After that, the commit message is further cleansed by removing unique elements such as Issue ID, commit ID, and URL. Figure 5: An example of building _Context-Encoded Representation_ In the next step, any commits with too many changed statements will be removed (more than 20 changes). Most of these changes are "merger" or "rollback" commits. In addition, commit messages that are too short (less than five words) are also removed because they are usually meaningless. Messages larger than 150 words are also dismissed. After that, the commit messages are checked for the'verb direct object' grammatical structure. Finally, they are fed into the deep learning model proposed by Yingchen and colleagues [5] to classify whether they are good or not. Only good commit messages are retained. ### Experimental Setup For data collection, we use Pydriller library2. For analysing program dependencies, Joern [28] is used. All experiments were performed on a server with an Intel Xeon (2) @ 2.00GHz CPU, 16 GB RAM, and an NVIDIA Tesla P100 PCIe 16GB GPU running Ubuntu 20.04.4 LTS x86_64. Footnote 2: [https://pydriller.readthedocs.io](https://pydriller.readthedocs.io) The maximum commit message size is 150 words, and the input size for generating commit messages is 512. Pre-trained models used in experiments are CodeT5, UniXcoder, and CodeBERT. Specifically for the pre-trained CodeT5 model, batch_size, learning_rate, and the number of epochs are 8, 5e-5, and 10, respectively. The pre-trained model initializing weights are Salesforce/codet5-base. For the pre-trained UniXcoder model, batch_size, learning_rate, and epoch are 12, 5e-5, and 10, respectively. The pre-trained model initializing weights are microsoft/UniXcoder-base. For other methods such as CommitBART [22], CommitBERT [15], NNGen [11], and CC2Vec [12], the parameters are set with the default values. ## 5 Experimental Results ### Performance Improvement (RQ1) To evaluate the impact of _Context-Encoded Representation_ on the performance of state-of-the-art methods for automated commit message generation, we experiment \begin{table} \begin{tabular}{l c c c} \hline \hline & **\#Commit** & **\#Changed** & **\#Changed/Commit** \\ \hline **GrasalVM** & 1,763 & 12,549 & 7.12 \\ **Bagel** & 1,493 & 10,691 & 7.16 \\ **Buck** & 1,155 & 8,410 & 7.28 \\ **Loom** & 1,109 & 7,533 & 6.79 \\ **Toncat** & 1,050 & 7,295 & 6.95 \\ \hline \hline **Total** & 31,517 & 219,673 & 6.97 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics of the dataset with two scenarios: using the existing methods as they are and equipping the existing methods with _Context-Encoded Representation_. The experiment involves different commit message generation methods, including methods based on information retrieval techniques such as NNGen [11] and CC2Vec [12] and methods based on neural network such as CommitBERT [15] and CommitBART [22]. 
We also investigate the impact of _Context-Encoded Representation_ on performance of pre-trained models such as CodeT5 [27] and UniXcoder [26] in generating commit messages for code changes. Table 2 describes the performance of automated commit message generation methods in two cases: the original version and the version with _Context-Encoded Representation_. In general, _Context-Encoded Representation_ improves 5 out of 6 existing methods. The best performance improvement is with methods using pre-trained models. In particular, when applying _Context-Encoded Representation_, for CommitBERT [15], its performance is improved by up to 15% in the METEOR metrics, 14% in ROUGE-L, and 10% in BLEU-4 metrics. Those figures for Commit-BART [22] are 5%, 13%, and 9%, respectively. The performance of these methods is improved because when using _Context-Encoded Representation_, pre-trained models can grasp the meaning of the change from program-dependent statements. We can see that _Context-Encoded Representation_ does not improve the performance of CC2Vec [12] and NNGen [11]. These methods are based on information retrieval techniques. NNGen uses the "bags of words" model for embedding intermediate change representations.Consequently, this method cannot represent the relationships between words and cannot utilize the program dependences provided by _Context-Encoded Representation_. CC2Vec treats the added and removed source code separately before concatenating their embedded vectors into a single one representing changes in the source code. Therefore, when supplemented with unchanged code from _Context-Encoded Representation_, this method cannot exploit the program dependences. In addition, due to the characteristics of information retrieval techniques, generated commit messages are actually taken from a pre-defined set of \begin{table} \begin{tabular}{l l r r r} \hline \hline & **Methods** & **METEOR** & **ROUGE-L** & **BLEU-4** \\ \hline \multirow{2}{*}{**CommitBERT**} & Baseline & 4.66 & 10.81 & 7.11 \\ & _Context-Encoded Change_ & **5.36** & **12.28** & **7.83** \\ \hline \multirow{2}{*}{**CommitBART**} & Baseline & 8.95 & 15.28 & 11.11 \\ & _Context-Encoded Change_ & **9.38** & **17.33** & **12.16** \\ \hline \multirow{2}{*}{**CodeT5**} & Baseline & 7.71 & 15.17 & 9.75 \\ & _Context-Encoded Change_ & **8.46** & **16.29** & **10.54** \\ \hline \multirow{2}{*}{**UniXcoder**} & Baseline & 5.45 & 11.45 & 7.64 \\ & _Context-Encoded Change_ & **6.01** & **12.24** & **8.44** \\ \hline \multirow{2}{*}{**NNGen**} & Baseline & 4.04 & 9.23 & 5.83 \\ & _Context-Encoded Change_ & **4.07** & **9.27** & **5.88** \\ \hline \multirow{2}{*}{**CC2Vec**} & Baseline & 4.39 & 10.07 & 6.21 \\ & _Context-Encoded Change_ & 4.31 & 9.96 & 6.22 \\ \hline \hline \end{tabular} \end{table} Table 2: The impact of _Context-Encoded Representation_ on automated commit message generation messages, the performance of these methods may reach their upper bounds. **Answer for RQ1**: _Context-Encoded Representation_ can improve 5/6 state-of-the-art methods for automated commit messages generation and improve performance by up to 15%. 
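The numbers behind these comparisons are BLEU-4, ROUGE-L, and METEOR scores for generated versus reference messages. A sketch of sentence-level versions using nltk and the rouge-score package follows; the exact smoothing, tokenisation, and corpus-level aggregation choices differ across papers, so this would not reproduce the table values exactly, and METEOR additionally requires the nltk WordNet data.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score
from rouge_score import rouge_scorer

reference = "fix null check ordering in text input spec".split()   # toy example
candidate = "fix ordering of null checks in text input".split()

# BLEU-4: uniform weights over 1-4 grams, smoothed for short sentences.
bleu4 = sentence_bleu([reference], candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)

# METEOR: harmonic mean of unigram precision/recall with stemming and synonymy.
meteor = meteor_score([reference], candidate)

# ROUGE-L: longest-common-subsequence F-measure.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = scorer.score(" ".join(reference), " ".join(candidate))["rougeL"].fmeasure

print(bleu4, meteor, rouge_l)
```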
### Context Analysis (RQ2) In this experiment, we analyse the contribution of different code components in _Context-Encoded Representation_Specifically, we generate commit messages with 04 different scenarios of _Context-Encoded Representation_corresponding to 04 different ways of extracting context: (i) only the changed code, (ii) the changed code combined with its control dependence statements (iii) the changed code combined with its data dependence statements, and (iv) the changed code combined with its program dependence (i.e. both control and data dependence) statements. Table 3 shows the contributions of different code components on _Context-Encoded Representation_. We can see that _Context-Encoded Representation_ is at its best when changed code and program dependences are included. While data dependences are more helpful than control dependences in generating commit messages, we should use both of them (i.e. program dependences) to maximize the benefit from _Context-Encoded Representation_. Specifically, compared with using only changed code, applying both changed code and program dependence helps enhance performance by 8%, 7%, and 10% in BLEU-4, ROUGE-L, and METEOR, respectively. **Answer for RQ2**: The dependences within the source code has varying impacts on _Context-Encoded Representation_. The method is at its best when both control dependences and data dependences are used. ### Dependence Depth Analysis (RQ3) To analyse the impact of the depth of dependences on _Context-Encoded Representation_ we equip it with different levels of dependence depth. For example, in Figure 4, when the depth is 1, the statements that have program dependences with statement 13 are 9, 10, and 14. Meanwhile, when the depth is 2, the statements dependent on the statement 13 are 0, 9, 10, and 14. In our experiment, the depth varies from one to five. \begin{table} \begin{tabular}{l c c c} \hline \hline & **METEOR** & **ROUGE-L** & **BLEU-4** \\ \hline **Changed code** & 7.71 & 15.17 & 9.75 \\ **Changed code + control dependence** & 7.77 & 15.16 & 9.80 \\ **Changed code + data dependence** & 8.25 & 15.97 & 10.41 \\ **Changed code + program dependence** & **8.46** & **16.29** & **10.54** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of _Context-Encoded Representation_ with different types of dependence Table 4 describes the impact of dependence depths on generating commit messages. The results show that by increasing the depth of the dependences _Context-Encoded Representation_ may provide better performance for generating commit messages. However, _Context-Encoded Representation_ is at its best when dependence depth is 3. Above this value, increasing the dependence depth will result negative impact on commit message generation. **Answer for RQ3**: The depth of program dependences slightly impacts the performance of _Context-Encoded Representation_. ### Changes Complexity Analysis (RQ4) We also evaluate the impact of complexity in each change on the quality of the corresponding generated commit message. Specifically, we evaluate the quality of generated commit messages for changes including 1 to 5 changed statements, 5 to 10 changed statements, 10 to 15 changed statements, and more than 15 changed statements. Table 5 shows the impact of different input sizes on _Context-Encoded Representation_. _Context-Encoded Representation_ reaches its best performance when the number of changed statements is between 1 and 5. 
Its performance then gradually decreases as the number of changed statements increases. The reason is that when the number of changes increases, it is more difficult to grasp the meaning of them fully. **Answer for RQ4**: The complexity of the change has a significant impact on the performance of _Context-Encoded Representation_. The more complex the change, the lower the performance of _Context-Encoded Representation_. \begin{table} \begin{tabular}{l c c c} \hline \hline **\#Deep level** & **METEOR** & **ROUGE-L** & **BLEU-4** \\ \hline **Deep 1** & 8.31 & 15.97 & 10.42 \\ **Deep 2** & 8.30 & 16.00 & 10.35 \\ **Deep 3** & 8.46 & 16.29 & 10.54 \\ **Deep 4** & 8.26 & 15.95 & 10.36 \\ **Deep 5** & 7.68 & 15.08 & 9.76 \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of _Context-Encoded Representation_ with different context extraction depths \begin{table} \begin{tabular}{l c c c} \hline \hline **\#Number of changed statements** & **METEOR** & **ROUGE-L** & **BLEU-4** \\ \hline **From 1 to 5** & 8.35 & 16.05 & 10.61 \\ **From 5 to 10** & 8.33 & 16,00 & 10.18 \\ **From 10 to 15** & 8.20 & 15.76 & 10.27 \\ **Over 15** & 8.03 & 15.14 & 9.89 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of _Context-Encoded Representation_ with different input complexity ### Time Complexity In this paper, we use Joern to analyse the program, it takes about 1 second to explore a commit and 0.1 seconds to build the _Context-Encoded Change_. Compared to other representation techniques, there is not much difference in the training and evaluation process when using _Context-Encoded Representation_. With the commit message generation methods using pre-trained models such as CodeT5 [27] or CommitBERT [15], the training time when using _Context-Encoded Representation_ and the default method both take around 500 minutes and 540 minutes, respectively. ### Threats to Validity The main threats to the validity of our work consist of internal, construct, and external threats. **Threats to internal validity** include the influence of the method used to extract program dependencies. To reduce this threat, we use Joern [28] code analyzer, which is a widely-used code analyzer. Another threat mainly lies in the correctness of the implementation of our approach. To reduce such threats, we carefully reviewed our code and made it public [29] so that other researchers can double-check and reproduce our experiments. **Threats to construct validity** relate to the suitability of our evaluation procedure. We used _BLEU-4_, _METEOR_, and _ROUGE-L_. They are the widely-used evaluation measures for evaluating commit messages generated by the existing techniques [14, 15, 16, 17, 20, 21]. Besides, a threat may come from the adaptation of the existing commit message generation approaches. To mitigate this threat, we directly obtain the original source code from the GitHub repositories of the studied techniques. Also, we use the same hyperparameters as in the original papers [11, 12, 15, 22, 26, 27]. **Threats to external validity** mainly lie in the construction of our dataset. To reduce the impact of this threat, we applied the same data collection procedure as in the existing commit message generation approaches [14, 15, 16, 17, 20, 21] and collect the commits in high-quality Java projects. Moreover, our experiments are conducted on only the commits in Java projects. Thus, the results could not be generalized for other programming languages. 
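For the data-collection procedure referenced above (Section 4.2), a minimal Pydriller sketch of mining one repository and applying the size and message-length filters. The repository URL is only an example, the grammar check and the learned quality classifier are omitted, the diff-line count is a rough proxy for "changed statements", and the code assumes Pydriller 2.x.

```python
from pydriller import Repository

kept = []
for commit in Repository("https://github.com/apache/tomcat").traverse_commits():
    first_line = commit.msg.strip().splitlines()[0] if commit.msg.strip() else ""
    n_changed = sum(len(m.diff_parsed["added"]) + len(m.diff_parsed["deleted"])
                    for m in commit.modified_files)
    n_words = len(first_line.split())
    # Simplified filters from Section 4.2: drop very large changes and
    # messages that are too short or too long.
    if n_changed == 0 or n_changed > 20:
        continue
    if n_words < 5 or n_words > 150:
        continue
    kept.append({"hash": commit.hash, "message": first_line, "n_changed": n_changed})
```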
In our future work, we plan to conduct more experiments to validate our results on other languages. ## 6 Related Work **Code Change Representation**. Our approach is related to code change representation approaches. Before being used in a specific task, code changes can be transformed into several forms, such as sequence-based [30], tree-based [31], and graph-based [24, 32] representations. Additionally, several techniques have been proposed to learn how to represent changes [12, 20, 33, 34]. These studies differ from our work as our approach considers the changed code sequences in relation to the related unchanged code and is designed to integrate directly into the existing commit message generation methods. **Commit message generation**. Our work is also related to learning-based commit message generation methods. Recently, a number of neural network-based approaches [14, 15, 16, 17, 20, 21] have been used to understand the semantics of code changes in the commit and translate them into commit messages. NMTGen [16] and CommitGen [14] treat code changes as pure text and use Seq2Seq neural networks with different attention mechanisms to translate them into commit messages. CoDiSum [17] extracts the structure and semantics of code and creates a multi-layer bidirectional GRU model to better understand the representation of code changes. CommitBERT [15] uses CodeBERT [35], a pre-trained language model for code, to understand the semantics of code changes and applies a transformer-based decoder [36] to generate commit messages. Our study differs from these studies as, instead of focusing on end-to-end commit message generation, our method is designed to integrate into these existing commit message generation techniques. Additionally, as shown in Section 5, our approach and those approaches can be applied together to generate better commit messages. **Learning-based SE approaches**. Several learning-based approaches have been proposed for specific SE tasks including code recommendation/suggestion [37, 38, 39], program synthesis [40, 41], static analysis warnings [42, 43], pull request description generation [44, 45], code summarization [46, 47, 48], code clones [49], fuzz testing [50], bug detection [51], and program repair [52, 53]. ## 7 Conclusion Commit messages are very important in the field of software development. Numerous methods for automated commit message generation have been proposed, yielding remarkable outcomes. Nonetheless, the representation of code changes in these methods does not fully cover information about the changes and is often accompanied by noise. This paper presents a new technique for representing code changes by exploiting unchanged code that has program dependences with the changed code. Experimental results on a dataset collected from 160 open-source projects on GitHub, including 31,517 commits, show that by using the proposed representation method, the performance of 5 out of 6 state-of-the-art automated commit message generation methods can be improved. In particular, the performance of these methods can be improved by up to 15% in METEOR compared to using current representations for code changes.
2305.15256
Discounting in Strategy Logic
Discounting is an important dimension in multi-agent systems as long as we want to reason about strategies and time. It is a key aspect in economics as it captures the intuition that the far-away future is not as important as the near future. Traditional verification techniques allow to check whether there is a winning strategy for a group of agents but they do not take into account the fact that satisfying a goal sooner is different from satisfying it after a long wait. In this paper, we augment Strategy Logic with future discounting over a set of discounted functions D, denoted SLdisc[D]. We consider "until" operators with discounting functions: the satisfaction value of a specification in SLdisc[D] is a value in [0, 1], where the longer it takes to fulfill requirements, the smaller the satisfaction value is. We motivate our approach with classical examples from Game Theory and study the complexity of model-checking SLdisc[D]-formulas.
Munyque Mittelmann, Aniello Murano, Laurent Perrussel
2023-05-24T15:40:53Z
http://arxiv.org/abs/2305.15256v1
# Discounting in Strategy Logic ###### Abstract Discounting is an important dimension in multi-agent systems as long as we want to reason about strategies and time. It is a key aspect in economics as it captures the intuition that the far-away future is not as important as the near future. Traditional verification techniques allow to check whether there is a winning strategy for a group of agents but they do not take into account the fact that satisfying a goal sooner is different from satisfying it after a long wait. In this paper, we augment Strategy Logic with _future discounting_ over a set of discounted functions \(\mathcal{D}\), denoted \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\). We consider "until" operators with discounting functions: the satisfaction value of a specification in \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) is a value in \([0,1]\), where the longer it takes to fulfill requirements, the smaller the satisfaction value is. We motivate our approach with classical examples from Game Theory and study the complexity of model-checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\)-formulas. ## 1 Introduction The goal of this paper is to advance the research on strategic reasoning and formal verification by considering a discounting effect: the utility of agents decreases over time. Boolean state-transition models have been widely used to define the semantics of temporal and strategic logics, including Linear Temporal Logic (LTL) [3], Alternating-time Temporal Logic (\(\mathsf{ATL}\)) [1], Strategy Logic (\(\mathsf{SL}\)) [14, 15]. In conjunction with model checking techniques [15], these formal frameworks are useful for the representation and verification of hardware and software systems. Given a strategic logic specification, the correctness of a system is a yes/no matter: either the system satisfies the specification or it does not. Complex systems that interact with a physical environment or that are composed of multiple autonomous agents may have quantitative aspects described by real numbers (e.g. utilities, time and costs). Evaluating the _quality_ of such systems through the Boolean satisfaction of the specifications is often inadequate. Different levels of quality may exist, and this should be reflected in the output of the verification procedure [1]. In this work, we are interested in verifying Multi-Agent Systems (MAS) whose quality assessment needs to take into account that satisfying the goal sooner is different from satisfying it after a long wait. To illustrate this setting, consider an agent whose task is to organize a trip and who is facing the problem of booking a flight. An early booking is more susceptible to becoming unfeasible in the case of unforeseen changes in the travel plans. On the other hand, waiting to book may result in more important costs for the agent. Moreover, the trip-organizing agent may be a part of a system composed of other, self-interested, agents. In this case, the agents' interactions can also influence their ability to find reasonable flight options and price tags. On one side, there is a competitive aspect when agents dispute the last available tickets. Cooperation could also take place as some companies offer discounts for group booking. To address this problem for (single-agent) systems, researchers have suggested to augment Linear Temporal Logic with future discounting [1, 1]. 
In the discounted setting, the satisfaction value of specifications is a numerical value, and it depends, according to some discounting function, on the time waited for eventualities to get satisfied. Discounting is a key dimension in Economics and has been studied in Markov decision processes [13] as well as game theory [12] and system theory [11] to capture the intuition that the far-away future is not as important as the near future. The multi-agent setting has also been widely investigated, including repeated games [1, 10, 12], the prisoner's dilemma game [13, 14], and negotiation protocols [15, 16], to name a few. Previous work [1, 12] has initiated the study of logics inspired by \(\mathsf{ATL}\) and Markov chains for reasoning about discounting in stochastic MAS. Like \(\mathsf{ATL}\), these logics are unable to capture complex solution concepts in MAS (such as Nash equilibria), which are important when evaluating the possible outcomes of such systems. **Contribution.** In this work, we augment Strategy Logic with future discounting, denoted \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\), and study the complexity of its model-checking problem. The main advantage of this logic is that it allows us to express and verify (i) the strategic abilities of agents to achieve certain goals while considering temporal discounts, and (ii) complex strategy concepts such as Nash equilibria of discounted games. Different from previous work, we focus on deterministic games and consider temporal discounting alongside a logic that quantifies over strategies. This enables an unbounded number of alternations of strategic operators, which is necessary to capture complex solution concepts. In relation to technical results, we also study the complexity of the model-checking problem under memoryless and perfect recall strategies, which was not established in [1]. \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) represents a family of logics, each one parameterized by a set of discounting functions. Considering a set of functions allows us to model games in which each agent, or a coalition of them, is affected differently by how long in the future events occur (e.g., patient vs hurried agents). We also provide complexity results for model-checking and motivate the approach with classical examples from Game Theory. This is the first work to consider a Strategy Logic with discounting for strategic reasoning in MAS. We aim at paving the way for a new line of research that applies the formal techniques developed for verification and reasoning in MAS to game-theoretic problems involving future discounts. **Outline.** The paper1 is organized as follows: we start by discussing related work in Section 2. Then, we define Strategy Logic with future discounts, denoted \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) (Section 3). We proceed by introducing problems and concepts related to discounting in multi-agent games and illustrate the use of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) (Section 4). Next, we study the complexity results for model checking (Section 5). Finally, we conclude the paper and point out directions for future work (Section 6). Footnote 1: This paper is an extended version of a paper accepted to the 32nd International Joint Conference in Artificial Intelligence (IJCAI 2023). ## 2 Related Work Weighted games have been studied in the literature in relation to various kinds of objectives, including parity [1], mean-payoff [1, 2], energy [16, 1], and combining qualitative and quantitative objectives in equilibrium [17, 18].
\(\mathsf{SL}[\mathcal{F}]\)[1, 19] was recently introduced as a quantitative extension of \(\mathsf{SL}\) defined over weighted concurrent game structures. It extends \(\mathsf{LTL}[\mathcal{F}]\)[1], a multi-valued logic that augments \(\mathsf{LTL}\) with quality operators. \(\mathsf{SL}[\mathcal{F}]\) subsumes both \(\mathsf{SL}\) and \(\mathsf{LTL}[\mathcal{F}]\) and is expressive enough to express complex solution concepts such as Nash equilibrium and properties about quantities. An extension of \(\mathsf{SL}[\mathcal{F}]\) with imperfect information and epistemic operators was recently proposed [10]. Other quantitative extensions of \(\mathsf{LTL}\) have been explored in the context of averaging [1, 16, 17] and mean-payoff objectives [1, 18]. Extensions of \(\mathsf{ATL}\) have also been investigated, such as timed \(\mathsf{ATL}\) [1, 19], multi-valued \(\mathsf{ATL}\)[1], \(\mathsf{ATL}\) with resource bounds [1, 16], and weighted versions of \(\mathsf{ATL}\)[1, 16, 17]. Another related problem is prompt requirements (see, for instance, [1, 16]), which consider a bound on the number of steps to satisfy the specification. To encode the notion that the importance of events should be discounted according to how late they occur, De Alfaro _et al._ (2005) proposed an extension of Computation Tree Logic with quantitative semantics. In this logic, path operators are discounted by a parameter that can be chosen to give more weight to states that are closer to the beginning of the path. Later, Almagor _et al._ (2014) proposed \(\mathsf{LTL}\) augmented with an arbitrary set of discounting functions, denoted \(\mathsf{LTL}^{\text{disc}}[\mathcal{D}]\), which was further extended with unary propositional quality operators and an average operator. In the context of stochastic systems, Jamroga (2008a) proposed the Markov Temporal Logic, which extends branching-time temporal logic and captures discounted goals. Later, this approach was extended to the multi-agent setting [16]. Finally, Chen _et al._ (2013) considered a probabilistic extension of \(\mathsf{ATL}\), alongside discounted rewards. Temporal and strategic logics have been successfully applied alongside model-checking techniques to the certification of several types of MAS, such as voting protocols [1, 17], autonomous robotic systems [15], smart contracts [14], avionic systems [13], and task coordination robots [1]. ## 3 Strategy Logic With Discounting Strategy Logic with Discounting (\(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\)) generalizes \(\mathsf{SL}\) by adding discounting temporal operators. The logic is actually a family of logics, each parameterized by a set \(\mathcal{D}\) of discounting functions. A function \(d:\mathbb{N}\to[0,1]\) is a _discounting function_ if \(\lim_{i\to\infty}d(i)=0\), and \(d\) is non-increasing. Examples of discounting functions include \(d(i)=\lambda^{i}\), for some \(\lambda\in(0,1)\), and \(d(i)=\frac{1}{i+1}\). For the remainder of the paper, we fix a set of discounting functions \(\mathcal{D}\), a set of atomic propositions \(\mathsf{AP}\), a set of agents Ag, and a set of strategy variables \(\mathsf{Var}\), except when stated otherwise. We let \(\mathsf{n}\) be the number of agents in Ag. The syntax of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) adds to \(\mathsf{SL}\) the operator \(\varphi\mathbf{U}_{d}\psi\) (discounting-Until), for every function \(d\in\mathcal{D}\).
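As a quick illustration (not part of the formal development), the two example discounting functions can be written down directly and their defining properties checked numerically on a finite horizon; the helper names, the horizon, and the tolerance below are our own choices.

```python
def exponential(lam):
    """d(i) = lam**i for some lam in (0, 1)."""
    return lambda i: lam ** i

def hyperbolic():
    """d(i) = 1 / (i + 1)."""
    return lambda i: 1.0 / (i + 1)

def looks_like_discounting(d, horizon=1000, tol=1e-6):
    """Finite-horizon check that d is non-increasing and tends to 0."""
    values = [d(i) for i in range(horizon)]
    non_increasing = all(a >= b for a, b in zip(values, values[1:]))
    vanishes = values[-1] < tol
    return non_increasing and vanishes

assert looks_like_discounting(exponential(0.5))
assert looks_like_discounting(hyperbolic(), tol=1e-2)
```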
The logic \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) is defined as follows: **Definition 1**.: The syntax of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) is defined by the grammar \[\varphi::=p\mid\neg\varphi\mid\varphi\vee\varphi\mid\exists s.\varphi\mid(a, s)\varphi\mid\mathbf{X}\varphi\mid\varphi\mathbf{U}\varphi\mid\varphi\mathbf{U}_{d}\varphi\] where \(p\in\text{AP}\), \(s\in\text{Var}\), \(a\in\text{Ag}\), and \(d\in\mathcal{D}\). The intuitive reading of the operators is as follows: \(\exists s\,.\,\varphi\) means that there exists a strategy such that \(\varphi\) holds; \((a,s)\varphi\) means that when strategy \(s\) is assigned (or "bound") to agent \(a\), \(\varphi\) holds; \(\mathbf{X}\) and \(\mathbf{U}\) are the usual temporal operators "next" and "until". The intuition of the operator \(\mathbf{U}_{d}\) is that events that happen in the future have a lower influence, and the rate by which this influence decreases depends on the function \(d\). A variable is _free_ in a formula \(\varphi\) if it is bound to an agent without being quantified upon, and an agent \(a\) is free in \(\varphi\) if \(\varphi\) contains a temporal operator (\(\mathbf{X}\), \(\mathbf{U}\), \(\mathbf{U}_{d}\)) not in the scope of any binding for \(a\). The set of free variables and agents in \(\varphi\) is written \(\text{free}(\varphi)\), and a formula \(\varphi\) is a _sentence_ if \(\text{free}(\varphi)=\emptyset\). A state-transition model is a labeled directed graph, in which the vertices represent the system states, the edges represent the state changes (e.g., according to the environment or the agents' actions), and the labels represent the Boolean characteristics of the states (i.e., the truth values of the atomic propositions). In this paper, we consider state-transition models in which multiple agents act simultaneously and independently. These models are called concurrent game structures (CGS). **Definition 2**.: A _concurrent game structure_ (CGS) is a tuple \(\mathcal{G}=(\operatorname{Ac},V,v_{\iota},\delta,\ell)\) where (i) Ac is a finite set of _actions_; (ii) \(V\) is a finite set of _positions_; (iii) \(v_{\iota}\in V\) is an _initial position_; (iv) \(\delta:V\times\operatorname{Ac}^{\text{Ag}}\to V\) is a _transition function_; (v) \(\ell:V\to 2^{\text{AP}}\) is a _labeling function_. In a position \(v\in V\), each player \(a\) chooses an action \(c_{a}\in\operatorname{Ac}\), and the game proceeds to position \(\delta(v,\boldsymbol{c})\), where \(\boldsymbol{c}\in\operatorname{Ac}^{\text{Ag}}\) is the _action profile_ \((c_{a})_{a\in\text{Ag}}\). We write \(\boldsymbol{o}\) for a tuple of objects \((o_{a})_{a\in\text{Ag}}\), one for each agent, and such tuples are called _profiles_. Given a profile \(\boldsymbol{o}\) and \(a\in\text{Ag}\), we let \(o_{a}\) be agent \(a\)'s component, and \(\boldsymbol{o}_{-a}\) is \((o_{b})_{b\neq a}\). Similarly, we let \(\text{Ag}_{-a}=\text{Ag}\setminus\{a\}\). For a group of \(n\) agents \(A=\{a_{1},...,a_{n}\}\) and a strategy profile \(\sigma=\sigma_{1},...,\sigma_{n}\), we write \((A,\sigma)\) as a shortcut for \((a_{1},\sigma_{1})...(a_{n},\sigma_{n})\). A _play_ \(\pi=v_{0}v_{1}...\) in \(\mathcal{G}\) is an infinite sequence of positions such that \(v_{0}=v_{\iota}\) and for every \(i\geq 0\) there exists an action profile \(\boldsymbol{c}\) such that \(\delta(v_{i},\boldsymbol{c})=v_{i+1}\). We write \(\pi_{i}=v_{i}\) for the position at index \(i\) in play \(\pi\).
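To make Definition 2 concrete, the following minimal sketch renders a CGS as a plain data structure and unrolls a finite play prefix from a sequence of action profiles; the field names and the toy one-agent example are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

Action = str
Position = str
Profile = Tuple[Action, ...]          # one action per agent, in a fixed order

@dataclass(frozen=True)
class CGS:
    actions: FrozenSet[Action]
    positions: FrozenSet[Position]
    initial: Position
    delta: Callable[[Position, Profile], Position]   # transition function
    labels: Dict[Position, FrozenSet[str]]           # atomic propositions per position

def play_prefix(game: CGS, profiles, start=None):
    """Unroll the finite prefix of a play induced by a list of action profiles."""
    v = game.initial if start is None else start
    prefix = [v]
    for c in profiles:
        v = game.delta(v, c)
        prefix.append(v)
    return prefix

# Toy two-position game with a single agent choosing "a" or "b".
toy = CGS(
    actions=frozenset({"a", "b"}),
    positions=frozenset({"v0", "v1"}),
    initial="v0",
    delta=lambda v, c: "v1" if c == ("a",) else "v0",
    labels={"v0": frozenset(), "v1": frozenset({"goal"})},
)
assert play_prefix(toy, [("a",), ("b",)]) == ["v0", "v1", "v0"]
```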
A _history_ \(h\) is a finite prefix of a play, \(\text{last}(h)\) is the last position of history \(h\), \(|h|\) is the length of \(h\), and Hist is the set of histories. A (perfect recall) _strategy_ is a function \(\sigma:\text{Hist}\to\operatorname{Ac}\) that maps each history to an action. A (memoryless) _strategy_ is a function \(\sigma:V\to\operatorname{Ac}\) that maps each position to an action. We let \(\text{{Str}}^{\text{R}}\) (similarly \(\text{{Str}}^{\text{r}}\)) be the set of perfect recall strategies (resp. memoryless strategies). For the remainder of the paper, we use r and R to denote memoryless and perfect recall, respectively, and we let \(\rho\) range over \(\{\text{r},\text{R}\}\). An _assignment_ \(\chi:\text{Ag}\cup\text{Var}\to\text{{Str}}\) is a function from players and variables to strategies. For an assignment \(\chi\), an agent \(a\) and a strategy \(\sigma\) for \(a\), \(\chi[a\mapsto\sigma]\) is the assignment that maps \(a\) to \(\sigma\) and is otherwise equal to \(\chi\), and \(\chi[s\mapsto\sigma]\) is defined similarly, where \(s\) is a variable. For an assignment \(\chi\) and a state \(v\), we let \(\text{Out}(\chi,v)\) be the unique play that extends \(v\) following the strategies assigned by \(\chi\). Formally, \(\text{Out}(\chi,v)\) is the play \(vv_{0}v_{1}...\) such that for all \(i\geq 0\), \(v_{i}=\delta(v_{i-1},\boldsymbol{c})\) where for all \(a\in\text{Ag}\), \(\boldsymbol{c}_{a}=\chi(a)(vv_{0}...v_{i-1})\). **Definition 3**.: Let \(\mathcal{G}=(\operatorname{Ac},V,v_{\iota},\delta,\ell)\) be a CGS, \(\chi\) be an assignment, and \(\rho\in\{\text{R},\text{r}\}\). The satisfaction value \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)\in[0,1]\) of an \(\text{SL}^{\text{disc}}[\mathcal{D}]\) formula \(\varphi\) in a state \(v\) is defined as follows, where \(\pi\) denotes \(\text{Out}(\chi,v)\): \[\llbracket p\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\begin{cases}1&\text{if }p\in\ell(v)\\ 0&\text{otherwise}\end{cases}\] \[\llbracket\exists s\,.\,\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\max_{\sigma\in\text{{Str}}^{\rho}}\llbracket\varphi\rrbracket_{\chi[s\mapsto\sigma]}^{\mathcal{G},\,\rho}(v)\] \[\llbracket(a,s)\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\llbracket\varphi\rrbracket_{\chi[a\mapsto\chi(s)]}^{\mathcal{G},\,\rho}(v)\] \[\llbracket\varphi_{1}\vee\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\max\bigl{(}\llbracket\varphi_{1}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v),\llbracket\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)\bigr{)}\] \[\llbracket\neg\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=1-\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)\] \[\llbracket\mathbf{X}\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(\pi_{1})\] \[\llbracket\varphi_{1}\mathbf{U}\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\sup_{i\geq 0}\min\bigl{(}\llbracket\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(\pi_{i}),\min_{0\leq j<i}\llbracket\varphi_{1}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(\pi_{j})\bigr{)}\] \[\llbracket\varphi_{1}\mathbf{U}_{d}\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v)=\sup_{i\geq 0}\min\bigl{(}d(i)\,\llbracket\varphi_{2}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(\pi_{i}),\min_{0\leq j<i}\llbracket\varphi_{1}\rrbracket_{\chi}^{\mathcal{G},\,\rho}(\pi_{j})\bigr{)}\] For a sentence \(\varphi\), we write \(\llbracket\varphi\rrbracket^{\mathcal{G},\,\rho}\) for \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,\rho}(v_{\iota})\), which does not depend on the assignment \(\chi\). Intuitively, the discounted until behaves like the Boolean until, except that the contribution of \(\varphi_{2}\) is weighted by \(d(i)\); the discount is applied relative to the position at which the formula is evaluated, independently of _how far_ in the play this position lies w.r.t. the initial state.
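To illustrate Definition 3, the sketch below evaluates the discounted until along a finite prefix of a play, assuming the satisfaction values of the two subformulas at each position are already available; truncation is sound because \(d(i)\to 0\), so positions where \(d(i)\) falls below the running maximum cannot improve the supremum. This is only an illustration of the semantics, not the model-checking procedure of Section 5.

```python
def discounted_until(phi1_vals, phi2_vals, d):
    """sup_i min( d(i)*phi2_vals[i], min_{j<i} phi1_vals[j] ) over the given prefix.

    phi1_vals, phi2_vals: satisfaction values (in [0, 1]) of the two
    subformulas at positions pi_0, pi_1, ... of a play prefix.
    d: a discounting function.
    """
    best = 0.0
    running_min_phi1 = 1.0          # min over the empty set is 1
    for i, v2 in enumerate(phi2_vals):
        best = max(best, min(d(i) * v2, running_min_phi1))
        running_min_phi1 = min(running_min_phi1, phi1_vals[i])
        if d(i) <= best:            # later positions cannot improve the value
            break
    return best

# An eventuality that first holds at step 2 under d(i) = (1/2)**i is worth 0.25.
d = lambda i: 0.5 ** i
assert discounted_until([1, 1, 1, 1], [0, 0, 1, 1], d) == 0.25
```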
## 4 Discounting in Multi-Agent Games We now introduce problems and concepts from Game Theory that motivate reasoning about discounting in MAS. ### Nash Equilibrium for \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) Goals _Nash equilibrium_ (NE) is a central solution concept in game theory that captures the notion of a stable solution, that is, a solution from which no single player can individually improve his or her welfare by deviating [20]. Deterministic concurrent multi-player Nash equilibrium can be expressed using \(\mathsf{SL}\) (or its extensions) for Boolean valued goals [15] and quantitative goals [16]. With \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\), we can express that agents' goals are affected by how long in the future they are achieved. Let the \(\mathsf{LTL}^{\text{disc}}[\mathcal{D}]\)-formula \(\psi_{a}\) (i.e., an \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula without bindings and strategy quantification) denote the goal of agent \(a\). We can express whether a strategy profile \(\boldsymbol{\sigma}=(\sigma_{a})_{a\in\mathsf{Ag}}\) is a _Nash equilibrium_ through the \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula \[\varphi_{\text{NE}}(\boldsymbol{\sigma}):=(\text{Ag},\boldsymbol{\sigma}) \bigwedge_{a\in\text{Ag}}\big{(}\big{(}\forall t.\,(a,t)\psi_{a}\big{)}\to\psi_{a}\big{)}\] The existence of a Nash equilibrium is captured by the formula \(\hat{\varphi}_{\text{NE}}:=\exists\boldsymbol{\sigma}(\varphi_{\text{NE}}(\boldsymbol{\sigma}))\). This is a classical problem in game theory, in particular when studying games with future discounting [13]. As we shall see in the next sections, the goal \(\psi_{a}\) of an agent \(a\) may involve temporal discounts. In the booking agent example, for instance, the discounted goal \[\psi_{a}:=\text{priceunder}_{\vartheta}\,\mathbf{U}_{d}\,\text{booked}_{a}\] specifies that the flight ticket is affordable (that is, below a threshold \(\vartheta\)) until agent \(a\) has booked her ticket. The value obtained from achieving the goal later is reduced according to the discounting function \(d\). ### Secretary Problem The classical secretary problem studies the problem of an agent selecting online an element (called a "secretary") of _maximum value_ from a known number of candidates that are presented one by one in random order. As each item is presented, the agent must either accept it, in which case the game ends, or reject it. In the second case, the next item in the sequence is presented and the agent faces the same choice as before [14]. Applications of this problem include agents facing the decision of buying a house or hiring employees. Several variants and extensions of the secretary problem are considered in the literature, including using time-dependent discount factors to reduce the benefit derived from selecting a secretary at a later time [1]. The discounted setting captures the cost of rejecting elements. For instance, when seeking to purchase a house, an agent may prefer to choose a suboptimal house at the beginning of the game rather than wait longer to pick her most desirable house. Recently, Do _et al._ (2022) investigated the selection of \(k\) secretaries by a multi-agent selection committee. The hiring decision is made by a group of voting agents that specify whether they consider it acceptable to hire the current candidate or not. With CGS, we can represent deterministic perfect information instances of the secretary problem. Let us consider the selection of \(k\) secretaries by multiple voting agents.
For each candidate \(j\) from a finite set of candidates \(C\), we let the atomic propositions \(\text{present}_{j}\) denote whether she was presented and \(\text{hired}_{j}\) denote whether she was hired. Proposition \(k\)-hired specifies whether \(k\) secretaries were already selected2. Footnote 2: The formalization of the game as a CGS is left to the reader. Examples of how to model similar problems using CGS can be found in [12, 13]. The \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula \(\mathbf{F}_{d}\,k\)-hired represents the goal of having enough candidates hired in the future. The satisfaction value of this goal decreases according to \(d\), reflecting that it is preferable to hire \(k\) candidates as soon as possible. The discounted goal \[\exists s\,\forall\boldsymbol{t}\,(a,s)(\text{Ag}_{-a},\boldsymbol{t})(\bigvee_{j\in C}\neg\text{present}_{j})\,\mathbf{U}_{d}\,k\text{-hired}\] represents that the voter \(a\) has a strategy to ensure that, no matter the strategies of the other agents, there are candidates still not presented until enough secretaries are hired. In Figure 1, we exemplify the CGS \(\mathcal{G}_{sec}\) representing an instance of the secretary problem with two voting agents, Ann and Bob, and three candidates, \(a\), \(b\), and \(c\). In the initial state (\(q_{0}\)), the agents vote on whether they want to hire candidate \(a\) by performing the action \(y\) or \(n\). Candidate \(a\) is hired only if both agents play \(y\), in which case the game moves to state \(q_{2}\). Otherwise, the game proceeds to state \(q_{1}\), in which they can vote for candidate \(b\) (and similarly, for candidate \(c\) in state \(q_{3}\)). The game ends when one secretary is hired (states \(q_{2}\), \(q_{4}\), and \(q_{6}\)) or all candidates have been presented (state \(q_{5}\)). We let the following \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formulas denote Ann's and Bob's goals, resp.: \[\psi_{\text{Ann}}:=\mathbf{F}\,\text{hired}_{b}\vee\mathbf{F}_{d_{\text{Ann}}}\, 1\text{-hired}\] \[\psi_{\text{Bob}}:=\mathbf{F}_{d_{\text{Bob}}}\,1\text{-hired}\] Figure 1: \(\mathcal{G}_{sec}\) representing the secretary problem with three candidates (\(a\), \(b\) and \(c\)) and two voters (Ann and Bob). In state \(q_{0}\) (similarly, \(q_{1}\) and \(q_{3}\)), Ann and Bob vote on whether to hire candidate \(a\) (resp. \(b\) and \(c\)). States \(q_{2}\), \(q_{4}\), and \(q_{6}\) represent the situation in which candidates \(a\), \(b\) and \(c\) were hired, respectively. and we assume the discount functions \(d_{\text{Ann}}(i)=\frac{1}{i+1}\) and \(d_{\text{Bob}}(i)=(\frac{1}{2})^{i}\). In other words, Ann's goal is to hire candidate \(b\) in the future or to hire any candidate (with a discount according to \(d_{\text{Ann}}\)), while Bob's goal is to hire a candidate in the future (with a discount given by \(d_{\text{Bob}}\)). Notice that without the discount functions, hiring a secretary earlier would be equivalent to hiring one later. The two discount functions stress that Bob is more eager to hire a secretary than Ann. Table 1 shows the value of the functions at each time \(i\). The satisfaction value of the agents' goals is only different from 0 in the states in which a candidate was hired.
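As a sanity check on this example (our own illustration, not part of the formalization), the values of \(\psi_{\text{Ann}}\) and \(\psi_{\text{Bob}}\) can be recomputed from the step at which the first candidate is hired along a play of \(\mathcal{G}_{sec}\): Bob obtains \(d_{\text{Bob}}(i)\), and Ann obtains \(1\) if the hired candidate is \(b\) and \(d_{\text{Ann}}(i)\) otherwise.

```python
d_ann = lambda i: 1.0 / (i + 1)
d_bob = lambda i: 0.5 ** i

def goal_values(hired_candidate, step):
    """Values of (psi_Ann, psi_Bob) when `hired_candidate` is first hired at
    position `step` of the play (and nobody is hired earlier); our reading of G_sec."""
    ann = 1.0 if hired_candidate == "b" else d_ann(step)
    bob = d_bob(step)
    return ann, bob

# Both vote yes for a: a is hired at step 1 of the play.
assert goal_values("a", 1) == (0.5, 0.5)
# a is rejected, both vote yes for b: b is hired at step 2.
assert goal_values("b", 2) == (1.0, 0.25)
# Only c is acceptable to both: c is hired at step 3.
assert goal_values("c", 3) == (0.25, 0.125)
```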
Let \(\sigma_{abc}\) denote the strategy of playing \(y\) for each candidate (that is, \(\sigma_{abc}(q_{0})=\sigma_{abc}(q_{1})=\sigma_{abc}(q_{3})=y\)), \(\sigma_{bc}\) denote the strategy of playing \(y\) only for candidates \(b\) and \(c\), and \(\sigma_{c}\) denote the strategy of playing \(y\) only for \(c\). Table 2 shows the satisfaction value of the agents' goals from the initial state \(q_{0}\) for different assignments of strategies. As illustrated in Table 2, the strategy profile \((\sigma_{bc},\sigma_{abc})\) is a Nash equilibrium and thus \(\llbracket\hat{\varphi}_{\text{NE}}\rrbracket^{\mathcal{G}_{sec},\,\text{r}}\neq 0\) (note memoryless strategies are enough for this problem). ### Negotiation With Time Constraints Let us consider a second context where discounting is a key issue. Negotiation is a type of interaction in MAS in which disputing agents decide how to divide a resource. Time constraints, which may be in the form of both deadlines and discount factors, are an essential element of negotiation because the interaction cannot go on indefinitely and must end within a reasonable time limit [10]. Here we consider the problem of negotiation with time constraints studied in [13, 14], and generalize it to the multi-agent case. In this problem, agents want to determine how to divide (single or multiple) issues, called "pies", of size 1 among themselves. The negotiation must end in at most \(n\in\mathbb{N}^{+}\) rounds. This deadline can be represented with an arbitrary discounting function \(d_{n}\) such that \(d_{n}(n)=0\). In this case, a goal of the form \(\mathbf{F}_{d_{n}}\psi\) motivates agents to achieve \(\psi\) before the \(n\)-th stage of the negotiation. The negotiation process is made by alternating offers from the agents. Initially, an agent \(a\) starts by making an offer to the other agents \(\text{Ag}_{-a}\) on how to divide a pie. Agents in \(\text{Ag}_{-a}\) can either accept or reject this offer. If agents in \(\text{Ag}_{-a}\) accept, the negotiation ends in an agreement with the proposed share. Otherwise, an agent \(b\neq a\) makes a counteroffer in the next round. The negotiation proceeds until there is an agreement on accepting an offer. The key feature of this problem is that the pie is assumed to shrink (i.e., to lose value) with time [13]. This represents the situation in which the pie perishes with time or is affected by inflation. The pie shrinkage is represented by a discounting function \(d_{pie}\). At time \(i=1\), the size of the pie is \(1\), but in all subsequent time periods \(i>1\), the pie shrinks to \(d_{pie}(i)\). Figure 2 shows the CGS \(\mathcal{G}_{ngt}\), which illustrates an instance of the negotiation problem with a single issue and two agents, Alice and Beth. The game starts in state \(q_{0}\), where Alice can make an offer to split the pie so as to take either half or two thirds of it for herself (while the remainder of the pie is left for Beth). In the next state (either \(q_{1}\) or \(q_{2}\), according to Alice's action), Beth can perform the action \(acc\) to accept the offer or she can make a counteroffer and pass the turn to Alice. As soon as an agent accepts an offer, the negotiation ends and the pie is divided (e.g., states \(q_{3}\), \(q_{6}\), \(q_{9}\), and \(q_{12}\)). Let us use the atomic propositions \(\text{twothirds}_{a}\), \(\text{half}_{a}\), and \(\text{onethird}_{a}\) to denote whether agent \(a\in\{\text{Alice},\,\text{Beth}\}\) has received two thirds, half, or one-third of the pie.
Agents may have different preferences for how much of the pie they receive. Discounting functions can be used to capture the share they are more eager to receive. For instance, let \[\psi_{a}:=\mathbf{F}_{d_{2/3}}\,\text{twothirds}_{a}\vee\mathbf{F}_{d_{1/2}}\, \text{half}_{a}\vee\mathbf{F}_{d_{1/3}}\,\text{onethird}_{a}\] be the goal of agents \(a\in\{Alice,Beth\}\), with the discounting functions defined as \(d_{n/m}(i):=\frac{n}{m}\,d_{pie}(i)\) for \(n,m\in\{1,2,3\}\). This goal stresses that agent \(a\) prefers to get two-thirds of the pie over half or one-third, and half of the pie over one-third. Note that for the sake of simplicity of this example, deadlines are not considered in \(\psi_{a}\). \begin{table} \begin{tabular}{c c c c c} \hline \hline & & & \multicolumn{3}{c}{\(\chi(\text{Bob})\)} \\ & & \(\sigma_{abc}\) & \(\sigma_{bc}\) & \(\sigma_{c}\) \\ \cline{3-5} & \(\sigma_{abc}\) & (0.5, 0.5) & (1, 0.25) & (0.25, 0.125) \\ \(\chi(\text{Ann})\) & \(\sigma_{bc}\) & (1, 0.25) & (1, 0.25) & (0.25, 0.125) \\ & \(\sigma_{c}\) & (0.25, 0.125) & (0.25, 0.125) & (0.25, 0.125) \\ \hline \hline \end{tabular} \end{table} Table 2: Value of \((\llbracket\psi_{\text{Ann}}\rrbracket_{\chi}^{\mathcal{G}_{sec},\,\text{r}}(q_{0}),\llbracket\psi_{\text{Bob}}\rrbracket_{\chi}^{\mathcal{G}_{sec},\,\text{r}}(q_{0}))\) for different strategy assignments \(\chi\). Figure 2: \(\mathcal{G}_{ngt}\) representing the single-issue negotiation problem with two agents, who alternate in proposing a division of the resource. The negotiation ends when one of the agents agrees with the proposed division (e.g., at the colored states \(q_{3}\), \(q_{6}\), \(q_{9}\), \(q_{12}\)). To continue the example, consider that the discounting function \(d_{pie}\) is defined as follows \[d_{pie}(i)=\begin{cases}1&\text{if }i\leq 2\\ \Big{(}\frac{1}{2}\Big{)}^{i}&\text{otherwise}\end{cases}\] This represents that the pie starts shrinking only after the 2nd game stage (states \(q_{9}\), \(q_{10}\), \(q_{11}\) and so on). After that, the pie shrinks by half in each successive state. In this case, the rate at which the pie shrinks motivates agents to accept the first proposed division. Given the discount function \(d_{pie}(i)\) and the goals \(\psi_{\text{Alice}}\) and \(\psi_{\text{Beth}}\), a Nash equilibrium of the game is the strategy profile \((\sigma_{\text{Alice}},\sigma_{\text{Beth}})\), where \(\sigma_{\text{Alice}}\) and \(\sigma_{\text{Beth}}\) are strategies such that \(\sigma_{\text{Alice}}(q_{0})=[\frac{2}{3},\frac{1}{3}]\) and \(\sigma_{\text{Beth}}(q)=acc\) for any state \(q\). Thus, we have that \([\![\hat{\varphi}_{\text{NE}}]\!]^{\mathcal{G}_{ngt},\,r}\neq 0\). ## 5 Model Checking \(\mathsf{SL}\) With Discounting In this section, we study the quantitative model checking problem for \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\). Let us first define it formally. **Definition 4**.: The threshold _model-checking problem_ for \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) consists in deciding, given a formula \(\varphi\), a CGS \(\mathcal{G}\), \(\rho\in\{\mathsf{R},\mathsf{r}\}\), and a threshold \(\vartheta\in[0,1]\), whether \([\![\varphi]\!]^{\mathcal{G},\,\rho}\geq\vartheta\). ### Memoryless Strategies Model-checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) with memoryless agents is no harder than model-checking \(\mathsf{LTL}\) or classical \(\mathsf{SL}\) with memoryless agents.
**Theorem 1**.: _Assuming that functions in \(\mathcal{D}\) can be computed in polynomial space, model checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) with memoryless agents is Pspace-complete._ Proof.: The lower bound is inherited from \(\mathsf{SL}\)[Cermak _et al._2018]3, which is captured by \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\). For the upper bound, we first show that each recursive call only needs at most polynomial space. Most cases are treated analogously to the proof of Theorem 2 in [Maubert _et al._2021]. We focus on the case \([\![\varphi_{1}\mathbf{U}_{d}\varphi_{2}]\!]^{\mathcal{G},\,r}_{\chi}(v)\). Let \(\pi=\text{Out}(v,\chi)\). When evaluating a discounted operator on \(\pi\), one can restrict attention to two cases: either the satisfaction value of the formula goes below \(\vartheta\), in which case this happens after a bounded prefix (with index \(m\geq 0\)), or the satisfaction value always remains above \(\vartheta\), in which case we can replace the discounted operator with a Boolean one. This allows us to look only at a finite number of stages. In the first case, let \(m\geq 0\) denote the first index in which the satisfaction value of the formula goes below \(\vartheta\). Let \(\varphi=\varphi_{1}\mathbf{U}_{d}\varphi_{2}\); it follows that Footnote 3: The proof provided in [Cermák _et al._2018] (Thm. 1) considers \(\mathsf{SL}\) with epistemic operators, but by carefully reading the proof one can notice that removing the operators and restricting to perfect information does not affect the complexity results. \[[\![\varphi]\!]^{\mathcal{G},\,r}_{\chi}(v)=\sup_{i\geq 0}\min\!\left(d(i)[\![\varphi_{2}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{i}),\min_{0\leq j<i}[\![\varphi_{1}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{j})\right)\] \[=\max_{0\leq i\leq m}\min\!\left(d(i)[\![\varphi_{2}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{i}),\min_{0\leq j<i}[\![\varphi_{1}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{j})\right)\] This can be computed by a while loop that increments \(i\), computes \([\![\varphi_{2}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{i})\), \(\min_{0\leq j<i}[\![\varphi_{1}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{j})\) and their minimum, records the result if it is bigger than the previous maximum, and stops upon reaching a position that has already been visited. This requires storing the current value of \(\min_{0\leq j<i}[\![\varphi_{1}]\!]^{\mathcal{G},\,r}_{\chi}(\pi_{j})\), the current maximum, and the list of positions already visited, which number at most \(|V|\). The second case is treated as for the Boolean until (see Appendix A for more details). Next, the number of nested recursive calls is at most \(|\varphi|\), so the total space needed is bounded by \(|\varphi|\) times a polynomial in the size of the input, and is thus polynomial. ### Perfect Recall Our solution to the problem of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) model checking for perfect recall applies the automata-theoretic approach [Thomas, 1990; Vardi and Wolper, 1986]. The solution opportunely combines the techniques used for model-checking in [Almagor _et al._2014, 2014]. Let us recall relevant definitions from automata theory (see [Kupferman _et al._2000] for details).
**Alternating tree automata.** An _alternating tree automaton_ (ATA) is a tuple \(\mathcal{A}=\langle\Sigma,\Delta,Q,\delta,q_{0},\aleph\rangle\), where \(\Sigma\), \(\Delta\), and \(Q\) are, respectively, non-empty finite sets of _input symbols_, _directions_, and _states_, \(q_{0}\in Q\) is an _initial state_, \(\aleph\) is an _acceptance condition_, and \(\delta:Q\times\Sigma\to\mathcal{B}^{+}(\Delta\times Q)\) is an _alternating transition function_ that maps each pair of a state and an input symbol to a positive Boolean combination over the set of propositions of the form \((d,q)\in\Delta\times Q\), called _moves_. A \(\Sigma\)-labeled tree is a pair \(\langle T,V\rangle\) where \(T\) is a tree and \(V:T\to\Sigma\) maps each node of \(T\) to a letter in \(\Sigma\). **Run.** A run of an ATA \(\mathcal{A}=\langle\Sigma,\Delta,Q,\delta,q_{0},\aleph\rangle\) on a \(\Sigma\)-labeled \(\Delta\)-tree \(\tau=\langle T,V\rangle\) is a \((\Delta\times Q)\)-tree \(R\) such that, for all nodes \(x\in R\), where \(x=\prod_{i=1}^{n}(d_{i},q_{i})\) and \(y=\prod_{i=1}^{n}d_{i}\) with \(n\in[0,\omega[\), it holds that (i) \(y\in T\) and (ii) there is a set of moves \(S\subseteq\Delta\times Q\) with \(S\models\delta(q_{n},V(y))\) such that \(x\cdot(d,q)\in R\) for all \((d,q)\in S\). _Alternating parity tree automata_ (APT) are alternating tree automata along with a _parity acceptance condition_ [Gradel _et al._2002]. We consider ATAs along with the parity acceptance condition (APT) \(\aleph=(F_{1},...,F_{k})\in(2^{Q})^{+}\) with \(F_{1}\subseteq...\subseteq F_{k}=Q\). A nondeterministic parity tree automaton (NPT) is a special case of APT in which each conjunction in the transition function \(\delta\) has exactly one move \((d,q)\) associated with each direction \(d\). **APT Acceptance.** An APT \(\mathcal{A}=\langle\Sigma,\Delta,Q,\delta,q_{0},\aleph\rangle\) accepts a \(\Sigma\)-labeled \(\Delta\)-tree \(\tau\) if and only if there exists a run \(R\) of \(\mathcal{A}\) on \(\tau\) such that all its infinite branches satisfy the acceptance condition \(\aleph\). By \(\mathcal{L}(\mathcal{A})\) we denote the language accepted by the APT \(\mathcal{A}\), that is, the set of trees \(\tau\) accepted by \(\mathcal{A}\). The emptiness problem for \(\mathcal{A}\) is to decide whether \(\mathcal{L}(\mathcal{A})=\emptyset\). **From \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) to APT.** We reuse the structure of the model-checking approach for \(\mathsf{SL}\)[Mogavero _et al._2014]. Precisely, given a CGS \(\mathcal{G}\), a state \(v\), and an \(\mathsf{SL}\)-sentence \(\varphi\), the procedure consists of building an NPT that is non-empty if \(\varphi\) is satisfied in \(\mathcal{G}\) at state \(v\) (Thm 5.8 [12]). As an intermediate step to obtain the NPT, the construction builds an APT \(\mathcal{A}\) that accepts a tree encoding of \(\mathcal{G}\) containing the information on an assignment \(\chi\) iff the CGS satisfies the formula of interest for \(\chi\). The NPT \(\mathcal{N}\) is obtained by using an APT direction projection with distinguished direction \(v\) on the APT \(\mathcal{A}\) (Thm 5.4 [12]). The size of the APT \(\mathcal{A}\) is polynomial in the size of \(\mathcal{G}\) and exponential in the number \(k\) of alternations of strategy quantifiers. Then, building the NPT \(\mathcal{N}\) and checking its emptiness requires an additional exponent on top of the number of alternations \(k\), which leads to a final complexity that is \((k+1)\)-Exptime-complete (and Ptime in the size of \(\mathcal{G}\)).
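One concrete way to represent the alternating transition function, used in the illustrative sketch below (our own, with assumed field names), is disjunctive normal form: \(\delta(q,\sigma)\) is a set of alternatives, each a set of moves, and a set of moves \(S\) satisfies \(\delta(q,\sigma)\) iff it contains all moves of some alternative.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, Tuple

State = str
Letter = str
Direction = str
Move = Tuple[Direction, State]
# Positive Boolean combination in DNF: a set of alternatives,
# each alternative being a set of moves that must all be taken.
PosBool = FrozenSet[FrozenSet[Move]]

@dataclass(frozen=True)
class ATA:
    letters: FrozenSet[Letter]
    directions: FrozenSet[Direction]
    states: FrozenSet[State]
    delta: Dict[Tuple[State, Letter], PosBool]
    initial: State
    parity: Tuple[FrozenSet[State], ...]   # F_1 subseteq ... subseteq F_k = states

def satisfies(moves: FrozenSet[Move], formula: PosBool) -> bool:
    """S |= delta(q, sigma): some alternative is fully contained in S."""
    return any(alt <= moves for alt in formula)

# delta(q0, 'a') = ((left, q1) AND (right, q1))  OR  (left, q2)
f: PosBool = frozenset({
    frozenset({("left", "q1"), ("right", "q1")}),
    frozenset({("left", "q2")}),
})
assert satisfies(frozenset({("left", "q2"), ("right", "q1")}), f)
assert not satisfies(frozenset({("right", "q1")}), f)
```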
For adapting this procedure to the model checking of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) with perfect recall, we need to unpack and extend the construction of the APT shown in Lemma 5.6 in [12], which we do in the rest of this section. We define a translation for each \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula \(\varphi\) to an APT \(\mathcal{A}\) that recognizes a tree encoding \(\tau\) of a CGS \(\mathcal{G}\), containing the information on the assignment \(\chi\), iff \([\![\varphi]\!]_{\chi}^{\mathcal{G},\,\mathsf{R}}(v_{\iota})\geq\vartheta\). Defining the appropriate transition function for \(\mathcal{A}\) follows the semantics of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) in the expected manner. The transitions involving the discounting operators need a careful treatment, as discounting formulas can take infinitely many satisfaction values. As for \(\mathsf{LTL}^{\text{disc}}[\mathcal{D}]\)[1], given a threshold \(\vartheta\) and a computation \(\pi\), when evaluating a discounted operator on \(\pi\), one can restrict attention to two cases: either the satisfaction value of the formula goes below \(\vartheta\), in which case this happens after a bounded prefix, or the satisfaction value always remains above \(\vartheta\), in which case we can replace the discounted operator with a Boolean one. As in [12], we use the concept of an encoding for a CGS assignment. First, let \(\mathsf{Val}_{\varphi}:=\text{free}(\varphi)\rightarrow\text{Ac}\). **Assignment-State Encoding.** Let \(\mathcal{G}\) be a CGS, \(v\in V\) be a state, and \(\chi\) be an assignment. Then, the assignment-state encoding for \(\chi\) is the \((\mathsf{Val}_{\varphi}\times V)\)-labeled \(V\)-tree \(\tau=\langle T,u\rangle\) such that \(T\) is the set of histories \(h\) of \(\mathcal{G}\) given \(\chi\) starting in \(v\), and \(u(h):=(f,q)\), where \(q\) is the last state in \(h\) and \(f:\text{free}(\varphi)\rightarrow\text{Ac}\) is defined by \(f(s):=\chi(s)(h)\) for each free variable \(s\in\text{free}(\varphi)\). **Lemma 1**.: _Let \(\mathcal{G}\) be a CGS, \(\varphi\) an \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula, and \(\vartheta\in[0,1]\) be a threshold. Then, there exists an APT \(\mathcal{A}_{\varphi,\vartheta}=\langle\mathsf{Val}_{\varphi}\times V,V,Q,\delta,q_{0},\aleph\rangle\) such that, for all states \(q\in Q\), and assignments \(\chi\), it holds that \([\![\varphi]\!]_{\chi}^{\mathcal{G},\,\mathsf{R}}(v)>\vartheta\) iff \(\tau\in\mathcal{L}(\mathcal{A}_{\varphi,\vartheta})\), where \(\tau\) is the assignment-state encoding for \(\chi\)._ Proof sketch.: The construction of the APT \(\mathcal{A}_{\varphi,\vartheta}\) is done recursively on the structure of the formula \(\varphi\). Let \(xcl(\varphi)\) be the extended closure of \(\varphi\), defined analogously to [1]. The state space \(Q\) consists of two types of states. Type-1 states are assertions of the form \((\psi>t)\) or \((\psi<t)\), where \(\psi\in xcl(\varphi)\) is not an \(\mathsf{SL}\) formula and \(t\in[0,1]\). Type-2 states correspond to \(\mathsf{SL}\) formulas. The precise definition of \(xcl(\varphi)\) and of Type-1 and Type-2 states is analogous to [1] and can be found in Appendix B. Let \(S\) be the set of Type-1 and Type-2 states for all \(\psi\in xcl(\varphi)\) and thresholds \(t\in[0,1]\). Then, \(Q\) is the subset of \(S\) constructed on-the-fly according to the transition function defined below.
The transition function \(\delta:(\mathsf{Val}_{\varphi}\times V)\rightarrow\mathcal{B}^{+}(V\times Q)\) is defined as follows. For Type-2 states, the transitions are as in the standard translation from \(\mathsf{SL}\) to APT [12]. For the other states, we define the transitions as follows. Let \((f,v)\in(\mathsf{Val}_{\varphi}\times V)\) and \(\oplus\in\{<,>\}\).
* \(\delta((p>t),(f,v))=\left[\begin{array}{ll}true&\text{if $p\in\ell(v)$ and $t<1$},\\ false&\text{otherwise}.\end{array}\right.\)
* \(\delta((p<t),(f,v))=\left[\begin{array}{ll}false&\text{if $p\in\ell(v)$ or $t=0$},\\ true&\text{otherwise}.\end{array}\right.\)
* \(\delta(((\exists s\psi)\oplus t),(f,v))=\bigvee_{c\in\mathsf{Ac}}\delta^{\prime}((\psi\oplus t),(f[s\mapsto c],v))\) where \(\delta^{\prime}\) is obtained by nondeterminizing the APT \(\mathcal{A}_{\psi,t}\), applying the classic transformation [13], which gives the equivalent NPT \(\mathcal{N}_{\psi,t}=\langle\mathsf{Val}_{\psi}\times V,V,Q^{\prime},\delta^{\prime},q^{\prime}_{0},\aleph^{\prime}\rangle\).
* \(\delta(((s,a)\psi\oplus t),(f,v))=\delta^{\prime}((\psi\oplus t),(f^{\prime},v))\) where \(f^{\prime}=f[t\mapsto f(s)]\) if \(t\in\text{free}(\psi)\), and \(f^{\prime}=f\) otherwise.
The remaining cases are a simple adaptation of the proof in [1] (Thm 1) to the input symbols \(\mathsf{Val}_{\varphi}\times V\). We provide more details of the proof in Appendix B. The initial state of \(\mathcal{A}_{\varphi,\vartheta}\) is \((\varphi>\vartheta)\). The accepting states are those of the form \((\psi_{1}\mathbf{U}\psi_{2}<t)\) for Type-1 states, as well as the accepting states that arise in the standard translation of Boolean \(\mathsf{SL}\) to APT for Type-2 states. While the construction as described above is infinite, only finitely many states are reachable from the initial state, and we can compute these states in advance. Using the threshold and the discounting behavior of the discounted-Until, we can restrict attention to a finite resolution of satisfaction values, enabling the construction of a finite automaton. Its size depends on the functions in \(\mathcal{D}\). Intuitively, the faster the discounting tends to 0, the fewer states there will be. Thus, the exact complexity of model checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) (which relies on the size of the APT) depends on two aspects. First, the alternation of quantifiers in the formula and, second, the type of discounting functions considered. In the specific setting where \(\mathcal{D}\) is composed of exponential-discounting functions (i.e., \(\mathcal{D}\subseteq\{d(j)=\lambda^{j}:\lambda\in(0,1)\cap\mathbb{Q}\}\)), the overall complexity remains as it is for \(\mathsf{SL}\). Exponential discounting functions are perhaps the most common class of discounting functions, as they describe many natural processes (e.g., temperature change and effective interest rate [12, 13]). **Theorem 2**.: _Assuming that functions in \(\mathcal{D}\) are exponential-discounting, model checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) with memoryful agents is \((k+1)\)-Exptime and \(k\)-Expspace w.r.t. the number \(k\) of quantifier alternations in the specification._ Proof sketch.: The model checking procedure from [12] is \((k+1)\)-Exptime-complete and \(k\)-Expspace w.r.t. the number \(k\) of quantifier alternations in the specification. Let \(\vartheta\in(0,1)\) be a threshold.
When discounting by an exponential-discounting function \(d(j)=\lambda^{j}\in\mathcal{D}\), the number of states in the APT constructed as per Lemma 1 is proportional to the maximal number \(j\) such that \(\lambda^{j}<\vartheta\), which is polynomial in the description length of \(\vartheta\) and \(\lambda\) [1]. ## 6 Conclusion and Discussion In this paper, we proposed Strategy Logic with discounting (\(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\)), which contains an operator that captures the idea that the longer it takes to fulfill a requirement, the smaller the satisfaction value is. This work extends the research on temporal and strategic reasoning in Game Theory. As advocated by Pauly and Wooldridge (2003), logics for strategic reasoning can have an important role in the specification and verification of game-theoretical problems and, in particular, problems related to Automated Mechanism Design (AMD). Indeed, recent works have proposed a new approach for AMD based on model-checking and synthesis from specifications in \(\mathsf{SL}[\mathcal{F}]\)[16, 17]. Remarkably, \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) requires less complicated machinery than \(\mathsf{SL}[\mathcal{F}]\), as it is defined over classical concurrent game structures. More importantly, it brings a new dimension for reasoning about mechanisms that take into consideration how events are affected by how long in the future they occur. There are several interesting directions for future work, including synthesis from \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\)-specifications as well as the setting of imperfect information. With \(\mathsf{SL}\), imperfect information already yields undecidability, but known tractable fragments exist [1, 1]. We will investigate them in the case of \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\). ## Appendix A Model checking with memoryless strategies **Theorem 1**.: _Assuming that functions in \(\mathcal{D}\) can be computed in polynomial space, model checking \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) with memoryless agents is Pspace-complete._ Proof.: The lower bound is inherited from \(\mathsf{SL}\)[1]4, which is captured by \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\). For the upper bound, we first show that each recursive call only needs at most polynomial space. Most cases are treated analogously to the proof of Theorem 2 in [16]. Footnote 4: The proof provided in [1] (Thm. 1) considers \(\mathsf{SL}\) with epistemic operators and imperfect information, but these operators do not affect the complexity results. First, observe that each assignment \(\chi\) can be stored in space \(O((|\text{free}(\varphi)|+|\mathsf{Ag}|)\cdot|V|\cdot\log|\mathrm{Ac}|)\). Next, for the base case, it is clear that \([\![p]\!]_{\chi}^{\mathcal{G},\,\mathsf{r}}(v)\) can be computed in constant space. For strategy quantification \([\![\exists s\,.\,\varphi]\!]_{\chi}^{\mathcal{G},\,\mathsf{r}}(v)\), besides the recursive call to \([\![\varphi]\!]_{\chi[s\mapsto\sigma]}^{\mathcal{G},\,\mathsf{r}}(v)\), it suffices to enumerate the memoryless strategies \(\sigma\in\textit{Str}^{\,\mathsf{r}}\) one by one, keeping only the current strategy and the maximum value computed so far, which requires polynomial space; bindings and Boolean operators are handled by direct recursive calls. The remaining temporal cases, in particular the discounted until, are treated as in the proof sketch of Section 5.1, and the total space used remains polynomial in the size of the input.
## Appendix B Model checking with perfect recall **Proposition 1**.: _For every \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula \(\varphi\) there exist \(\mathsf{SL}\) formulas \(\varphi^{+}\) and \(\varphi^{<1}\) such that, for every CGS \(\mathcal{G}\), assignment \(\chi\), and state \(v\):_ 1. _If_ \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,R}(v)>0\) _then_ \(\mathcal{G},\chi,v\models\varphi^{+}\)_, and if_ \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,R}(v)<1\) _then_ \(\mathcal{G},\chi,v\models\varphi^{<1}\)_._ 2. _If_ \(\mathcal{G},\chi,v\models\varphi^{+}\) _then_ \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,R}(v)>0\)_, and if_ \(\mathcal{G},\chi,v\models\varphi^{<1}\) _then_ \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,R}(v)<1\)_._ Henceforth, given an \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula \(\varphi\), we refer to \(\varphi^{+}\) as in Proposition 1. Before detailing the proof of the model-checking construction, we introduce some additional definitions. For a function \(f:\mathbb{N}\rightarrow[0,1]\) and for \(k\in\mathbb{N}\), we define \(f^{+k}:\mathbb{N}\rightarrow[0,1]\) as follows. For every \(i\in\mathbb{N}\) we have that \(f^{+k}(i)=f(i+k)\). Let \(\varphi\) be an \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula over \(AP\). We define the _extended closure_ of \(\varphi\), denoted \(xcl(\varphi)\), to be the set of all the formulas \(\psi\) of the following _classes_: 1. \(\psi\) is a subformula of \(\varphi\). 2. \(\psi\) is a subformula of \(\theta^{+}\) or \(\neg\theta^{+}\), where \(\theta\) is a subformula of \(\varphi\). 3. \(\psi\) is of the form \(\theta_{1}\mathbf{U}_{d^{+k}}\theta_{2}\) for \(k\in\mathbb{N}\), where \(\theta_{1}\mathbf{U}_{d}\theta_{2}\) is a subformula of \(\varphi\). **Lemma 1**.: _Let \(\mathcal{G}\) be a CGS, \(\varphi\) an \(\mathsf{SL}^{\text{disc}}[\mathcal{D}]\) formula, and \(\vartheta\in[0,1]\) be a threshold. Then, there exists an APT \(\mathcal{A}_{\varphi,\vartheta}=\langle\mathsf{Val}_{\varphi}\times V,V,Q,\delta,q_{0},\aleph\rangle\) such that, for all states \(q\in Q\), and assignments \(\chi\), it holds that \(\llbracket\varphi\rrbracket_{\chi}^{\mathcal{G},\,R}(v)>\vartheta\) iff \(\tau\in\mathcal{L}(\mathcal{A}_{\varphi,\vartheta})\), where \(\tau\) is the assignment-state encoding for \(\chi\)._ Proof sketch.: The construction of the APT \(\mathcal{A}_{\varphi,\vartheta}\) is done recursively on the structure of the formula \(\varphi\). The state space \(Q\) consists of two types of states. Type-1 states are assertions of the form \((\psi>t)\) or \((\psi<t)\), where \(\psi\in xcl(\varphi)\) is of Class 1 or 3 and \(t\in[0,1]\). Type-2 states correspond to \(\mathsf{SL}\) formulas of Class 2. Let \(S\) be the set of Type-1 and Type-2 states for all \(\psi\in xcl(\varphi)\) and thresholds \(t\in[0,1]\). Then, \(Q\) is the subset of \(S\) constructed on-the-fly according to the transition function defined below. We later show that \(Q\) is indeed finite. The transition function \(\delta:(\text{Val}_{\varphi}\times V)\rightarrow\mathcal{B}^{+}(V\times Q)\) is defined as follows. For Type-2 states, the transitions are as in the standard translation from \(\mathsf{SL}\) to APT [10]. For the other states, we define the transitions as follows.
Let \((f,v)\in(\text{Val}_{\varphi}\times V)\), \(\oplus\in\{<,>\}\), and \(\pi=\text{Out}(v,\chi)\). * \(\delta((true>t),(f,v))=\begin{cases}true&\text{ if }t<1\\ \text{\it false}&\text{ if }t=1\end{cases}\) * \(\delta((\text{\it false}>t),(f,v))=\text{\it false}\) * \(\delta((true<t),(f,v))=\begin{cases}true&\text{ if }t>0\\ \text{\it false}&\text{ if }t=0.\end{cases}\) * \(\delta((p>t),(f,v))=\begin{cases}true&\text{ if }p\in\ell(v)\text{ and }t<1\\ \text{\it false}&\text{ otherwise.}\end{cases}\) * \(\delta((p<t),(f,v))=\begin{cases}\text{\it false}&\text{ if }p\in\ell(v)\text{ or }t=0,\\ true&\text{ otherwise.}\end{cases}\) * \(\delta((\psi_{1}\vee\psi_{2}\oplus t),(f,v))=\delta((\psi_{1}\oplus t),(f,v) )\vee\delta((\psi_{2}\oplus t),(f,v))\) * \(\delta((\exists s\psi)\oplus t),(f,v))=\bigvee_{c\in\text{Ac}}\delta^{\prime} (\psi\oplus t,(f[s\mapsto c],v))\) where \(\delta^{\prime}_{\psi}\) is obtained by nondeterminizing the APT \(\mathcal{A}_{\psi,t}\), by applying the classic transformation [12] which gives the equivalent NPT \(\mathcal{N}_{\psi,t}=\langle\!\langle\text{Val}_{\psi}\times V,V,Q^{\prime}, \delta^{\prime},q^{\prime}_{0},\aleph^{\prime}\rangle\) * \(\delta((s,a)\psi\oplus t),(f,v))=\delta^{\prime}((\psi\oplus t),(f^{\prime},v))\) where \(f^{\prime}=f[t\mapsto f(s)]\) if \(t\in\text{free}(\psi)\), and \(f^{\prime}=f\) otherwise * \(\delta((\neg\psi\oplus t),(f,v))=\delta^{\prime}((\psi\oplus t),(f,v))\) where \(\delta^{\prime}\) is obtained by dualizing the automaton \(\mathcal{A}_{\psi,t}\)[12], which gives the automata \(\bar{\mathcal{A}}_{\psi,t}=\langle\!\langle\text{Val}_{\psi}\times V,V,Q^{ \prime},\delta^{\prime},q^{\prime}_{0},\aleph^{\prime}\rangle\) * \(\delta((\mathbf{X}\psi_{1}>t),(f,v))=\delta((\psi_{1}>t),(f,\pi_{0}))\) * \(\delta((\mathbf{X}\psi_{1}<t),(f,v))=\delta((\psi_{1}<t),(f,\pi_{0}))\) * \(\delta((\psi_{1}\mathbf{U}\psi_{2}>t),(f,v))\!=\!\begin{cases}\delta_{-}& \text{ if }0<t<1\\ \text{\it false}&\text{ if }t\geq 1\\ \text{\it false}&\text{ if }t=0\end{cases}\) where \(\delta_{>}=\delta((\psi_{2}>t),(f,v))\vee[\delta((\psi_{1}>t),(f,v))\wedge( \psi_{1}\mathbf{U}\psi_{2}>t)]\) and \(\delta_{0}=\delta(((\psi_{1}\mathbf{U}\psi_{2})^{+}),(f,v))\) * \(\delta((\psi_{1}\mathbf{U}\psi_{2}<t),(f,v))\!=\!\begin{cases}\delta^{\prime}& \text{ if }0<t\leq 1\\ \text{\it true}&\text{ if }t>1\\ \text{\it false}&\text{ if }t=0\end{cases}\) where \(\delta_{<}=\delta((\psi_{2}<t),(f,v))\wedge[\delta((\psi_{1}<t),(f,v))\vee( \psi_{1}\mathbf{U}\psi_{2}<t)]\) * \(\delta((\psi_{1}\mathbf{U}_{d}\psi_{2}>t),(f,v))\!=\!\begin{cases}\delta_{-}& \text{ if }0<t\frac{t}{d(0)}<1\\ \text{\it false}&\text{ if }\frac{t}{d(0)}\geq 1\\ \text{\it false}&\text{ if }\frac{t}{d(0)}=0\end{cases}\) where \(\delta_{>}=\delta((\psi_{2}>\frac{t}{d(0)}),(f,v))\vee[\delta((\psi_{1}> \frac{t}{d(0)}),\\ (f,v))\wedge(\psi_{1}\mathbf{U}_{d^{+}}\psi_{2}>t)]\) and \(\delta(((\psi_{1}\mathbf{U}_{d}\psi_{2})^{+}),(f,v))\) * \(\delta((\psi_{1}\mathbf{U}_{d}\psi_{2}<t),(f,v))\!=\!\begin{cases}\delta_{-}& \text{ if }0<\frac{t}{d(0)}\leq 1\\ true&\text{ if }\frac{t}{d(0)}>1\\ false&\text{ if }\frac{t}{d(0)}=0\end{cases}\) where \(\delta_{<}=\delta((\psi_{2}<\frac{t}{d(0)}),(f,v))\vee[\delta((\psi_{1}> \frac{t}{d(0)}),\\ (f,v))\wedge(\psi_{1}\mathbf{U}_{d^{+}}\psi_{2}<t)]\) and \(\delta(((\psi_{1}\mathbf{U}_{d}\psi_{2})^{+}),(f,v))\) * \(\delta((\psi_{1}\mathbf{U}_{d}\psi_{2}<t),(f,v))\!=\!\begin{cases}\delta_{-}& \text{ if }0<\frac{t}{d(0)}\leq 1\\ true&\text{ if }\frac{t}{d(0)}>1\\ false&\text{ if 
## Acknowledgments

We thank the ANR project AGAPE ANR-18-CE23-0013, the PNNR FAIR project, the InDAM project "Strategic Reasoning in Mechanism Design", the PRIN 2020 Project RIPER, and the EU ICT-48 2020 project TAILOR (No. 952215).
2305.12539
Value-at-Risk-Based Portfolio Insurance: Performance Evaluation and Benchmarking Against CPPI in a Markov-Modulated Regime-Switching Market
Designing dynamic portfolio insurance strategies under market conditions switching between two or more regimes is a challenging task in financial economics. Recently, a promising approach employing the value-at-risk (VaR) measure to assign weights to risky and riskless assets has been proposed in [Jiang C., Ma Y. and An Y. "The effectiveness of the VaR-based portfolio insurance strategy: An empirical analysis" , International Review of Financial Analysis 18(4) (2009): 185-197]. In their study, the risky asset follows a geometric Brownian motion with constant drift and diffusion coefficients. In this paper, we first extend their idea to a regime-switching framework in which the expected return of the risky asset and its volatility depend on an unobservable Markovian term which describes the cyclical nature of asset returns in modern financial markets. We then analyze and compare the resulting VaR-based portfolio insurance (VBPI) strategy with the well-known constant proportion portfolio insurance (CPPI) strategy. In this respect, we employ a variety of performance evaluation criteria such as Sharpe, Omega and Kappa ratios to compare the two methods. Our results indicate that the CPPI strategy has a better risk-return tradeoff in most of the scenarios analyzed and maintains a relatively stable return profile for the resulting portfolio at the maturity.
Peyman Alipour, Ali Foroush Bastani
2023-05-21T18:48:58Z
http://arxiv.org/abs/2305.12539v1
Value-at-Risk-Based Portfolio Insurance: Performance Evaluation and Benchmarking Against CPPI in a Markov-Modulated Regime-Switching Market ###### Abstract Designing dynamic portfolio insurance strategies under market conditions switching between two or more regimes is a challenging task in financial economics. Recently, a promising approach employing the value-at-risk (VaR) measure to assign weights to risky and riskless assets has been proposed in [Jiang C., Ma Y. and An Y. "The effectiveness of the VaR-based portfolio insurance strategy: An empirical analysis", International Review of Financial Analysis 18(4) (2009): 185-197]. In their study, the risky asset follows a geometric Brownian motion with constant drift and diffusion coefficients. In this paper, we first extend their idea to a regime-switching framework in which the expected return of the risky asset and its volatility depend on an unobservable Markovian term which describes the cyclical nature of asset returns in modern financial markets. We then analyze and compare the resulting VaR-based portfolio insurance (VBPI) strategy with the well-known constant proportion portfolio insurance (CPPI) strategy. In this respect, we employ a variety of performance evaluation criteria such as Sharpe, Omega and Kappa ratios to compare the two methods. Our results indicate that the CPPI strategy has a better risk-return tradeoff in most of the scenarios analyzed and maintains a relatively stable return profile for the resulting portfolio at the maturity. keywords: Finance, Constant Proportion Portfolio Insurance, Value at Risk, Regime-Switching, Omega Performance Measure. + Footnote †: journal: ## 1 Introduction Portfolio insurance products are popular structured investment tools widely used by private and institutional investors, providing their holders with capital protection in down turning markets, while allowing benefits from upside market potentials (see e.g. [38, 41, 50] and the many references therein). The first milestone research in this direction is due to Leland and Rubinstein [39] in 1976 who proposed a synthetically constructed put option by trading on the underlying risky portfolio and a risk-free asset to dynamically hedge the risk of issuer/guarantor liability. At the same time, Brennan and Schwartz [15], studying equity-linked life insurance policies guaranteeing a minimum return, reached a similar result by using the then new arbitrage-free pricing methodology of Black and Scholes [13] and Merton [44]. These ideas were culminated in option-based portfolio insurance (OBPI) strategies which invest the initial endowment in a risky reference portfolio covered by a put option with a strike chosen to be proportional to the guaranteed amount [6]. In a parallel line of research, Perold [46] (see also Perold and Sharpe [47] and Black and Jones [11]) introduced an alternative dynamic trading strategy, called constant proportion portfolio insurance (CPPI), based on continuous rebalancing of a portfolio containing a safe asset (e.g. treasury bills) and a risky one (e.g. a financial index) in response to fluctuations in market conditions. By choosing a floor value, \(F_{T}\), as the minimum acceptable portfolio level at the maturity, \(T\), at each rebalancing time, \(t\), the difference between the portfolio value, \(V_{t}\) and the (discounted) floor is computed as \(C_{t}=V_{t}-F_{T}e^{-r(T-t)}\) (called the _cushion_) and the _exposure_ to the risky asset is determined by \(E_{t}=mC_{t}\) in which \(m\) is called the _multiple_. 
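As a quick worked example (the numbers here are ours, chosen only for illustration): with \(V_{t}=100\), a guarantee \(F_{T}=100\), \(r=4\%\), one year to maturity and \(m=4\), \[F_{T}e^{-r(T-t)}\approx 96.08,\qquad C_{t}=V_{t}-F_{T}e^{-r(T-t)}\approx 3.92,\qquad E_{t}=mC_{t}\approx 15.68.\]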
The excess will naturally be invested in the riskless asset. It is shown (see e.g. Black and Perold [12]) that in the absence of borrowing constraints and transaction costs, CPPI is a special case of the HARA utility-maximizing rules that have appeared in the literature of continuous-time asset allocation (see e.g. Merton [43]). For more details on the basic CPPI model and its extensions, see e.g. Hirsa [32], Boulier and Kanniganiti [14], Temocin, et al. [54], Bertrand and Prigent [10], Ben Ameur and Prigent [1, 2] among many other references. When the basic CPPI strategy is exploited with a constant multiple during the holding-period, some inevitable shortcomings will arise due mainly to ignoring the investor's beliefs and risk preferences (see e.g. Hakanoglu et al. [28]). This drawback has led the researchers to the family of dynamic proportion portfolio insurance (DPPI) strategies which basically adopt the choice of the multiple (and so the risk exposure) to the volatility of market prices (see e.g. Chen and Chang [19], Chen et al. [20], Hamidi et al. [30]). The basic idea is to vary the multiple in such a way that some "guarantee condition" at the maturity is satisfied with a given probability for specified market conditions. This condition is closely related to the probability of violating the floor protection widely known in the literature as the _gap risk_ (see e.g. Jessen [1, 33]). Value-at-Risk (VaR) based portfolio insurance (VBPI) is a new approach to dynamic asset allocation which is based on the portfolio's VaR concept (see e.g. [7] for a general overview of VaR-based risk management strategies). For a given time horizon, \(t\) and confidence level, \(p\), the value at risk is the loss in market value over the time horizon \(t\) that is exceeded with probability \(1-p\). In this strategy, the insured portfolio is constructed and rebalanced frequently in such a way that the portfolio level at each time step exceeds the floor at a given confidence level [34]. Based on the fact that this approach targets the gap risk faced by the insurer, it leads naturally to a path-dependent portfolio strategy consistent with the performance measure used to evaluate the insurance strategy. It could also be viewed as a generalized CPPI strategy which provides the buyer with additional flexibility to benefit more from upward market movements while limiting the potential loss from downward moves. While the basic CPPI strategy has been proposed and analyzed mainly for diffusion-based dynamics of the underlying risky portfolio, there have been some efforts in the literature to extend this methodology to more complex situations in which the underlying assete follows a dynamic process such as jump-diffusion (see e.g. [23, 18]), regime-switching diffusion (see e.g. [27]) and regime-switching exponential Levy model (see e.g. [55]). However, a systematic study in the literature is still lacking for the extension of the VaR-based DPPI strategies to such families of asset return distributions. As a first step to fill this gap, we consider here a continuous-time regime-switching market in which the model parameters switch from one regime to another according to an unobservable Markov process (see e.g. [24] and the many references therein). These models provide a natural way to capture discrete shifts in market behavior in an efficient and flexible way. We compare the performance of constrained CPPI and VBPI strategies in this case to demonstrate the effect of regime shifts. 
In this respect, we perform a Monte-Carlo simulation study by generating sample paths from the underlying asset's risky dynamics and show that the CPPI strategy has a better risk-return tradeoff in most of the scenarios examined and maintains a relatively stable return profile for the resulting portfolio at the maturity. This complements the available results in the literature of portfolio insurance which claim the effectiveness of CPPI-based strategies in the presence of realistic circumstances and incompleteness assumptions on the market (see, e.g. [48]). The structure of this paper is as follows: In Section 2, we provide the necessary background material about regime-switching diffusion processes. Section 3 in concerned with the details of stylized and constrained CPPI strategies when the underlying risky asset follows a regime-switching geometric Brownian motion process. In Section 4, we first present the details of our proposed VaR-based strategy for the regime-switching case and then demonstrate the required computational procedure to estimate the VaR quantity needed in our proposed algorithm. Section 5 is devoted to introduce the risk-adjusted performance measures used in this paper and also the downside risk measures employed to evaluate the possible pitfalls of the proposed strategy. In the remainder and Section 6, we present the details of our comprehensive numerical experiments to validate and benchmark the VaR-based strategy against the well-known CPPI method. We conclude the paper by commenting on some possible research directions in this field. ## 2 The Financial Market Model In the sequel, we consider an investment horizon of \([0,T]\) and assume that the value of the riskless asset, denoted by \(B\), grows with a constant risk-free rate, \(r\) as \[dB_{t}=rB_{t}dt. \tag{2.1}\] On the other hand, the market value of the risky asset, denoted by \(S\), is given by a regime-switching geometric Brownian motion (GBM) of the form \[dS_{t}=\mu_{\alpha_{t}}S_{t}dt+\sigma_{\alpha_{t}}S_{t}dW_{t}, \tag{2.2}\] with a positive initial value, \(S_{0}\). In the above equation, \(W=\left(W_{t}\right)_{0\leqslant t\leqslant T}\) is a standard Wiener process defined on a complete filtered probability space, \(\left(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{0\leqslant t\leqslant T},\mathbb{ P}\right)\) and \(\mu_{\alpha_{t}}\) and \(\sigma_{\alpha_{t}}\) are the drift and diffusion terms depending on a continuous-time stationary Markov process, \(\left(\alpha_{t}\right)_{0\leqslant t\leqslant T}\), independent of \(W\). The process, \(\left(\alpha_{t}\right)_{0\leqslant t\leqslant T}\), takes values in the set \(\mathcal{H}=\{1,2,....H\}\) where each element represents a possible economic or financial regime or state of the world. The generator of this Markov chain is given by the matrix \(Q=\left(q_{ij}\right)_{H\times H}\) in which \(q_{ij}\geqslant 0\) for all \(i\neq j\) and \(\sum_{j=1}^{H}q_{ij}=0,\ i\in\mathcal{H}\). The transition probability matrix could be obtained as \[P(t,s)=\exp(Q(s-t)), \tag{2.3}\] for each \(s,t\) (\(0<s\leq t\)) with the elements \[p_{ij}(t,s)=\mathbb{P}(\alpha_{s}=j|\alpha_{t}=i),\quad i,j\in\mathcal{H}. \tag{2.4}\] The probability of being in a specific state at time \(t\) will be denoted by \(p_{i}(t)\) and is given by \[p_{i}(t)=\mathbb{P}(\alpha_{t}=i)=\sum_{k=1}^{H}p_{k}(0)p_{ki}(0,t),\quad \forall i\in\mathcal{H}. 
\tag{2.5}\] Also the stationary transition probabilities corresponding to the Markov process are given by \[p_{i}=\lim_{t\rightarrow\infty}p_{i}(t),\quad\forall i\in\mathcal{H}. \tag{2.6}\] In the special case where the continuous-time Markov chain, \(\alpha_{t}\), contains only two states (e.g. state 1 denoting a stable low-volatility regime and state 2 denoting a more unstable high volatility regime), the matrix \(Q\) could be written as \[Q=\left(\begin{array}{cc}-q_{11}&q_{11}\\ q_{22}&-q_{22}\end{array}\right), \tag{2.7}\] with positive \(q_{ij}\)s and the stationary probabilities are obtained as \[p_{1}=\frac{q_{11}}{q_{11}+q_{22}},\ \ \ \ \ p_{2}=\frac{q_{22}}{q_{11}+q_{22}}. \tag{2.8}\] In the remainder, we describe the details of CPPI and VBPI strategies analysed in this paper. ## 3 Stylized Constant Proportion Portfolio Insurance CPPI is a dynamic self-financing strategy in which positions in risky and risk free assets are rebalanced dynamically so that the terminal value of the portfolio lies above a guaranteed level, \(F_{T}\) (the floor), which is usually given as a percentage, \(\pi\) (\(0\leq\pi\leq 1\)), of the initial investment \[F_{T}=\pi V_{0}, \tag{3.1}\] and the value of the floor at any given time, \(t\in[0,T]\), is obtained as \[F_{t}=e^{-r(T-t)}F_{T}. \tag{3.2}\] The difference between the market value of the portfolio, \(V_{t}^{\rm CPPI}\) and the floor is called the cushion and is denoted by \(C_{t}\). In the standard CPPI strategy, there is no restriction on the risky part of the portfolio1, but to apply more realistic conditions, constraints are usually imposed on the cushion to prohibit short-selling of the risky asset (see e.g. [22]). Under these conditions, the cushion value at any time \(t\in[0,T]\) is given by Footnote 1: The standard CPPI strategy allows the exposure to be leveraged at any level i.e., there is no constraint on the borrowing. \[C_{t}=(V_{t}^{\rm CPPI}-F_{t})^{+}, \tag{3.3}\] and the total amount invested in the risky asset (which is called the exposure) is obtained by multiplying a constant coefficient, \(m\), in the cushion value \[E_{t}=mC_{t}. \tag{3.4}\] It is obvious that higher multiples will result in more profits from stock price increases. Nevertheless, this will also cause faster convergence of the portfolio value to the floor in the case of a dramatic decrease in the stock prices. Note also that different values of the parameter, \(m\), change the behavior of the payoff function and \(m>1\) provides a convex payoff structure. From (3.3) and (3.4), it is deduced that if \(V_{t}^{\rm CPPI}\leqslant F_{t}\), then the exposure will be zero and so the entire portfolio value will be invested in the risk-free asset (see e.g. [23] for more details). It could also easily be seen that the value of the portfolio at any time is equal to the current floor plus the cushion value. As cushion is non-negative, the value of the CPPI portfolio is always above the current floor. ### Constrained CPPI In realistic situations, we usually impose a constraint on the portfolio by limiting the exposure to the bounds \[0<E_{t}<pV_{t}{}^{\rm CPPI},\] in which \(p>0\) is a given constant. So the value of the exposure in the constrained CPPI strategy will be obtained as \[E_{t}=\min\{mC_{t},pV_{t}{}^{\rm CPPI}\}. 
\tag{3.5}\] Note that in the constrained CPPI case, the portfolio composition is path-dependent (see [14]) and the fraction of the portfolio invested in the risk-free asset is given by \[B_{t}=V_{t}{}^{\rm CPPI}-E_{t}, \tag{3.6}\] and the evolution of the portfolio value is described by the following stochastic differential equation (SDE) (see e.g. [14]) \[dV_{t}{}^{\rm CPPI}=E_{t}\frac{dS_{t}}{S_{t}}+(V_{t}{}^{\rm CPPI}-E_{t})\frac{ dB_{t}}{B_{t}}. \tag{3.7}\] As mentioned above, we assume that the dynamics of the risky part of the portfolio is given by a regime-switching GBM given by (2.2) and so substituting it in (3.7), we obtain the equation \[dV_{t}{}^{\rm CPPI}=E_{t}\Big{(}(\mu_{\alpha_{t}}-r)d_{t}+\sigma_{\alpha_{t}} dW_{t}\Big{)}+rV_{t}{}^{\rm CPPI}d_{t}. \tag{3.8}\] Based on restrictions on the exposure given by (3.3) and (3.5), the dynamics of the CPPI portfolio could be re-written as \[dV_{t}{}^{\rm CPPI}=\left\{\begin{array}{ll}rV_{t}{}^{\rm CPPI}d_{t},&C_{t} \leqslant 0,\\ mC_{t}((\mu_{\alpha_{t}}-r)d_{t}+\sigma_{\alpha_{t}}dW_{t})+V_{t}{}^{\rm CPPI}rd _{t},&0<mC_{t}<pV_{t}{}^{\rm CPPI},\\ pV_{t}{}^{\rm CPPI}((\mu_{\alpha_{t}}-r)d_{t}+\sigma_{\alpha_{t}}dW_{t})+V_{t}{ }^{\rm CPPI}rd_{t},&pV_{t}{}^{\rm CPPI}\leqslant mC_{t}.\end{array}\right. \tag{3.9}\] By applying Ito's lemma (see e.g. [45]) and discretizing the dynamics (3.9) at a set of discrete nodes \(t_{n}=n\Delta t\), the solution of this SDE could be approximated by a discrete process of the form \[V_{t_{n+1}}^{\rm CPPI}=V_{t_{n}}^{\rm CPPI}+\left\{\begin{array}{ll}rV_{t_{ n}}^{\rm CPPI}\Delta t,&C_{t_{n}}\leqslant 0,\\ C_{t_{n}}(m(\mu_{\alpha_{t_{n}}}-r)+r)\Delta t+m\sigma_{\alpha_{t_{n}}}\Delta W _{t_{n}},&0<mC_{t_{n}}<pV_{t_{n}}^{\rm CPPI},\\ C_{t_{n}}(p(\mu_{\alpha_{t_{n}}}-r)+r)\Delta t+p\sigma_{\alpha_{t_{n}}}\Delta W _{t_{n}},&pV_{t_{n}}^{\rm CPPI}\leqslant mC_{t_{n}}.\end{array}\right. \tag{3.10}\] ## 4 VaR-Based Portfolio Insurance Similar to the CPPI method, VBPI is a dynamic trading strategy which rebalances the portfolio composition according to the Value-at-Risk (VaR) concept. As the value at risk measure concentrates on the downward tail of the return distribution, the VBPI strategy could address the gap risk by allocating the funds between risky and risk-less assets in such a way as the maximum loss is equated to the VaR at a specified confidence level. This strategy gives a specific discipline for rebalancing portfolios such that the gap risk is controlled. In the remainder, we assume that the risky part of the portfolio follows a regime-switching geometric Brownian motion as given by (2.2). Let \(0\leq w_{t}\leq 1\) be the fractional allocation of funds to the risk-free asset which will result in \(\beta_{t}=w_{t}\frac{V_{t}}{B_{t}}\) as the number of riskless assets and \(\eta_{t}=(1-w_{t})\frac{V_{t}}{S_{t}}\) as the number of risky assets in the portfolio. The value of riskless assets grows at a constant risk-free rate according to \[B_{t}=B_{0}\exp(rt), \tag{4.1}\] and the value of the risky portfolio will follow a regime-switching dynamics given by \[S_{t}=S_{0}\exp\bigg{[}\sum_{i=1}^{H}\int_{0}^{t}\bigg{(}\mu_{i}-\frac{\sigma_ {i}^{2}}{2}\bigg{)}\delta(i,\alpha_{s})ds+\sum_{i=1}^{H}\int_{0}^{t}\sigma_{i} \delta(i,\alpha_{s})dW_{s}\bigg{]}. \tag{4.2}\] In the above expression, \(\delta(i,\alpha_{s})\) is an indicator function being equal to 1 if we are in the state \(\alpha_{s}=i\) and 0, otherwise. 
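To make these dynamics concrete, the following Python sketch simulates the two-state regime-switching GBM of (2.2) and applies the constrained CPPI rule in an Euler-discretized form following (3.9). It is a minimal illustration only: the parameter values are placeholders and the code is not the authors' implementation.

```python
import numpy as np

def simulate_cppi_path(T=1.0, n_steps=260, V0=100.0, pi_frac=1.0, r=0.04,
                       m=4.0, p=1.0, mu=(0.14, -0.01), sigma=(0.16, 0.20),
                       Q=((-0.25, 0.25), (0.25, -0.25)), seed=0):
    """One constrained-CPPI path under a two-state regime-switching GBM (sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    F_T = pi_frac * V0                 # guaranteed amount at maturity, Eq. (3.1)
    V = V0
    state = 0                          # start in regime 1
    Q = np.asarray(Q, dtype=float)

    for n in range(n_steps):
        t = n * dt
        # Regime switch over [t, t+dt): for small dt, P(switch) ~ -Q[state, state] * dt
        if rng.random() < -Q[state, state] * dt:
            state = 1 - state
        dW = rng.normal(0.0, np.sqrt(dt))

        floor = F_T * np.exp(-r * (T - t))       # discounted floor, Eq. (3.2)
        cushion = max(V - floor, 0.0)            # Eq. (3.3)
        exposure = min(m * cushion, p * V)       # constrained exposure, Eq. (3.5)

        # Euler step of dV = E((mu - r) dt + sigma dW) + r V dt, cf. Eq. (3.9)
        V += exposure * ((mu[state] - r) * dt + sigma[state] * dW) + r * V * dt
    return V

terminal = [simulate_cppi_path(seed=s) for s in range(1000)]
print(np.mean(terminal), np.std(terminal))
```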
So, the value of the portfolio at time, \(t\), is given by \[V_{t}^{\rm VBPI}=\beta_{t}B_{t}+\eta_{t}S_{t}=w_{t}V_{0}\exp(rt)+(1-w_{t})V_{ 0}\bigg{(}\frac{S_{t}}{S_{0}}\bigg{)}. \tag{4.3}\] According to the dynamic VaR-based approach of Jiang et al. [34], our goal is to adjust the weights in such a way that the constraint \[P(V_{t}^{\rm VBPI}\leq F_{t})=\alpha, \tag{4.4}\] will hold in all rebalancing times. By substituting (4.3) in (4.4) and denoting the log-return process by \(R_{t}=\ln\big{(}\frac{S_{t}}{S_{0}}\big{)}\), we could show that (4.4) is equivalent to \[P\bigg{(}\frac{S_{t}}{S_{0}}\leq\frac{F_{t}-w_{t}V_{0}\exp(rt)}{(1-w_{t})V_{0 }}\bigg{)}=P\bigg{(}R_{t}\leq\ln\bigg{(}\frac{F_{t}-w_{t}V_{0}\exp(rt)}{(1-w_{ t})V_{0}}\bigg{)}\bigg{)}=\alpha. \tag{4.5}\] Equation (4.5) could be re-written as \[\int_{-\infty}^{\ln\big{(}\frac{F_{t}-w_{t}V_{0}exp(rt)}{(1-w_{t})V_{0}}\big{)} }sf_{R_{t}}(s)ds=\alpha, \tag{4.6}\] in which \(f_{R_{t}}(\cdot)\) is the probability density function (pdf) of the log-return process, \(R_{t}\). According to (4.6), the upper limit in the integral term is the VaR of the random variable \(R_{t}\) at confidence level, \(\alpha\), which is defined as the possible maximum loss of a portfolio over a given time horizon within a fixed confidence level and is given by \[\ln\Big{(}\frac{F_{t}-w_{t}V_{0}\exp(rt)}{(1-w_{t})V_{0}}\Big{)}=q_{t}. \tag{4.7}\] So the weight of the risky asset will be obtained as \[w_{t}=\frac{F_{t}-V_{0}\exp(q_{t})}{V_{0}\Big{(}\exp(rt)-\exp(q_{t})\Big{)}}. \tag{4.8}\] In the remainder of this section, we describe an efficient way to calculate the value of \(q_{t}\) at each rebalancing time. ### Details of Calculating Value at Risk In order to calculate the VaR in (4.8) above, we need to know the distribution of \(R_{t}\). As described in Hainaut [27], the characteristic function of \(R_{t}\), denoted as \(\varphi_{t}(\cdot)\) and defined by \[\varphi_{t}(\vartheta)=\mathbb{E}\big{(}e^{i\vartheta R_{t}}\big{)}=\int_{- \infty}^{+\infty}e^{i\vartheta s}f_{R_{t}}(s)ds, \tag{4.9}\] could be obtained analytically and so using the characteristic function, we could determine the probability density function of \(f_{R_{t}}\) in a simple and efficient way. Substituting \(R_{t}=\ln\Big{(}\frac{S_{t}}{S_{0}}\Big{)}\) in (4.9), we could write \[\varphi_{t}(\vartheta)=\mathbb{E}\bigg{(}\Big{(}\frac{S_{t}}{S_{0}}\Big{)}^{ i\vartheta}\bigg{)}. \tag{4.10}\] In order to obtain an analytic expression for \(\varphi_{t}(\vartheta)\), we need the following result. **Proposition 4.1** (Hainaut [27]).: _Let the matrix, \(B_{\gamma}\), be defined by_ \[B_{\gamma}=Q^{\prime}+diag\left(\begin{array}{c}\gamma\bigg{(}\mu_{1}-\frac{ \sigma_{1}^{2}}{2}\bigg{)}+\frac{1}{2}\gamma^{2}\sigma_{1}^{2}\\ \cdot\\ \cdot\\ \cdot\\ \gamma\bigg{(}\mu_{H}-\frac{\sigma_{H}^{2}}{2}\bigg{)}+\frac{1}{2}\gamma^{2} \sigma_{H}^{2}\end{array}\right),\quad\forall\gamma\in\mathbb{R}. 
\tag{4.11}\] _Then we have_ \[\mathbb{E}\bigg{(}\bigg{(}\frac{S_{t}}{S_{0}}\bigg{)}^{\gamma}|\mathcal{F}_{0 }\bigg{)}=\mathbb{E}(\langle\exp(B_{\gamma}t)\delta(0);\,\textbf{1}\rangle| \mathcal{F}_{0})=\sum_{i=1}^{M}p_{i}(0)(\langle\exp(B_{\gamma}t)e_{i};\, \textbf{1}\rangle), \tag{4.12}\] _in which \(\delta(t)=(\delta(i,\alpha_{t}):i\in\mathcal{H})^{\prime}\) is a vector taking its values in the set of unit vectors \(\{e_{1},e_{2},\cdots,e_{H}\}\) and **1** is a vector of \(H\) ones._ According to equation (4.12), the characteristic function of \(R_{t}\) could be calculated as \[\varphi_{t}(\vartheta)=\sum_{i=1}^{H}p_{i}(0)(\langle\exp(B_{i\vartheta}T)e_{i}; \mathbf{1}\rangle). \tag{4.13}\] By inverting the Fourier transform, the probability density function of \(R_{t}\) could now be obtained as \[f_{R_{t}}(s)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\varphi(\vartheta)e^{-i \vartheta s}d\vartheta=\frac{1}{\pi}\int_{0}^{+\infty}\varphi(\vartheta)e^{- i\vartheta s}d\vartheta. \tag{4.14}\] As described in Hainaut [27], the integral in (4.14) could be calculated by the fast Fourier transform (FFT) method. The output of the FFT algorithm is the distribution function of \(R_{t}\). Using the obtained distribution, the \(\alpha\)-quantile of \(R_{t}\) which is the desired VaR level could be calculated by \[q_{t}=w_{t}V_{t}\exp(rT)+(1-w_{t})V_{t}\exp(r_{\alpha}). \tag{4.15}\] ## 5 Risk-Adjusted Performance Measures Evaluating portfolio performance is a key activity in financial economics [40; 51]. During the years, a wide variety of performance measures have been introduced into the field of finance to guide the investors (both private and institutional) in comparing and ranking of investment portfolios and evaluating the added value of portfolio managers (see, e.g. [31; 49; 26; 25; 16]). There exist a number of performance measures in the literature which focus mainly on the mean and variance of the return distribution (see e.g. [21] and references therein). A commonly used measures is the Sharpe ratio (see e.g. [52]) calculated as the ratio of the expected excess return of an investment to its return volatility. Originally motivated by mean-variance analysis and the Capital Asset Pricing Model (CAPM), the Sharpe ratio is routinely used in many different contexts, from performance attribution to tests of market efficiency and risk management. This measure has been used in various researches in order to compare different strategies ([3], [30], [29]). On the other hand, Keating and Shadwick [36] and Cascon et al. [17] have introduced a new performance measure, called "Omega" which considers the returns both below and above a given loss threshold, \(L\), selected by the investor. ### Omega Measure By dividing the returns into two classes according to the loss threshold, \(L\), the returns below it are considered as losses and above it as gains. More precisely, let \(F_{X}(x)\) denote the cumulative distribution function (CDF) of the return distribution defined on the interval \((a,b)\). The Omega measure is defined by the expression \[\Omega_{X}(L)=\frac{\int_{L}^{b}(1-F(x))dx}{\int_{a}^{L}F(x)dx}, \tag{5.1}\] which could equivalently be written in terms of the final portfolio value, \(V_{T}\), as (see e.g. [35]) \[\Omega_{V}(L)=\frac{\mathbb{E}(V_{T}-L)^{+}}{\mathbb{E}(L-V_{T})^{+}}. \tag{5.2}\] It is obvious that the Omega measure takes account of the entire return distribution while requiring no parametric assumptions on the distribution. 
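As a concrete illustration of (5.2), the short Python sketch below estimates the Omega measure by Monte Carlo from an array of terminal portfolio values; the input values here are placeholders, and in practice one would plug in the simulated CPPI or VBPI terminal values.

```python
import numpy as np

def omega_measure(terminal_values, threshold):
    """Monte-Carlo estimate of Omega(L) = E[(V_T - L)^+] / E[(L - V_T)^+], Eq. (5.2)."""
    v = np.asarray(terminal_values, dtype=float)
    gains = np.maximum(v - threshold, 0.0).mean()
    losses = np.maximum(threshold - v, 0.0).mean()
    return gains / losses if losses > 0 else np.inf

# Example with placeholder terminal values (stand-in for simulated V_T)
rng = np.random.default_rng(1)
v_T = 100.0 * np.exp(rng.normal(0.05, 0.1, size=10_000))
print(omega_measure(v_T, threshold=101.0))
```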
It provides an appropriate performance measure to compare different strategies. At any threshold level, \(L\), investors prefer the strategies with a higher Omega value [8]. ### Kappa Measure The other performance measure used to compare the performance of two or more strategies is the "Kappa" measure defined by (see e.g. [8]) \[\kappa_{n}(L)=\frac{\mathbb{E}(V_{T})-L}{\left[\mathbb{E}\big{[}(L-V_{T})^{+} \big{]}^{n}\right]^{\frac{1}{n}}}. \tag{5.3}\] It is shown that the Sortino ratio (see e.g. [53]) could be recovered by considering the case \(n=2\) in the above definition. In the context of portfolio insurance, Bertrand and Prigent [8] have employed both the Omega and Kappa measures to compare the performance of OBPI and CPPI strategies. They show that the CPPI method performs better than the OBPI strategy for jump-diffusion dynamics of the underlying risky asset. We could also mention the work of Zagst and Kraus [56] in which they analyze and compare the performance of OBPI and CPPI strategies by means of stochastic dominance criteria. They derive parameter conditions implying the second and third order stochastic dominance of the CPPI strategy. ### Downside Risk Measures A well-studied risk in portfolio insurance, called the "gap-risk", is the probability of the portfolio value to fall below the floor and failing to guarantee the desired final amount. This risk is measured by a quantity called the expected shortfall given default (ES) which describes the amount which is lost if a shortfall occurs (see e.g. [54; 5]). In order to make it precise, we first define a loss variable, \(L_{T}\), taken to be the amount the portfolio value is below the guarantee level at maturity and expressed as \[L_{T}=[F_{T}-V_{T}|F_{T}<V_{T}]. \tag{5.4}\] Expected value (expected shortfall) of the loss is a measure that estimates the average amount of the loss when the value of the portfolio at the maturity is below the guarantee amount (see, e.g. [37]) \[\mathbb{E}[L_{T}]=\mathbb{E}[G-V_{T}|V_{T}<F_{T}]. \tag{5.5}\] A portfolio insurance strategy incurs a shortfall (breaks through the floor), if \(V_{t}<F_{t}\) occurs during the investment horizon. Percentage of times a loss occurs, when measured over a large number of simulations, \(M\), could be estimated by \[\mathbb{P}[L_{T}]\approx\frac{1}{M}\sum_{j=1}^{M}\mathbb{1}_{\{V_{T}(\omega_{j })<F_{T}\}}, \tag{5.6}\] and could be interpreted as the probability that the PI strategy to experience a loss. Note that \(\mathbb{P}[L_{T}]\) could be interpreted from the buyers perspective as the probability that they will receive only the guarantee at maturity. ## 6 Numerical Experiments In this section, we study both the VBPI and CPPI strategies with daily, weekly and monthly rebalancing frequencies of the portfolio between a risky and risk-free asset for a planning horizon of \(T=1\) years2. We take the portfolio initial level to be \(V_{0}=100\) and the guaranteed level at the maturity to be \(F_{T}=100\) (i.e. \(\pi=1\)). We also let the yield of the bond to be constant at an annual rate of \(r=4\%\). We assume that the risky asset is driven by a time-inhomogeneous Markov-modulated diffusion process with two distinct regimes (considered here as bullish and bearish markets) with parameter values for each regime as Footnote 2: We assume 260 trading days and 52 weeks in a typical year. 
\[\mu_{1}=0.14,\quad\sigma_{1}=0.16\quad\ \ \mu_{2}=-0.01\quad\sigma_{2}=0.2.\] The generator matrix of the underlying Markov process is assumed to be given by \[Q=\left(\begin{array}{cc}-0.25&0.25\\ 0.25&-0.25\end{array}\right).\] In order to demonstrate the effect of regime shifts on the performance of the CPPI and VBPI strategies, we have conducted a Monte-Carlo simulation study by generating sample paths from the underlying asset's risky dynamics. To make the two strategies comparable in different scenarios, multiples in the CPPI strategy are chosen such that the strategy's initial allocation to the risky asset is the same as that under the corresponding VBPI strategy (see e.g. [34]). Using (4.8) and the relation \[mC_{0}=(1-w_{0})V_{0}, \tag{6.1}\] the value of \(m\) could be calculated. In the above equation, the weight of the risky asset in the CPPI strategy is assumed the same as the weight of risky asset in the VBPI strategy at the initial time. In Figures 1-3, we have demonstrated the Omega performance measure as a function of different threshold levels. As pointed out by Bacmann and Scholz [4], Omega involves all the moments of the return distribution including skewness and kurtosis and so, it is an appropriate indicator of the effectiveness of insurance strategies (see also [34]). In all figures and for high threshold levels, the VBPI strategy has a better performance than the CPPI method while in low thresholds, CPPI has a better Omega measure. Also by increasing the confidence level (CL), the Omega measure increases, showing that the performance of the portfolios in higher confidence levels are improved. The histogram (frequency distribution) of the terminal values of CPPI and VBPI portfolios for daily, weakly and monthly rebalancing periods is depicted in Figures 4-6. They show that the left tail of the frequency distribution for the CPPI strategy is shorter than VBPI. Also by increasing the confidence level, the left tail becomes shorter in both cases. They collectively show that the performance of CPPI strategy is better than VBPI. In Figure 7, we have plotted the expected final portfolio levels under the VBPI and CPPI strategies (denoted respectively by EVVBPI and EVCPPI) versus the rebalancing period and confidence level. As it is evident from these figures, the expected value increases in both cases as we increase the rebalancing period and decreases as we increase the confidence level. Figure 8 which is concerned with the standard deviation of the final portfolio levels versus the rebalancing period and confidence level, shows a similar behavior but here the CPPI method is indifferent to changing the rebalancing period. In Figure 9, we have plotted the normalized number of portfolios which fail to give the floor value (shortfall probability) versus the rebalancing period and confidence level. In this case, increasing the confidence level will result in decreasing the number of portfolios which fail to give the floor value. Here again we conclude that the CPPI performance is better than VBPI for low confidence levels. In Figure 10, we have compared the expected terminal value of the two strategies. It shows that the initial behavior of the two strategies are the same, however by increasing the rebalancing period, the value of VBPI portfolio attains higher levels. In Table 1 we have compared the two strategies in terms of Omega and Kappa performance measures for different threshold levels. 
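The Kappa and downside-risk quantities reported in Table 1 and Figures 9-10 can be estimated in the same Monte-Carlo fashion. The sketch below follows (5.3) (interpreted with the standard \(n\)-th lower partial moment in the denominator), (5.5) and (5.6); the input values are placeholders rather than the authors' simulation output.

```python
import numpy as np

def kappa_measure(terminal_values, threshold, n=1):
    """Kappa_n(L) = (E[V_T] - L) / LPM_n(L)^(1/n), cf. Eq. (5.3);
    LPM_n is the n-th lower partial moment E[((L - V_T)^+)^n]."""
    v = np.asarray(terminal_values, dtype=float)
    lpm_n = np.mean(np.maximum(threshold - v, 0.0) ** n)
    return (v.mean() - threshold) / lpm_n ** (1.0 / n)

def shortfall_stats(terminal_values, guarantee):
    """Shortfall probability (5.6) and expected shortfall given default (5.5)."""
    v = np.asarray(terminal_values, dtype=float)
    short = v < guarantee
    prob = short.mean()
    esf = (guarantee - v[short]).mean() if short.any() else 0.0
    return prob, esf

rng = np.random.default_rng(2)
v_T = 100.0 * np.exp(rng.normal(0.05, 0.1, size=10_000))   # placeholder terminal values
print(kappa_measure(v_T, threshold=101.0, n=1))
print(shortfall_stats(v_T, guarantee=100.0))
```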
We observe that for all threshold values, daily rebalancing leads to better results. Among different confidence levels and for daily rebalancing, the 99 percent confidence level will lead to better results while at weekly and monthly rebalancing periods, lower confidence levels will lead to better results. Table 2 also compares the two strategies in terms of the Sharpe ratio. For daily rebalancing, we obtain better results in comparison to the weekly and monthly rebalancing periods. It is evident from the above experiments that by increasing the confidence level, the return and the standard deviation of the corresponding portfolios decreases, as expected. Also by increasing the rebalancing period, the performance of both strategies degrades. In all the presented results, the CPPI strategy has a better performance than the VBPI method except for the Omega measure in which for high threshold levels, the VBPI strategy has a slightly better performance.

Figure 1: Omega measure of CPPI and VBPI portfolio at 90% CL.
Figure 2: Omega measure of CPPI and VBPI portfolio at 95% CL.
Figure 3: Omega measure of CPPI and VBPI portfolio at 99% CL.
Figure 4: Frequency of the terminal value of the CPPI and VBPI portfolios at 90% CL.
Figure 5: Frequency of the terminal value of the CPPI and VBPI portfolios at 95% CL.
Figure 6: Frequency of the terminal value of the CPPI and VBPI portfolios at 99% CL.
Figure 7: Comparing terminal expected value of the CPPI and VBPI portfolios.
Figure 8: Comparing standard deviation of CPPI and VBPI portfolios.
Figure 9: Comparing shortfall probability of the CPPI and VBPI portfolios.
Figure 10: Comparing expected shortfall of the CPPI and VBPI portfolios.

\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{3}{c}{Omega CPPI} & \multicolumn{3}{c}{Omega VBPI} & \multicolumn{3}{c}{Kappa CPPI} & \multicolumn{3}{c}{Kappa VBPI} \\
\cline{2-13}
CL & Daily & Weekly & Monthly & Daily & Weekly & Monthly & Daily & Weekly & Monthly & Daily & Weekly & Monthly \\
\hline
\multicolumn{13}{l}{Threshold 1\%} \\
90 & 46.48 & 0.71 & 0.49 & 4.08 & 0.72 & 0.49 & 15.61 & -0.22 & -0.41 & 1.81 & -0.22 & -0.41 \\
95 & 629.76 & 0.66 & 0.37 & 7.66 & 0.67 & 0.37 & 85.65 & -0.25 & -0.49 & 2.97 & -0.24 & -0.49 \\
99 & 947.24 & 0.61 & 0.27 & 99.87 & 0.62 & 0.27 & 425.12 & -0.28 & -0.57 & 25.62 & -0.27 & -0.56 \\
\multicolumn{13}{l}{Threshold 2\%} \\
90 & 4.74 & 0.37 & 0.28 & 1.91 & 0.38 & 0.28 & 2.14 & -0.52 & -0.61 & 0.60 & -0.51 & -0.60 \\
95 & 12.63 & 0.25 & 0.16 & 2.65 & 0.29 & 0.16 & 4.87 & -0.6 & -0.71 & 0.99 & -0.58 & -0.71 \\
99 & 50.45 & 0.16 & 0.08 & 6.75 & 0.19 & 0.09 & 13.04 & -0.68 & -0.79 & 2.96 & -0.66 & -0.78 \\
\multicolumn{13}{l}{Threshold 3\%} \\
90 & 1.36 & 0.22 & 0.17 & 1.04 & 0.22 & 0.17 & 0.26 & -0.67 & -0.72 & 0.03 & -0.67 & -0.72 \\
95 & 2.02 & 0.12 & 0.08 & 1.21 & 0.14 & 0.08 & 0.62 & -0.76 & -0.81 & 0.15 & -0.74 & -0.81 \\
99 & 3.37 & 0.05 & 0.03 & 1.65 & 0.07 & 0.03 & 1.25 & -0.83 & -0.88 & 0.44 & -0.82 & -0.87 \\
\multicolumn{13}{l}{Threshold 4\%} \\
90 & 0.58 & 0.14 & 0.11 & 0.63 & 0.14 & 0.11 & -0.33 & -0.76 & -0.79 & -0.29 & -0.76 & -0.79 \\
95 & 0.57 & 0.06 & 0.04 & 0.66 & 0.08 & 0.04 & -0.32 & -0.84 & -0.87 & -0.26 & -0.82 & -0.87 \\
99 & 0.58 & 0.02 & 0.01 & 0.62 & 0.03 & 0.02 & -0.29 & -0.90 & -0.92 & -0.29 & -0.89 & -0.92 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Comparing the performance of CPPI and VBPI strategies via the Omega and Kappa performance measures.

## 7 Conclusion

The main contribution of this paper is the development of a constrained CPPI as well as a VBPI strategy under a regime-switching diffusion model. In this respect and for the CPPI strategy, we derive a stochastic differential equation for the portfolio value consisting of a risky and a riskless asset where the exposure is constrained both from below and above. In the VBPI case, we derive expressions for the weights of the risky and riskless assets in the portfolio based on the value at risk of the portfolio return process at each discrete rebalancing time. We employ a Fourier-based method to approximate the VaR by inverting the characteristic function of the underlying asset. We compare the two approaches based on some performance measures and show that the constrained CPPI method performs well in most of the scenarios examined and provides us with a better control on the gap risk of the investment strategy. For future research, we propose to add a jump term into the model to better capture the real effects present in the market environment. We could also consider a flexible floor value which potentially improves the efficiency of the proposed portfolio insurance strategies.
2310.01095
LoCUS: Learning Multiscale 3D-consistent Features from Posed Images
An important challenge for autonomous agents such as robots is to maintain a spatially and temporally consistent model of the world. It must be maintained through occlusions, previously-unseen views, and long time horizons (e.g., loop closure and re-identification). It is still an open question how to train such a versatile neural representation without supervision. We start from the idea that the training objective can be framed as a patch retrieval problem: given an image patch in one view of a scene, we would like to retrieve (with high precision and recall) all patches in other views that map to the same real-world location. One drawback is that this objective does not promote reusability of features: by being unique to a scene (achieving perfect precision/recall), a representation will not be useful in the context of other scenes. We find that it is possible to balance retrieval and reusability by constructing the retrieval set carefully, leaving out patches that map to far-away locations. Similarly, we can easily regulate the scale of the learned features (e.g., points, objects, or rooms) by adjusting the spatial tolerance for considering a retrieval to be positive. We optimize for (smooth) Average Precision (AP), in a single unified ranking-based objective. This objective also doubles as a criterion for choosing landmarks or keypoints, as patches with high AP. We show results creating sparse, multi-scale, semantic spatial maps composed of highly identifiable landmarks, with applications in landmark retrieval, localization, semantic segmentation and instance segmentation.
Dominik A. Kloepfer, Dylan Campbell, João F. Henriques
2023-10-02T11:11:23Z
http://arxiv.org/abs/2310.01095v1
# LoCUS: Learning Multiscale 3D-consistent Features from Posed Images

###### Abstract

An important challenge for autonomous agents such as robots is to maintain a spatially and temporally consistent model of the world. It must be maintained through occlusions, previously-unseen views, and long time horizons (e.g., loop closure and re-identification). It is still an open question how to train such a versatile neural representation without supervision. We start from the idea that the training objective can be framed as a patch retrieval problem: given an image patch in one view of a scene, we would like to retrieve (with high precision and recall) all patches in other views that map to the same real-world location. One drawback is that this objective does not promote _reusability_ of features: by being unique to a scene (achieving perfect precision/recall), a representation will not be useful in the context of other scenes. We find that it is possible to balance retrieval and reusability by constructing the retrieval set carefully, leaving out patches that map to far-away locations. Similarly, we can easily regulate the scale of the learned features (e.g., points, objects, or rooms) by adjusting the spatial tolerance for considering a retrieval to be positive. We optimize for (smooth) Average Precision (AP), in a single unified ranking-based objective. This objective also doubles as a criterion for choosing landmarks or keypoints, as patches with high AP. We show results creating sparse, multi-scale, semantic spatial maps composed of highly identifiable landmarks, with applications in landmark retrieval, localization, semantic segmentation and instance segmentation.

+ Footnote †: Code and model weights for this project can be found at https://www.robots.ox.ac.uk/~vqg/research/locus.

Figure 1: Problem setting. Our goal is to train a network to extract features that are identifiable and 3D-consistent, so that features at image locations corresponding to the same region in 3D space, but viewed from different positions, are similar. This can be done at multiple scales, from large (e.g., the kitchen islands in the large green circles) to small (e.g., the drawers in the small red circles). However, simply optimizing for "unique" representations at each location (e.g., via contrastive learning) runs the risk of over-fitting to the training scenes, as such objectives will discourage reuse of the same representation for different places. Instead, we encourage reusable landmark representations, such as the concept of a kitchen island, which may appear in different scenes (top and bottom panels) with appearance variations.

## 1 Introduction

For an autonomous agent to be able to take useful actions, it must maintain a spatially and temporally consistent
Therefore, it is important for any vision-based agent to convert observations into some spatio-temporally consistent form. Existing approaches [4, 21, 41] do this at the observation synthesis (mapping) stage, by aggregating or distilling visual information in 3D. We argue that significant progress can be made before this point, at the observation processing stage, which lends itself to a more flexible image-centered representation that is useful for a range of tasks. The key is to encourage consistency between image features that unproject to the same region of 3D space, within a spatial tolerance, defining a landmark at a given scale. This can be achieved by formulating the problem as one of patch retrieval: given an image patch from one view of a scene, retrieve all patches in other views that correspond to the same 3D location, with high precision and recall. To encourage reusability, so that the learned features are useful in new scenes, we exclude patches from the retrieval set if (when unprojected) they exceed a fixed distance from the query. Excluding such patches ensures that the representations of similar-looking landmarks in distant places are not pushed apart unnecessarily, which would promote over-fitting unique representations to the training scenes, thus making them non-reusable in new scenes. Moreover, by adjusting the spatial tolerance that defines the positive set, we can regulate the scale of the learned features. This allows us to learn features at a small scale (e.g., local textures and structures), medium scale (e.g., household objects), and large scale (e.g., whole rooms or places) in the same framework. We learn this representation by optimizing a ranking-based metric, (smooth) Average Precision (AP), which doubles as a criterion for choosing distinctive landmarks (keypoints). The resulting Location-Consistent Universal Stable (LoCUS) features are semantically-meaningful, 3D-consistent at the selected scale, and balance distinctiveness with reusability, producing sparse, multi-scale, and semantic maps. We demonstrate applications in landmark retrieval, localization, semantic segmentation and instance segmentation. To summarize, our contributions are: 1. A framework for learning 3D-consistent features from posed images via retrieval, taking into account multiple scales and how to trade off retrieval performance vs. generalization performance (reusability). 2. A unified ranking-based objective function that facilitates the selection of highly-identifiable landmarks. 3. An evaluation of the proposed features on real images of indoor environments, on the tasks of place recognition, semantic segmentation, instance segmentation and re-identification, as well as relative pose estimation. ## 2 Related work The topics of keypoint detection and description [12, 14, 31], feature matching [44, 11, 16, 35, 39], structure-from-motion [36], and SLAM [15, 27, 40] have a rich history. Here, we concentrate on the most recent and related work. Image retrieval.Learning representations for image retrieval--the task of ranking all instances in a retrieval set according to their relevance to a query image--is well-studied [2, 30, 17]. Metric learning approaches use, e.g., contrastive [10] or triplet [43] losses to encourage positive instances to be close, while negative instances are separated by a margin. Other approaches optimize (approximations to) ranking-based metrics like Average Precision (AP) directly [33, 5]. 
For example, Smooth-AP [5] proposes a sigmoid relaxation of the ranking function, where the tightness of the approximation is controlled by the temperature. Optimizing a ranking metric allows a model to target the correct ranking without caring about the absolute feature distances. We leverage the image retrieval literature by defining our learning task as a patch retrieval problem. By carefully defining the retrieval set, we can balance feature distinctiveness with re-usability. While image retrieval methods retrieve entire images, we retrieve 3D spherical regions projected to 2D. That is, while methods such as Brown et al. [5] compute a single feature per image that is then used to retrieve other images of the same class, we compute features for pixel patches that are then used to retrieve pixel patches that cover (parts of) the same 3D spherical region. More details can be found in Sec. 3. Learning visual features and keypoints.Several works explore methods to learn better image features or keypoints to facilitate 2D-2D matching [44, 11, 16, 35, 39] or 2D-3D matching [11, 6] for relative/absolute pose estimation or triangulation. For example, Fathy et al. [16] use metric learning to learn 2D-2D matchable features, while Campbell et al. [6] learn geometric features that facilitate 2D-3D matching via an end-to-end trainable blind PnP solver. Keypoint detectors, by contrast, aim to find a sparse set of repeatable points in an image [12, 14, 31]. For example, SuperPoint [12] jointly computes keypoints and descriptors using a convolutional network trained in a self-supervised framework. Similarly, D2-Net [14] obtains keypoints via non-maximum suppression on the learned feature maps. R2D2 [31] argues that repeatable regions are not necessarily discriminative, so learns to predict keypoint repeatability and reliability separately. Unlike the features learned in these works, the features that optimize our loss function do not vary as rapidly, allowing them to more closely resemble the real scene geometry and enabling segmentation. Neural mapping and reconstruction.Deep learning approaches have gradually closed the gap on classical Structure-from-Motion [36] and SLAM [15, 27] approaches to mapping and reconstruction. For example, Neural Radiance Fields (NeRF) [25] has demonstrated photorealistic reconstruction for known cameras, and been extended to RGBD SLAM [38, 47], RGB SLAM [46], and semantic mapping [45]. Earlier, MapNet [21] investigated neural localization and mapping through convolution operators, resulting in an environment map that stores multi-task information distilled from the RGBD input, which exhibits emergent semantic meaning. Our approach produces very different kinds of maps: sparse, multi-scale, and semantic, composed of highly identifiable landmarks. Self-supervised visual feature learning.Vision transformers (ViT) [13] have demonstrated a strong capacity for learning useful and meaningful features from large amounts of unlabelled image data [7, 18, 42]. For example, DINO [7] demonstrated that self-supervised ViT features could be used for unsupervised object segmentation. The model was trained via self-distillation between a student network and a momentum teacher network that receive two different random transformation of an image and are encouraged to encode similar features. STEGO [18] extends DINO to unsupervised semantic segmentation via contrastive learning. 
It trains a shallow segmentation network appended to a fixed DINO backbone with contrastive terms that encourage the learned features to form compact clusters while preserving their global relationships. CutLER [42] extends DINO to unsupervised object detection and segmentation, achieving extremely compelling results. The model generates training data for a detector by creating foreground object masks using normalized cuts on the patch-wise similarity matrix of DINO features, with additional object masks being found through an iterative masking procedure. N3F [41] showed that DINO image features can be distilled into a 3D feature field using the same rendering loss as NeRF [25], given camera pose supervision. They demonstrate that the resulting features are 3D-consistent, enabling 3D instance segmentation and scene editing. Our approach builds on these self-supervised methods by proposing a proxy patch retrieval task defined in 3D, unlike STEGO and CutLER, allowing us to adapt DINO features so that they learn invariances to viewing direction and instance. Like N3F, we require camera pose supervision to enable our 3D-aware loss. Unlike N3F, our features are defined in image space and can be predicted from a single image, facilitating applications like relative pose estimation. ## 3 Method Our training procedure will be centered on the concept of recognizing _landmarks_: regions of space that are visually identifiable and unique within a bounded region, but reusable outside that region. We mean that landmark embeddings (representations) are "reusable" in the sense that the same embedding may be shared by more than one landmark, as long as they are far away in the spatial domain. Assume that we are given a set of training images, divided into \(n\) (potentially overlapping) rectangular patches \(x_{i}\), i.e., the receptive fields of a Convolutional Neural Network (CNN) or the tokens of a Visual Transformer (ViT). Each training patch \(x_{i}\in\mathcal{P}\) is also associated with an environment \(e_{i}\in\mathcal{E}\) (e.g. the identity of a house in a training set composed of distinct houses) and real-world coordinates within that environment \(p_{i}\in\mathbb{R}^{3}\), obtained for example by projecting the center coordinates of the patch using known camera geometry (camera pose and approximate depth) [19]. Note that this information is only needed for training - at test time no such information is necessary. The training set is then \(\mathcal{X}=\{(x_{1},e_{1},p_{1}),\ldots,(x_{n},e_{n},p_{n})\}\). Assume that we have also defined a set of _tentative landmarks_\(\mathcal{L}=\{(\theta_{1},\epsilon_{1},\ell_{1}),\ldots,(\theta_{m},\epsilon_{m}, \ell_{m})\}\) in 3D space: points \(\ell_{i}\in\mathbb{R}^{3}\) in environments \(\epsilon_{j}\in\mathcal{E}\) and associated embeddings \(\theta_{i}\in\mathbb{R}^{c}\). These do not have to correspond to actual landmarks (or identifiable locations in 3D), and can be sampled uniformly across space.1 Footnote 1: We will discuss more efficient sampling strategies in Section 3.3. We wish to train a deep neural network \(\phi:\mathcal{P}\mapsto\mathbb{R}^{c}\) to output embeddings that can be used to match each patch \(x_{i}\) to a landmark embedding \(\theta_{j}\), by computing pairwise scores \[s_{ij}=\frac{\phi(x_{i})^{\mathsf{T}}\theta_{j}}{\|\phi(x_{i})\|\|\theta_{j}\|}, \tag{1}\] consisting of a cosine distance (inner product of normalized embeddings), where higher scores denote more likely matches. 
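As a minimal illustration of Eq. (1) (and of the spherical positive mask introduced in Eq. (2) below), the following Python sketch computes the patch-to-landmark score matrix and positive mask; all inputs (embeddings, 3D positions, environment ids) are random stand-ins rather than the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, c = 512, 32, 64                      # patches, tentative landmarks, feature dim

phi_x = rng.normal(size=(n, c))            # stand-in for patch embeddings phi(x_i)
theta = rng.normal(size=(m, c))            # stand-in for landmark embeddings theta_j
p = rng.uniform(-5, 5, size=(n, 3))        # stand-in 3D positions of patches
ell = rng.uniform(-5, 5, size=(m, 3))      # stand-in 3D positions of landmarks
env_patch = rng.integers(0, 4, size=n)     # stand-in environment ids
env_lmk = rng.integers(0, 4, size=m)
rho = 0.2                                  # landmark radius (metres)

# Eq. (1): cosine similarity between every patch and every landmark embedding
phi_n = phi_x / np.linalg.norm(phi_x, axis=1, keepdims=True)
theta_n = theta / np.linalg.norm(theta, axis=1, keepdims=True)
S = phi_n @ theta_n.T                      # s_ij, shape (n, m)

# Eq. (2): positive iff within radius rho of the landmark and in the same environment
dist = np.linalg.norm(p[:, None, :] - ell[None, :, :], axis=-1)
same_env = env_patch[:, None] == env_lmk[None, :]
Y_pos = (dist <= rho) & same_env           # y^+_{ij}, shape (n, m)
```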
To specify whether a match is correct or not, we place a sphere of radius \(\rho_{j}\) around the landmark \(\ell_{j}\), and any retrievals there (and in the same environment) are considered positive: \[y_{ij}^{+}=\mathbb{1}\left(\|p_{i}-\ell_{j}\|\leq\rho_{j}\wedge e_{i}=\epsilon _{j}\right), \tag{2}\] where \(\mathbb{1}(\cdot)\in\{0,1\}\) is the indicator function. We will use \(y_{ij}^{+}\) as a binary mask to denote positive matches, while \(y_{ij}^{\Omega}=1\) is a trivial mask that denotes the union of positives and negatives. Both are used to define the Smooth Average Precision (Smooth-AP): \[\widetilde{\text{AP}}_{j}=\frac{1}{\sum_{i}^{n}y_{ij}^{+}}\sum_{i}^{n}y_{ij}^{+ }\frac{1+\sum_{kl}^{nm}y_{kl}^{+}\sigma_{\tau}(s_{kl}-s_{ij})}{1+\sum_{kl}^{ nm}y_{kl}^{\Omega}\sigma_{\tau}(s_{kl}-s_{ij})}, \tag{3}\] with the sigmoid \(\sigma_{\tau}(x)=\frac{1}{1+\exp(-x/\tau)}\). In the limit \(\tau\to 0\), \(\widetilde{\text{AP}}_{j}\) recovers the exact AP with \(\theta_{j}\) as the query embedding. Discussion. Eq. 3 is similar to Smooth-AP, proposed by Brown et al. [5], with a few differences that were necessary to adapt it to patch-based landmark retrieval: 1) the retrieval set consists of rectangular image patches, so \(\phi\) can be applied convolutionally; 2) the positive set is defined by 3D Euclidean distance (Eq. 2) with per-landmark radii \(\rho_{j}\); and 3) we wrote Eq. 3 as a function of binary masks \(y_{ij}^{+}\) and \(y_{ij}^{\Omega}\), instead of nested sets. This objective encourages the features from two image patches to be similar if they correspond to 3D locations that are at most a distance \(\rho\) apart, since they will be in each other's positive sets. Thus the objective directly encourages 3D-location-consistent features, extracting similar features for different viewpoints of the same 3D location. Empirical support for this is given in Sec. 4.2. The objective also encourages semantic meaningfulness, extracting similar features for image patches that correspond to the same object. First, note that if two 3D locations are separated by greater than \(\rho\) but less than \(2\rho\), they are both within the positive set of a third landmark location, encouraging all three features to be similar. Second, note that the Smooth-AP loss does not minimise the similarity of a landmark with patches in the negative set, it only encourages the similarity with respect to the positive set to be greater than that with the negative set. Together, this results in similar features being extracted across an object, facilitating segmentation. Multi-scale landmarks. The radius \(\rho_{j}\) of each landmark defines its overall scale, as any matching embeddings \(\phi(x_{i})\) must be invariant to different positions within this radius. Thus \(\phi\) may learn to recognize not only small-scale keypoints, but also landmarks at the scale of household objects, whole rooms or even larger regions (place recognition), as illustrated in fig. 1. Despite these changes, Smooth-AP still offers a few other challenges to be adapted to our setting, which we will detail in the next sections. ### Landmark reusability: "don't-care" regions

Figure 2: Illustration of the projections of the spherical regions that define the landmark retrieval objective (sec. 3.1). The small green sphere around the tentative landmark \(\ell_{j}\) defines the region inside which image patches are considered positive matches with the landmark (\(y_{ij}^{+}=1\)). The larger orange sphere defines the region with positive and negative matches (\(y_{ij}^{\Omega}=1\)). Importantly, outside this region matches are ignored (\(y_{ij}^{\Omega}=0\)), as well as in other environments (bottom panel). As a result, a contrastive (or retrieval) self-supervised objective does not suppress similar embeddings for semantically-similar but spatially distant landmarks, such as the two kitchen islands in the two environments shown.

Optimizing for AP has one unfortunate side effect: every high score matching a patch \(x_{i}\) further away from a (tentative) landmark \(\ell_{j}\) than \(\rho_{j}\) will be treated as a false positive, and thus suppressed during training. Likewise, all patches in different environments \(e_{i}\neq e_{j}\) will be treated the same. While this seems reasonable on the surface, at the optimum it will force all landmarks to be _unique_ to a particular place in an environment and thus useless in a new environment or far away location. We would like some landmarks to be _reusable_ and shared among different environments, for example for one landmark to represent a living room in different homes, as opposed to overfitting to a single living room. In analogy with "don't-care" conditions in digital circuit design [24], which reduce circuit complexity by freeing up modeling capacity for input-output combinations that are
While this seems reasonable on the surface, at the optimum it will force all landmarks to be _unique_ to a particular place in an environment and thus useless in a new environment or far away location. We would like some landmarks to be _reusable_ and shared among different environments, for example for one landmark to represent a living room in different homes, as opposed to overfitting to a single living room. In analogy with "don't-care" conditions in digital circuit design [24], which reduce circuit complexity by freeing up modeling capacity for input-output combinations that are Figure 2: Illustration of the projections of the spherical regions that define the landmark retrieval objective (sec. 3.1). The small green sphere around the tentative landmark \(\ell_{j}\) defines the region inside which image patches are considered positive matches with the landmark (\(y_{ij}^{+}=1\)). The larger orange sphere defines the region with positive and negative matches (\(y_{ij}^{\Omega}=1\)). Importantly, outside this region matches are ignored (\(y_{ij}^{\Omega}=0\)), as well as in other environments (bottom panel). As a result, a contrastive (or retrieval) self-supervised objective does not suppress similar embeddings for semantically-similar but spatially distant landmarks, such as the two kitchen islands in the two environments shown. not important, we propose to define "don't-care" regions where the Smooth-AP objective does not constrain the deep network's output. Instead of the trivial mask \(y^{\Omega}_{ij}=1\) that denotes the universe of all patches as positives and negatives, we instead reduce this universe to \[y^{\Omega}_{ij}=\mathbb{1}\left(\|p_{i}-\ell_{j}\|\leq\kappa\rho_{j}\wedge e_{ i}=\epsilon_{j}\right), \tag{4}\] with \(\kappa>1\) a multiplier for the distance threshold. Together, \(\kappa\) and \(\rho_{j}\) define two concentric regions: a sphere of radius \(\rho_{j}\) around a landmark, where any retrievals are considered positive (Eq. 2), and a spherical shell at distance \(d\) from the landmark, with \(\rho_{j}<d\leq\kappa\rho_{j}\), where any retrievals are considered negative (Eq. 4). Any points outside the radius \(\kappa\rho_{j}\) are not considered as part of the retrieval set, and are not assigned a label. The end result is that two different tentative landmarks can have very similar embeddings \(\theta_{j}\), as long as they are at a distance greater than \(\kappa\rho_{j}\), and this embedding reuse will not be penalized by the Smooth-AP (Eq. 3). ### Automatic landmark selection with Vectorized-Smooth-AP So far we referred to landmarks as "tentative", so that they may not correspond to actual identifiable regions of space. However, optimizing for Eq. 3 assumes a landmark \(\ell_{j}\) is fixed as a query. If we maximize Eq. 3 in expectation over \(j\) (analogously to Brown et al. [5]), we implicitly give equal importance to all tentative landmarks, even if some may correspond to places that are not easily identifiable (e.g., a wall or empty region). Rather than devise a heuristic to identify good landmarks, we instead just let the Smooth-AP objective focus on pairs of landmarks and patches that maximize AP, by considering all pairs as if they're part of a single query \[\overrightarrow{\text{AP}}=\frac{1}{\sum_{ij}^{nm}y^{+}_{ij}}\sum_{ij}^{nm}y^{ +}_{ij}\frac{1+\sum_{kl}^{nm}y^{+}_{kl}\sigma_{\tau}(s_{kl}-s_{ij})}{1+\sum_{ kl}^{nm}y^{\Omega}_{kl}\sigma_{\tau}(s_{kl}-s_{ij})}. \tag{5}\] Eq. 
Eq. 5 is equivalent to _vectorizing_ the matrix of masks \(Y^{+}\in\mathbb{R}^{n\times m}\) with elements \(y^{+}_{ij}\), by stacking its elements into a single vector \(\mathbf{y}^{+}\in\mathbb{R}^{nm}\), and computing the Smooth-AP objective (Eq. 3) with this modified input. While subtle, this has the effect that \(\overrightarrow{\text{AP}}\) will be maximized by first distinguishing the easiest landmark-patch pairs from the rest, while ignoring those that are too ambiguous. By neglecting to emphasize all tentative landmarks equally, the objective adaptively selects highly distinguishable landmarks. We can identify them by evaluating the _non-vectorized_ Smooth-AP (\(\widetilde{\text{AP}}_{j}\)) on each individually, and taking the top-\(k\) landmarks: \[\ell^{*}=\underset{j}{\text{top-}k}\ \widetilde{\text{AP}}_{j}.\] ### Sampling tentative landmarks We now turn to the definition of the tentative landmark positions \(\ell_{j}\) and embeddings \(\theta_{j}\). Sampling positions \(\ell_{j}\).While ideally it would be sufficient to sample the landmark positions randomly across space (either within a bounded region, or restricted to the visible hull), in a mini-batch with limited memory this is often not efficient. The reason is that \(y^{+}_{ij}\) may have too few non-zero values due to non-intersecting image views, especially with a limited number of images in an environment, or in very large environments. We found that sampling uniformly across space is very inefficient, as over \(94\%\) of the chosen tentative landmarks are visible in no more than two views (in the training set of Matterport3D [8]; see sec. 4 for details on the experimental setting). This creates very poor query sets for retrieval, with only one or two positive embeddings, which causes over-fitting as the network easily attains \(100\%\) AP on such tentative landmarks. Instead, we need to bias the sampling towards more visible locations. Thankfully, there is a simple way to sample spatial positions proportionally to how often they are visible in the training set of views: simply sample uniformly among all image patches across all training images. This guarantees that the sampled distribution is proportional to how often a 3D position is visible, and is easy to implement. Sampling embeddings \(\theta_{j}\).A straightforward way to define the embedding for \(\ell_{j}\) is to average the embeddings of all patches that map to that location in space: \[\theta_{j}=\frac{1}{n}\sum_{i}^{n}y^{+}_{ij}\phi(x_{i}).\] In practice, we found that approximating this average by a single patch \(\phi(x_{i})\) such that \(y^{+}_{ij}>0\) (chosen at random) is sufficient, which simplifies the implementation.
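Relative to the per-landmark sketch given earlier, the modifications of Secs. 3.2 and 3.3 are small, as the following sketch illustrates; the tensor names are again our own, and the value of \(\kappa\) is an arbitrary placeholder since this excerpt does not state the one used in the experiments.

```python
# Sketch of the "don't-care" mask (Eq. 4) and the vectorized objective (Eq. 5),
# reusing the illustrative inputs of the previous sketch.
import torch

def vectorized_smooth_ap(s, p, ell, e, eps, rho, kappa=2.0, tau=0.01):
    dist = torch.cdist(p, ell)
    same_env = e[:, None] == eps[None, :]
    y_pos = ((dist <= rho[None, :]) & same_env).float()           # Eq. 2
    y_all = ((dist <= kappa * rho[None, :]) & same_env).float()   # Eq. 4: ignore patches beyond kappa*rho

    diff = s.reshape(-1)[:, None] - s.reshape(-1)[None, :]        # entry (kl, ij) = s_kl - s_ij
    sig = torch.sigmoid(diff / tau)
    frac = (1.0 + sig.T @ y_pos.reshape(-1)) / (1.0 + sig.T @ y_all.reshape(-1))

    # Eq. 5: all (patch, landmark) pairs are pooled into a single ranking problem
    return (y_pos.reshape(-1) * frac).sum() / y_pos.sum().clamp(min=1)

def select_landmarks(ap_per_landmark, k):
    # keep the k tentative landmarks with the highest per-landmark Smooth-AP
    return torch.topk(ap_per_landmark, k).indices
```

Selecting landmarks then amounts to scoring each tentative landmark with the non-vectorized \(\widetilde{\text{AP}}_{j}\) and keeping the top-\(k\), as in the equation above.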
## 4 Experiments In this section, we will detail our experiments, where we evaluate the ability of LoCUS features to perform place recognition and relative pose estimation, as well as their emergent semantic properties, in the form of semantic segmentation and instance segmentation with object re-identification. ### Experimental setup Datasets.Our primary dataset for training and evaluation is the Matterport3D dataset [8], which contains a wide variety of indoor environments, captured densely with RGB and depth information. It also includes dense segmentations at the object level, which facilitate the evaluation of our model's semantic properties. Training details.We train a 2-stage transformer [13] with 128-dimensional internal features, on top of a frozen DINO backbone [7]. The final features extracted from image patches have 64 dimensions, and the DINO backbone computes 768-dimensional features, so we use two linear layers to map between these feature spaces, resulting in 503,232 trainable parameters. This model is trained by implementing the Vectorized-Smooth-AP objective from Eq. 5. We maximize the objective using the Adam optimizer with an initial learning rate of \(10^{-4}\) and mini-batches of size 16, sampled from the Matterport3D training set [8], and train for 20 epochs. For all experiments, we set the hyper-parameters \(\tau=0.01\) and \(\rho_{j}=0.2\) (in meters). With these settings, the model can be trained on a single NVIDIA RTX 2080Ti GPU. ### Place recognition and retrieval Since our method is trained with a specific relaxation of Average Precision (AP) on retrieval-focused sets of image patches, its primary objective is most closely aligned with place recognition via retrieval. As such, we start by evaluating its AP on unseen validation environments, which contain objects and layouts that were not seen during training. This assesses the reusability of features produced by our method. Baselines.For this experiment, we compare with pretrained ResNet50 [20], DINO [7], and DINOv2 [29] baselines. The features of the final layer of the ViT are reduced to 64 using PCA, the same dimension as our features (similar to Tschernezki et al. [41]). Since our model shares almost all of its weights with the DINO baseline, this comparison clearly illustrates the effect and advantages of our training method. Results.The results for this experiment are reported in Table 1. In addition to the AP on the validation set, for both our method and the DINO [7] baseline, we also report the AP on the training set. As expected, our LoCUS features significantly outperform the DINO baseline, despite sharing almost all weights. While the retrieval performance decreases on the validation set, this decline is minimal compared to the effect of the training, demonstrating the reusability of the features in unseen environments. ### Semantic and instance segmentation We now turn to scene-level object segmentation. There are two broad categories of segmentation classes: 1) amorphous geometry ("stuff") such as walls, floor and ceiling; and 2) distinct objects ("things") such as furniture or appliances. The former are useful for evaluating semantic segmentation at the texture level, while the latter require distinguishing individual objects, and thus allow us to evaluate instance segmentation. This setting is slightly broader than instance segmentation: an object must not merely be segmented distinctly from other objects in a given image, but it must also be _re-identified_ in different images from varied points of view, so it also encompasses the task of object re-identification. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Objective (\(\overrightarrow{\text{AP}}\))} & \multicolumn{2}{c}{Average Precision (AP)} \\ & Train & Val. & Train & Val. \\ \hline ResNet50 [20] & 0.11 & 0.11 & 0.11 & 0.12 \\ DINO [7] & 0.20 & 0.20 & 0.20 & 0.20 \\ DINOv2 [29] & 0.17 & 0.17 & 0.17 & 0.17 \\ LoCUS (Ours) & **0.56** & **0.54** & **0.57** & **0.55** \\ \hline \hline \end{tabular} \end{table} Table 1: Place recognition (retrieval) results, for our LoCUS features and the DINO [7] baseline. We report our objective, the smooth vectorized AP (\(\overrightarrow{\text{AP}}\)), and the Average Precision (AP), which quantifies the retrieval performance.
For the same features, AP, which corresponds to our objective in the limit of \(\tau\to 0\), will always be higher than \(\overrightarrow{\text{AP}}\). Figure 3: Visualization of co-segmentation results, obtained by thresholding the cosine distance (Eq. 1) of the LoCUS features of a query image patch (blue and orange, highlighted in the top left image) and LoCUS features of patches in other views (remaining images). The thresholded regions are indicated in matching colours. Qualitative results on co-segmentation.We start by exploring a single co-segmentation task, highlighting a patch in one image and then finding all matching patches in other views, by simply thresholding the similarity metric (Eq. 1). The results can be seen in Section 4.2. We can observe that, despite dramatic changes in viewpoint, the learned LoCUS features are very stable over 3D space, successfully matching over very significant changes in distance, rotation, partial occlusion and out-of-view regions. Implementation.For the quantitative evaluation, we use linear probes to assess the learned features' correlation with respect to the semantic classes, as is common in self-supervised learning [7]. To do this, we extract the LoCUS features \(\phi(x_{i})\) over all training images (considered frozen) and train a patch-wise linear classifier with a cross-entropy loss and the ground-truth segmentation labels. We use the same optimizer settings as for the main objective until convergence, for all methods. Evaluation setting and metrics.We use an evaluation set of _unseen scenes_, which are not part of the training set, and thus test the generalization ability of the methods. We report segmentation metrics on "stuff" pixels only (semantic segmentation), on "things" pixels only (instance segmentation and object re-identification), and on all pixels taken together. For each case, we compute three metrics: 1. mAP: For each object instance (in the case of instance segmentation) or for each class (in the case of semantic segmentation), we calculate the average precision (AP) of the linear classifier, in a one vs. all mode (i.e., considering all other pixels as negative labels). We then average across all instances or classes to obtain a mAP score. 2. mIoU: We calculate the Intersection-over-Union (IoU) [34] between the predictions and ground-truth binary masks for each object (or class) separately, and then report the average. 3. Jaccard (Jac): Similarly to the mIoU, we compute the Jaccard index separately for each object (or class), and report the average. The Jaccard index is given by \(\text{TP}\ /\ (\text{FP}+\text{FN})\), given the counts of binary True Positives (TP), False Positives (FP) and False Negatives (FN). Baselines.To provide a point of comparison to the semantic segmentation capabilities of the proposed features, we also report results for a number of segmentation baselines. We evaluate pretrained ResNet50 [20], DINO [7], and DINOv2 [29] feature extractors, first reducing the computed features to the same number of dimensions as ours (64) using PCA computed over the full training set. This is the same strategy employed to evaluate Neural Feature Fusion Fields [41]. Similar to our method, we then use a linear probe to produce the segmentations. We also evaluate two recent segmentation-specific models, Mask2Former [9] and MaskDINO [23] in their default setting, and a setting where we relabel each predicted segmentation mask with the ground-truth scene-consistent instance ID ("Oracle"). 
The former performs poorly because it does not maintain consistent instance identities across frames (no object re-identification), as is required by this task. Results.The results are shown in Table 2. Our proposed LoCUS features are better able to discriminate both semantic classes, such as undifferentiated ceiling or wall regions, as well as to re-identify particular object classes. Figure 6 visualizes the qualitative results. The proposed method outperforms the baseline feature extractor methods, especially on the instance segmentation and re-identification task, showing that the object identity predictions are stable under viewpoint changes. DINO features are trained to be invariant to image-space augmentations [7], and so understandably do not enjoy the same stability across viewpoints, especially when they change dramatically. Our method performs comparably with the Oracle methods despite not receiving the ground-truth labels. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Semantic} & \multicolumn{3}{c}{Instance} & \multicolumn{3}{c}{Overall} \\ Model & mAP & mIoU & Jac & mAP & mIoU & Jac & mAP & mIoU & Jac \\ \hline ResNet50 & 0.39 & 0.26 & 0.55 & 0.18 & 0.11 & 0.12 & 0.19 & 0.12 & 0.41 \\ DINO [7] & 0.49 & 0.34 & 0.63 & 0.40 & 0.28 & 0.28 & 0.40 & 0.29 & 0.52 \\ DINOv2\({}^{\dagger}\) & **0.55** & 0.38 & 0.67 & 0.49 & 0.35 & 0.39 & 0.49 & 0.35 & 0.58 \\ Mask2Former & - & 0.03 & 0.07 & - & 0.00 & 0.00 & - & 0.00 & 0.06 \\ + Oracle\({}^{*}\) & - & **0.41** & **0.71** & - & 0.39 & 0.53 & - & 0.39 & 0.64 \\ MaskDINO & - & 0.05 & 0.15 & - & 0.00 & 0.00 & - & 0.00 & 0.12 \\ + Oracle\({}^{*}\) & - & **0.41** & **0.71** & - & **0.40** & **0.54** & - & **0.40** & **0.65** \\ LoCUS (Ours) & 0.53 & 0.37 & 0.67 & **0.54** & **0.40** & 0.42 & **0.54** & 0.39 & 0.59 \\ \hline \hline \end{tabular} \end{table} Table 2: Semantic and instance segmentation (respectively “stuff” and “things”) results, with object re-identification. Both models extract 64-dimensional feature vectors for \(8\times 8\) pixel patches, which are then classified into the relevant classes using a linear probe. Semantic classes contain “stuff” pixels grouped into their semantic categories, while instance classes contain pixels belonging to individual objects. \({}^{\star}\)Uses ground-truth instance labels. \({}^{\dagger}\)Released after submission deadline. ### Relative pose estimation Since our LoCUS features are trained to be stable across 3D viewpoints, within a specified scale, they should be helpful for tasks that require spatial reasoning. Furthermore, the fact that we can train features for landmarks at different scales should help with coarse-to-fine strategies. For this reason, we focus on relative pose estimation. Note that this is different from other settings like Simultaneous Localization And Mapping (SLAM), since those assume temporal continuity over a video stream. In contrast, we perform relative pose estimation between single pairs of images, without any extra context, which severely limits the information available. Dataset.We use the image pairs generated from Matterport3D for relative pose estimation that were introduced in SparsePlanes [22]. We remark that there is very limited overlap between the views of each pair, which makes this an extremely challenging task. Metrics.We report standard metrics for translation and rotation error. For translation, we report median and average errors, as well as the fraction of pairs that have an error smaller than 1 meter.
For rotation, we also report median and average errors, and the fraction of pairs with error smaller than 30 degrees. Method.Given a pair of images, we extract the LoCUS features of each patch \(\phi(x_{i})\). We then calculate pairwise scores (Eq. 1), and for each patch, filter out all scores that are smaller than a threshold of 0.7. We then use two conditions on the continuity of the mapping from patches in image A to patches in image B to remove outliers from the set of patch pairs. Details on this process can be found in the supplementary material. Taking the top-100 pairs by score, we use a standard robust 5-point RANSAC algorithm [28] to calculate the essential matrix with the smallest error, and then find the corresponding relative pose (up to unknown scale) using a RANSAC chirality check [3]. The unknown scale can in practice be recovered using for example very coarse depth measurements; here we simply scale the translation vector by its ground truth length.
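As a rough illustration of this step, the sketch below feeds thresholded patch matches to OpenCV's five-point solver; the outlier-removal conditions mentioned above are omitted, and the variable names, intrinsics, and patch-centre inputs are placeholders rather than the paper's actual implementation.

```python
# Sketch of the relative-pose step: threshold patch similarities, keep the top-100
# pairs, then 5-point RANSAC with a chirality check. Error handling is omitted.
#   feats_a, feats_b: (n, d) L2-normalised patch features of images A and B
#   pix_a, pix_b:     (n, 2) pixel coordinates of the patch centres
#   K:                (3, 3) camera intrinsics
import numpy as np
import cv2

def relative_pose(feats_a, feats_b, pix_a, pix_b, K, thresh=0.7, top_k=100):
    scores = feats_a @ feats_b.T                   # cosine similarities of normalised features
    ia, ib = np.nonzero(scores >= thresh)          # candidate patch pairs above the threshold
    order = np.argsort(-scores[ia, ib])[:top_k]    # keep the highest-scoring pairs
    pts_a = pix_a[ia[order]].astype(np.float64)
    pts_b = pix_b[ib[order]].astype(np.float64)

    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)  # chirality check picks R, t
    return R, t                                    # t is only defined up to scale
```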
Baselines.We compare with several baselines from the literature. Most of these were specifically engineered for geometric matching tasks, while ours focuses on coarser (multi-scale) landmark retrieval. As such, we expect ours to be more robust at matching at the large scale, and other methods to do better at very fine-grained geometric matching. We report results for SuperPoint [12] (pre-trained and with its feature dimension reduced to 64 using PCA) with nearest neighbours (NN) search and SuperPoint with FGINN for outlier removal. Given the pixel matches extracted in this way, we compute the relative pose in the same way as our method (5-point RANSAC [28]). We also report results for a number of methods that do not extract features, but are specialised to estimate relative poses more directly: SparsePlanes [22], 8-Point-Supervision [32], and PlaneFormers [1]. Results.The results are presented in Table 3. We can see that, despite not being trained specifically for camera localization, the spatial stability of the trained features does help localize the camera correctly in most instances. Nevertheless, we would expect that with greater overlap between views, methods that are more geared towards fine-grained keypoint matching would do better than coarse matching methods such as ours, which are more concerned with coarse place (landmark) recognition. The most comparable methods are the two relative pose estimation algorithms using SuperPoint keypoints, both of which our method outperforms. Figure 6: Qualitative results of semantic segmentation (wall, ceiling, floor classes) in the first two rows and of instance segmentation (household objects) in the third and fourth row. Note that object instance identities are stable across viewpoints, thus also performing object re-identification. ### Ablation study We also evaluated the relative impact of different design decisions in our method, and assessed its robustness to different hyper-parameter choices. The results from the preceding sections used the optimal combination under the constraint of similar memory consumption found in this study. We refer the interested reader to the supplemental material for detailed results. ## 5 Conclusion We have proposed a method for learning multi-scale view-invariant features from posed images by optimizing a novel retrieval-based objective: Vectorized-Smooth-AP. This objective modulates the DINO [7] ViT features towards 3D-consistency and adaptively selects highly-distinguishable landmarks. Moreover, we select the retrieval set in such a way as to encourage the model to balance retrieval (distinctiveness) with reusability (generalisability), through the introduction of a "don't-care" region beyond a certain spatial extent. We demonstrate compelling performance when using these features for several downstream tasks, including place recognition and retrieval, semantic and instance segmentation with re-identification, and relative pose estimation, which underlines the utility of our learned features. This result reinforces the strong semantic properties of self-supervised image features and shows how aggregating information in 3D, via the ranking loss function and camera pose supervision, can improve their effectiveness, especially for 3D-aware tasks. Nonetheless, strategies for removing the weak camera pose supervision warrant investigation, since a fully self-supervised approach would facilitate access to greater quantities of data. Depending on the environment, Structure-from-Motion [36] or SparsePose [37] may be able to alleviate this requirement, making it possible to train on larger-scale video data. Ethics and attribution.We use the Matterport3D dataset [8] in a manner compatible with their terms and the end user license agreement, available at this URL: [https://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf](https://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf). The dataset may accidentally contain personal data, but there is no extraction of personal or biometric information in this research. Acknowledgements.We are grateful for funding from EPSRC AIMS CDT EP/S024050/1 (D.K.), Continental AG (D.C.), and the Royal Academy of Engineering (RF/201819/18/163, J.H.).
2305.05045
Improved upper bounds on longest-path and maximal subdivision transversals
Let $G$ be a connected graph on $n$ vertices. The Gallai number $Gal(G)$ of $G$ is the size of the smallest set of vertices that meets every maximum path in $G$. Gr\"unbaum constructed a graph $G$ with $Gal(G)=3$. Very recently, Long, Milans, and Munaro proved that $Gal(G)\leq 8n^{3/4}$. This was the first sublinear upper bound on $Gal(G)$ in terms of $n$. We improve their bound to $Gal(G)\leq 5 n^{2/3}$. We also tighten a more general result of Long et al. For a multigraph $M$ on $m$ edges, we prove that if the set $\mathcal{L}(M,G)$ of maximum $M$-subdivisions in $G$ is pairwise intersecting and $n\geq m^{6}$, then $G$ has a set of vertices with size at most $5 n^{2/3}$ that meets every $Q\in \mathcal{L}(M,G)$.
Henry Kierstead, Eric Ren
2023-05-08T21:03:34Z
http://arxiv.org/abs/2305.05045v1
# Improved upper bounds on longest-path and maximal subdivision transversals ###### Abstract Let \(G\) be a connected graph on \(n\) vertices. The Gallai number \(\operatorname{Gal}(G)\) of \(G\) is the size of the smallest set of vertices that meets every maximum path in \(G\). Grünbaum constructed a graph \(G\) with \(\operatorname{Gal}(G)=3\). Very recently, Long, Milans, and Munaro proved that \(\operatorname{Gal}(G)\leq 8n^{3/4}\). This was the first sublinear upper bound on \(\operatorname{Gal}(G)\) in terms of \(n\). We improve their bound to \(\operatorname{Gal}(G)\leq 5n^{2/3}\). We also tighten a more general result of Long et al. For a multigraph \(M\) on \(m\) edges, we prove that if the set \(\mathcal{L}(M,G)\) of maximum \(M\)-subdivisions in \(G\) is pairwise intersecting and \(n\geq m^{6}\), then \(G\) has a set of vertices with size at most \(5n^{2/3}\) that meets every \(Q\in\mathcal{L}(M,G)\). ## 1 Introduction For a graph \(G=(V,E)\), let \(|G|=|V|\) and \(\|G\|=|E|\). Two graphs or vertex sets _meet_ if they have a common vertex. Define the Gallai number \(\operatorname{Gal}(G)\) of \(G\) to be the size of the smallest set \(S\subseteq V\) that meets every longest path in \(G\). It is folklore [9, Exercise 1.2.40] that if \(G\) is connected then any two longest paths have a common vertex. This result prompted Gallai [3] to ask whether \(\operatorname{Gal}(G)=1\) for all connected graphs \(G\). Walther [7] found a graph \(G\) with \(\operatorname{Gal}(G)=2\) and \(|G|=25\). Then Walther and Voss [8], and independently Zamfirescu [11], observed that replacing a vertex \(v\) of the Petersen graph by three leaves, each adjacent to a distinct neighbor of \(v\), yields a graph \(G\) with \(\operatorname{Gal}(G)=2\) and \(|G|=12\). Grünbaum [2] later produced a graph \(G\) with \(\operatorname{Gal}(G)=3\) and \(|G|=324\); soon after Zamfirescu [11] found a 270-vertex example. Both Walther and Zamfirescu [10] then posed the still-open question of whether the Gallai number has a constant upper bound for connected graphs. It even remains open whether any connected graph \(G\) has \(\operatorname{Gal}(G)\geq 4\). On the upper end, Rautenbach and Sereni [6] proved in 2014 that all connected graphs \(G\) satisfy \(\operatorname{Gal}(G)\leq\lceil|G|/4-|G|^{2/3}/90\rceil\). In 2020, Long, Milans, and Munaro [4] proved the first sublinear upper bound: \(\operatorname{Gal}(G)\leq 8|G|^{3/4}\). In fact, Long et al. proved a more general result. Let \(M\) be a connected multigraph. An \(M\)-_subdivision_ is (a copy of) a graph obtained from \(M\) by subdividing each of its edges \(0\) or more times. For example, a path is a \(K_{2}\)-subdivision, and a cycle is a \(C_{1}\)-subdivision, where \(C_{1}\) is the multi-cycle with one vertex and one loop. An \(M\)-subdivision \(Q\subseteq G\) is _maximum_ if no \(M\)-subdivision has more edges than \(Q\). Let \(\mathcal{L}(M,G)\) be the set of maximum \(M\)-subdivisions in \(G\). A _transversal_1 of \(\mathcal{L}(M,G)\) is a set of vertices \(S\subseteq V(G)\) that meets every \(Q\in\mathcal{L}(M,G)\). Let \(\tau(M,G)\) be the size of a minimum transversal of \(\mathcal{L}(M,G)\). So \(\tau(K_{2},G)=\operatorname{Gal}(G)\). A family of sets (e.g. \(\mathcal{L}(M,G)\)) is _pairwise intersecting_ if any two of its members meet. As shown in [4], it is easy to check that if \(G\) is \((\|M\|^{2}+1)\)-connected then \(\mathcal{L}(M,G)\) is pairwise intersecting.
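To make the central definitions concrete, the following small brute-force sketch enumerates the longest paths of a connected graph and searches for a minimum transversal; it runs in exponential time, is practical only for tiny graphs, and is our own illustration rather than code from any of the cited works.

```python
# Brute-force illustration of Gal(G): enumerate all longest paths, then find a
# smallest vertex set meeting every one of them. Assumes a connected graph with
# at least two vertices; exponential time, tiny graphs only.
from itertools import combinations
import networkx as nx

def gallai_number(G):
    nodes = list(G.nodes)
    paths = []
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            paths.extend(nx.all_simple_paths(G, u, v))   # paths are K_2-subdivisions
    longest = max(len(p) for p in paths)
    longest_paths = [set(p) for p in paths if len(p) == longest]
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            if all(set(S) & P for P in longest_paths):
                return k, set(S)

print(gallai_number(nx.cycle_graph(5)))   # every vertex lies on all longest paths, so Gal = 1
```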
The following proposition somewhat improves the \((\|M\|^{2}+1)\)-connectivity bound mentioned above when \(M\) has cut edges, and it is tight for our two motivating examples, \(M\in\{K_{2},C_{1}\}\). We prove it in Section 3. Footnote 1: We use “transversal” in the hypergraph sense, i.e., a vertex-cover, not in the combinatorics sense, which would require unique intersections. **Proposition 1**.: _Let \(M\) be a connected multigraph with \(c\) cut-edges, and let \(G\) be a graph. If \(\mathcal{L}(M,G)\) is not pairwise intersecting then the connectivity of \(G\) is at most \(\|M\|^{2}-c\|M\|+\binom{c}{2}\)._ Long et al. proved: **Theorem 2** ([4]).: _Let \(M\) be a connected multigraph and \(G\) be a graph. If \(\mathcal{L}(M,G)\) is pairwise intersecting then \(\tau(M,G)\leq 8\|M\|^{5/4}|G|^{3/4}\)._ Notice that if \(|G|\leq\|M\|^{5}\) then \(\tau(M,G)\leq|G|<8\|M\|^{5/4}|G|^{3/4}\). Here we tighten their bound by lowering the exponents when \(|G|>\|M\|^{3}\), and by eliminating the dependence on \(M\) when \(|G|>\|M\|^{6}\). **Theorem 3**.: _Let \(M\) be a connected multigraph and \(G\) be a graph. If \(\mathcal{L}(M,G)\) is pairwise intersecting and \(\|M\|^{3}<|G|\) then \(\tau(M,G)\leq\max\{5|G|^{2/3},2\|M\|^{2}|G|^{1/3}\}\). In particular, if \(|G|\geq\|M\|^{6}\) then \(\tau(M,G)\leq 5|G|^{2/3}\)._ **Notation**.: Fix \(i,j,n\in\mathbb{N}\). Set \([n]:=\{i\in\mathbb{N}:1\leq i\leq n\}\) and \(i\oplus j:=i+j\mod n\). We denote a path \(P\) with \(V(P)=\{v_{1},\ldots,v_{n}\}\) and \(E(P)=\{v_{1}v_{2},\ldots,v_{n-1}v_{n}\}\) by \(v_{1}\ldots v_{n}=v_{n}\ldots v_{1}\), and set \(v_{i}P:=v_{i}\ldots v_{n}\), \(Pv_{i}:=v_{1}\ldots v_{i}\) and \(v_{i}Pv_{j}:=v_{i}\ldots v_{j}\). We form walks by concatenating paths: in a graph \(G\), if \(Q:=w_{1}\ldots w_{s}\) then \(Pv_{i}w_{j}Q\) is defined to be \(v_{1}\ldots v_{i}w_{j}\ldots w_{s}\) if \(v_{i}w_{j}\in E(G)\) and \(v_{1}\ldots v_{i}w_{j+1}\ldots w_{s}\) if \(v_{i}=w_{j}\); else it is undefined. Let \(d_{G}(x,y)\) denote the distance from \(x\) to \(y\) in \(G\). The cycle \(C:=P+v_{n}v_{1}\) is denoted by \(v_{1}\ldots v_{n}v_{1}\). If \(Q:=v_{i}\ldots v_{j}\) then \(v_{i}(C-E(Q))v_{j}\) is the path \(v_{i}v_{i\oplus(n-1)}...v_{j\oplus 1}v_{j}\). Let \(v_{i}Cv_{j}\) denote the longer of \(v_{i}(C-v_{j\oplus 1})v_{j}\) and \(v_{i}(C-v_{i\oplus 1})v_{j}\) with ties going to the first. Let \(A,B\subseteq V(G)\). An \(A,B\)-path is a path \(P=v_{1}\ldots v_{t}\) with \(V(P)\cap A=\{v_{1}\}\) and \(V(P)\cap B=\{v_{t}\}\). An \(A,B\)_-separator_ is a set \(S\subseteq V(G)\) that meets every \(A,B\)-path. An \(A,B\)_-connector_ is a set of disjoint \(A,B\)-paths. As in Diestel [1], we assume that \(V(G)\cap E(G)=\emptyset\) and treat \(G\) as the set \(V(G)\cup E(G)\). We will need Menger's Theorem. **Menger's Theorem** ([5]).: _Let \(G\) be a graph with \(A,B\subseteq V(G)\). If \(S\) is a minimum \(A,B\)-separator in \(G\) and \(\mathcal{T}\) is a maximum \(A,B\)-connector in \(G\), then \(|S|=|\mathcal{T}|\)._ ## 2 Small transversals of \(\mathcal{L}(M,G)\) In this section we prove Theorem 3. First we simplify our notation for this and the next section. Fix a connected multigraph \(M=(W,F)\) with \(\|M\|=:m\) and a graph \(G\) with \(|G|=:n\). For \(H\subseteq G\), let \(\mathcal{L}(H):=\mathcal{L}(M,H)\), \(\tau(H):=\tau(M,H)\), \(\mathcal{L}:=\mathcal{L}(G)\) and \(\mu:=|Q|\), where \(Q\in\mathcal{L}\). An \(H\)-_transversal_ is a transversal of \(\mathcal{L}(H)\).
For a graph \(Q\in\mathcal{L}\) and an edge \(e:=uv\in F\), let \(q^{u},q^{v},Q_{e}\) be the branch vertices and subdivided edge of \(Q\) corresponding to \(u,v,e\). We start with two (easy) general lemmas on paths and cycles. **Lemma 4**.: _Let \(C=v_{1}P_{1}v_{2}P_{2}v_{3}P_{3}v_{4}P_{4}v_{1}\) be a cycle, where each \(P_{i}\) is a path with \(\|P_{i}\|\geq 1\). Then \(\|P_{1}\|<\|P_{2}v_{3}P_{3}v_{4}P_{4}\|\) or \(\|P_{3}\|<\|P_{4}v_{1}P_{1}v_{2}P_{2}\|\)._ **Lemma 5**.: _Suppose \(C\subseteq G\) is a cycle and \(P=x\ldots y\subseteq G\) is a path with \(x,y\in C\). If \(\|P\|<d_{C}(x,y)\) then there is a cycle \(C^{\prime}\subseteq G\) with \(|C|/2<|C^{\prime}|<|C|\)._ Proof.: Let \(V(P\cap C)=\{x_{0},\ldots,x_{t}\}\), where \(P=x_{0}P_{1}x_{1}\ldots x_{t-1}P_{t}x_{t}\), \(x_{0}=x\), and \(x_{t}=y\). As \[\sum_{i\in[t]}\|P_{i}\|=\|P\|<d_{C}(x,y)\leq\sum_{i\in[t]}d_{C}(x_{i-1},x_{i}),\] there is \(i\in[t]\) with \(\|P_{i}\|<d_{C}(x_{i-1},x_{i})\leq|C|/2\). Let \(C^{\prime}\) be the cycle obtained by replacing the short \(x_{i-1},x_{i}\)-path in \(C\) by \(P_{i}\). Proof of Theorem 3.: Assume \(\mathcal{L}\) is pairwise intersecting and \(m^{3}<n\). We will show that \(\tau(\mathcal{L})\leq 5n^{2/3}\). Note that \(\tau(\mathcal{L})\leq\mu\). Similarly to [4], a pair \((X,Y)\) is called an \(H\)_-pretransversal_ if \(Y\subseteq X\subseteq V(H)\), \(n^{1/3}|Y|\leq|X|\), and for every graph \(Q\in\mathcal{L}(H)\) either \(Q\subseteq H-X\) or \(Q\cap Y\neq\emptyset\). Then \(Y\) is a \(G\)-transversal with \(|Y|\leq n^{2/3}\) if and only if \((V,Y)\) is a \(G\)-pretransversal. Let \((X,Y)\) be a \(G\)-pretransversal with \(|X|\) maximum; it exists because \((\emptyset,\emptyset)\) is a candidate. If \(X=V\) then we are done with \(\tau(\mathcal{L})\leq n^{2/3}\). Else set \(H:=G-X\). If \((X^{\prime},Y^{\prime})\) is an \(H\)-pretransversal then \((X^{\prime}\cup X,Y\cup Y^{\prime})\) is a \(G\)-pretransversal. So by maximality: \[\text{There is no $H$-pretransversal}. \tag{1}\] If \(S\) is an \(H\)-transversal then \(S\cup Y\) is a \(G\)-transversal. Thus it suffices to show that \(\tau(\mathcal{L}(H))\leq\max\{m^{2}n^{1/3},4n^{2/3}\}\). Arguing by contradiction, assume \[\tau(\mathcal{L}(H))>\max\{m^{2}n^{1/3},4n^{2/3}\}. \tag{2}\] As we are working only in \(H\), let \(\tau:=\tau({\cal L}(H))\), and let _transversal_ mean \(H\)-transversal. The proof follows easily from the next four claims. The last two depend on the first, but there are no other dependencies. Some readers may prefer to skip to the last paragraph of this section before reading the claims and their proofs. _Claim 1_ ([4]): For all disjoint sets \(A,B\subseteq V(H)\) with \(s:=\min(|A|,|B|)\), there is an \(A,B\)-connector \({\cal K}\) in \(H\) with \(|{\cal K}|\geq s/n^{1/3}\). _Proof._ Suppose not. By Menger's Theorem, there is an \(A,B\)-separator \(Y^{\prime}\) in \(H\) such that \(n^{1/3}|Y^{\prime}|<s\). As \({\cal L}\) is pairwise intersecting, there is a component \(H^{\prime}\) of \(H-Y^{\prime}\) such that no other component of \(H-Y^{\prime}\) contains a member of \({\cal L}(H)\). Since \(Y^{\prime}\) separates \(A\) and \(B\), at least one (say \(A\)) of \(A\), \(B\) must be contained in \(X^{\prime}:=V(H-H^{\prime})\). Thus all \(Q\in{\cal L}(H)\) intersect \(Y^{\prime}\) or are contained in \(H^{\prime}\). Also \(n^{1/3}|Y^{\prime}|\leq s\leq|A|\leq|X^{\prime}|\). So \((X^{\prime},Y^{\prime})\) is an \(H\)-pretransversal, contradicting (1).
\(\triangle\) _Claim 2_: \(H\) has a cycle \(C_{0}\) with \(|C_{0}|>mn^{1/3}\). _Proof._ Let \(Q\in{\cal L}(H)\), and pick a minimum transversal \(S\) subject to \(S\subseteq V(Q)\). By (2), \(|S|>m^{2}n^{1/3}\), so there is an edge \(e\in F\) with \(|Q_{e}\cap S|>mn^{1/3}\). Pick a shortest path \(P:=q_{i}\ldots q_{j}\subseteq Q_{e}\) such that \((S\smallsetminus V(Q_{e}))\cup V(P)\) is a transversal. By minimality, \(|P|\geq|Q_{e}\cap S|\geq mn^{1/3}\), and for each end \(v\) of \(P\) there is an \(M\)-subdivision \(R(v)\in{\cal L}(H)\) that meets \(P\) only at \(v\). As \({\cal L}\) is pairwise intersecting, there is a \(q_{i},q_{j}\)-path \(R\subseteq R(q_{i})\cup R(q_{j})\). Now \(C_{0}:=P\cup R\) is a cycle with \(|C_{0}|>mn^{1/3}\). \(\triangle\) _Claim 3_: Suppose \(C\subseteq G\) is a cycle that is not a transversal. Then (i) if \(|C|>mn^{1/3}\) then \(G\) has a cycle that is longer than \(C\) and (ii) \(|C|\leq 2n^{2/3}\). _Proof._ Let \(Q\in{\cal L}(H)\) witness that \(C\) is not a transversal. By Claim 1, there is a \(C,Q\)-connector \({\cal T}\) with \(|{\cal T}|>\min\{|C|,\mu\}/n^{1/3}\). (i) Suppose \(|C|>mn^{1/3}\). Pigeonholing, there is an edge \(e\in F\) such that \(Q_{e}\) meets \({\cal T}\) at least twice. Say \(T_{i}=x_{i}\ldots y_{i}\in{\cal T},i\in[2]\). Applying Lemma 4 to the cycle \(C^{\prime}:=x_{1}T_{1}y_{1}Q_{e}y_{2}T_{2}x_{2}Cx_{1}\), and using the maximality of \(Q\), yields that \(|C^{\prime}|>|C|\). (ii) Suppose \(|C|>2n^{2/3}\). For each edge \(e=uv\in F\), let \(h_{e}\) be the number of paths in \({\cal T}\) that end in \(Q_{e}\). When \(h_{e}\neq 0\) these ends partition \(E(Q_{e})\) into \(h_{e}-1\)_inner_ paths and two _outer_ paths containing \(q^{u}\) or \(q^{v}\). See Figure 1 (partitioned \(Q_{wx},Q_{xy},Q_{yz}\) with \(0,1,2\) inner paths). The number of inner subpaths is at least \[\sum_{e\in F}(h_{e}-1)=|\mathcal{T}|-m>2n^{1/3}-m>n^{1/3}.\] By Lemma 4, \(\|I\|>|C|/2>n^{2/3}\) for all inner paths \(I\), so \(|Q|>n\), a contradiction. \(\triangle\) _Claim 4_: Suppose \(C=v_{1}\ldots v_{l}v_{1}\subseteq G\) is a cycle with \(l>4n^{2/3}\). Then there is a cycle \(C^{\prime}\subseteq G\) with \(l/2<|C^{\prime}|<l\). _Proof._ Let \(l=4k+r,0\leq r<4\). Define paths \(P_{1}=v_{1}\ldots v_{k+1}\), \(P_{2}=v_{k+2}\ldots v_{2k+1}\), \(P_{3}=v_{2k+2}\ldots v_{3k+3}\), and \(P_{4}=v_{3k+4}\ldots v_{l}\), where \(a\in\{0,1\}\). Then \[C=v_{1}P_{1}v_{k+1}v_{k+2}P_{2}v_{2k}v_{2k+1}P_{3}v_{3k+1}v_{3k+2}P_{4}v_{l}v_{1},\] \(|P_{1}|=k+1=|P_{3}|\) and \(k-1=|P_{2}|\leq|P_{4}|\). By Claim 1, there is a \(P_{1},P_{3}\)-connector \(\mathcal{T}\) with \(|\mathcal{T}|\geq(k+1)/n^{1/3}>n^{1/3}\). Pigeonholing, there is a path \(T=x\ldots y\in\mathcal{T}\) with \[\|T\|+1\leq|T|\leq\lfloor\frac{n}{|\mathcal{T}|}\rfloor\leq\lfloor n^{2/3}\rfloor<k+1\leq|P_{2}|+2\leq d_{C}(x,y)+1.\] Now \(\|T\|<d_{C}(x,y)\), so we are done by Lemma 5. See Figure 2. \(\triangle\) Now we complete the proof of Theorem 3. By Claim 2, \(H\) has a maximum cycle \(C_{1}\) with \(|C_{1}|\geq mn^{1/3}\). By Claim 3(i), \(C_{1}\) is a transversal. Let \(C\subseteq G\) be a minimum cycle subject to \(C\) being a transversal. By (2), \(|C|>4n^{2/3}\). By Claim 4, there is a cycle \(C^{*}\subseteq G\) with \(2n^{2/3}<|C^{*}|<|C|\). By minimality, \(|C^{*}|\) is not a transversal, but by Claim 3(ii) \(C^{*}\) is a transversal, a contradiction. ## 3 Pairwise-intersecting families In this section we prove Proposition 1 and speculate on possible improvements.
_Proof of Proposition 1._ Suppose \(\mathcal{L}(M,G)\) is not pairwise intersecting; say \(Q,R\in\mathcal{L}(M,G)\) are disjoint. Let \(\mathcal{T}:=\{T_{i}:i\in[t]\}\) be a maximum \(Q,R\)-connector. Define a bipartite multigraph \(\mathcal{H}\) with \(V(\mathcal{H})=\bigcup_{e\in F}(Q_{e}\cup R_{e})\) and \(E(\mathcal{H})=\mathcal{T}\), where the ends of the edge \(T_{i}\) in \(\mathcal{H}\) are \(Q_{e}\) and \(R_{e^{\prime}}\) when the ends of the path \(T_{i}\) in \(G\) are in \(Q_{e}\) and \(R_{e^{\prime}}\); if there is more than one option for the end of the edge \(T_{i}\) (when this end is a branch vertex with degree at least three), then make an arbitrary choice. Let \(X\) be the set of cut edges in \(F\), \(Y=F\smallsetminus X\); then \(c=|X|\). Using Menger's Theorem, it suffices to show that \(\|\mathcal{H}\|\leq m^{2}-cm+\binom{c}{2}\). We first show that \(\mathcal{H}\) is simple; thus \(\|\mathcal{H}\|\leq m^{2}\) (this is essentially the argument in [4]). If not, then there are \(e,e^{\prime}\in F\) and distinct \(i,j\in[t]\) with \(T_{i},T_{j}\in E(Q_{e},R_{e^{\prime}})\). But then, by Lemma 4, we can enlarge \(Q_{e}\) or \(R_{e^{\prime}}\), contradicting maximality. Now suppose \(e=xy\in X\), and let \(M_{1}\) and \(M_{2}\) be the two components of \(M-e\) with \(y\in V(M_{2})\) and \(e^{\prime}\in E(M_{2})\). For \(i\in[2]\), put \(Q^{i}=\bigcup_{f\in E(M_{i})}Q_{f}\) and \(R^{i}=\bigcup_{f\in E(M_{i})}R_{f}\). Suppose \(Q_{e}R_{e^{\prime}},R_{e}Q_{e^{\prime}}\in E(\mathcal{H})\). Then there are paths \(P=u\ldots v,P^{\prime}=u^{\prime}\ldots v^{\prime}\in\mathcal{T}\) with \(u\in Q_{e}\), \(v\in R_{e^{\prime}}\), \(u^{\prime}\in R_{e}\) and \(v^{\prime}\in Q_{e^{\prime}}\). As \(M\) is connected, there are a \(q^{y},v^{\prime}\)-path \(\dot{Q}\subseteq Q\) and an \(r^{y},v\)-path \(\dot{R}\subseteq R\). See Figure 3, which illustrates the violation of the maximality of \(Q,R\). One of the \(M\)-subdivisions \[Q^{1}\cup q^{x}Q_{e}q^{y}\dot{Q}v^{\prime}P^{\prime}u^{\prime}R_{e}r^{y}\cup R^{2}\text{ and }R^{1}\cup r^{x}R_{e}r^{y}\dot{R}vPuQ_{e}q^{y}\cup Q^{2}\] has size greater than \(\mu\), contradicting maximality. Thus at most one of \(Q_{e}R_{e^{\prime}},R_{e}Q_{e^{\prime}}\) is an edge of \(\mathcal{H}\). So \[\|\mathcal{H}\| \leq|Y|^{2}+\frac{1}{2}(|X||Y|+|Y||X|+|X|(|X|-1))\] \[=(m-c)^{2}+c(m-c)+\frac{c(c-1)}{2}=m^{2}-cm+\binom{c}{2}.\qed\] We have no reason to believe that Proposition 1 is close to optimal. It may be interesting to investigate the case that \(M\) is a tree. For instance, it is not hard to see that \(Q_{e}R_{e^{\prime}}\notin E(\mathcal{H})\) for all \(e,e^{\prime}\in F\) with \(e,e^{\prime}\) incident to leaves in \(M\). Thus if \(M\) is a star and \(G\) is connected, then \(\mathcal{L}(M,G)\) is pairwise intersecting. It is well known [1, Exercise 1.27] that if \(G\) is a tree and \(\mathcal{L}:=\mathcal{L}(M,G)\) is pairwise intersecting, then \(\tau(\mathcal{L})=1\). We also have no reason to believe that Theorem 3 is close to optimal. For the case \(M=K_{2}\) (maximum paths) \(\tau\) could be constant. One advantage of considering the more general problem is that it may be easier to develop techniques for proving lower bounds in this setting. **Acknowledgement.** We thank a referee for suggesting that we extend our original argument for longest path transversals to maximum subdivision transversals.
2304.12610
Fast Continuous Subgraph Matching over Streaming Graphs via Backtracking Reduction
Streaming graphs are drawing increasing attention in both academic and industrial communities as many graphs in real applications evolve over time. Continuous subgraph matching (abbreviated as CSM) aims to report the incremental matches of a query graph in such streaming graphs. Answering CSM involves two major steps, i.e., candidate maintenance and incremental match generation. Throughout the course of continuous subgraph matching, incremental match generation, which backtracks over the search space, dominates the total cost. However, most previous approaches focus on developing techniques for efficient candidate maintenance, while incremental match generation receives less attention despite its importance in CSM. Aiming to minimize the overall cost, we propose two techniques to reduce backtrackings in this paper. We present a cost-effective index CaLiG that yields tighter candidate maintenance, shrinking the search space of backtracking. In addition, we develop a novel incremental matching paradigm KSS that decomposes the query vertices into conditional kernel vertices and shell vertices. With the matches of kernel vertices, the incremental matches can be produced immediately by joining the candidates of shell vertices without any backtrackings. Benefiting from reduced backtrackings, the elapsed time of CSM decreases significantly. Extensive experiments over real graphs show that our method runs orders of magnitude faster than the state-of-the-art algorithm.
Rongjian Yang, Zhijie Zhang, Weiguo Zheng, Jeffery Xu Yu
2023-04-25T06:54:35Z
http://arxiv.org/abs/2304.12610v1
# Fast Continuous Subgraph Matching over Streaming Graphs via Backtracking Reduction ###### Abstract. Streaming graphs are drawing increasing attention in both academic and industrial communities as many graphs in real applications evolve over time. Continuous subgraph matching (abbreviated as CSM) aims to report the incremental matches of a query graph in such streaming graphs. Answering CSM involves two major steps, i.e., candidate maintenance and incremental match generation. Throughout the course of continuous subgraph matching, incremental match generation, which backtracks over the search space, dominates the total cost. However, most previous approaches focus on developing techniques for efficient candidate maintenance, while incremental match generation receives less attention despite its importance in CSM. Aiming to minimize the overall cost, we propose two techniques to reduce backtrackings in this paper. We present a cost-effective index CaLiG that yields tighter candidate maintenance, shrinking the search space of backtracking. In addition, we develop a novel incremental matching paradigm KSS that decomposes the query vertices into conditional kernel vertices and shell vertices. With the matches of kernel vertices, the incremental matches can be produced immediately by joining the candidates of shell vertices without any backtrackings. Benefiting from reduced backtrackings, the elapsed time of CSM decreases significantly. Extensive experiments over real graphs show that our method runs orders of magnitude faster than the state-of-the-art algorithm. Subgraph Matching, Streaming Graph, Backtracking Reduction
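Although the full text is not reproduced here, the kernel-shell idea summarised in the abstract can be illustrated with a toy sketch: once the kernel vertices of the query are matched, full matches follow by combining the shell candidates without backtracking. The data structures and names below are hypothetical illustrations, not the paper's CaLiG index or KSS implementation.

```python
# Toy illustration of the shell-join step described in the abstract: given one match
# of the kernel vertices, enumerate full matches by combining shell candidates.
# All inputs are hypothetical stand-ins, not the paper's data structures.
from itertools import product

def join_shell(kernel_match, shell_candidates):
    """kernel_match: dict kernel query vertex -> matched data vertex.
    shell_candidates: dict shell query vertex -> set of candidate data vertices
    (assumed already filtered to be consistent with the kernel match)."""
    shells = list(shell_candidates)
    used = set(kernel_match.values())
    for combo in product(*(shell_candidates[u] for u in shells)):
        # enforce injectivity of the embedding; no backtracking is needed because
        # the candidate sets are assumed consistent with the kernel match
        if len(set(combo)) == len(combo) and used.isdisjoint(combo):
            yield {**kernel_match, **dict(zip(shells, combo))}

# Example with made-up vertices:
# list(join_shell({"u1": "v3"}, {"u2": {"v5", "v7"}, "u3": {"v9"}}))
```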
2310.06922
Tracing the Snowball bifurcation of aquaplanets through time reveals a fundamental shift in critical-state dynamics
The instability with respect to global glaciation is a fundamental property of the climate system caused by the positive ice-albedo feedback. The atmospheric carbon dioxide concentration at which this Snowball bifurcation occurs changes through Earth's history because of the slowly increasing solar luminosity. Quantifying this critical CO$_2$ level is not only interesting from a climate dynamics perspective, but also a prerequisite for understanding past Snowball Earth events as well as the conditions for habitability on Earth and other planets. Earlier studies are limited to investigations with simple climate models for Earth's entire history, or studies of individual time slices carried out with a variety of more complex models and for different boundary conditions, making comparisons and the identification of secular changes difficult. Here we use a coupled climate model of intermediate complexity to trace the Snowball bifurcation of an aquaplanet through Earth's history in one consistent model framework. We find that the critical CO$_2$ concentration decreases more or less logarithmically with increasing solar luminosity until about 1 billion years ago, but drops faster in more recent times. Furthermore, there is a fundamental shift in the dynamics of critical states about 1.2 billion years ago, driven by the interplay of wind-driven sea-ice dynamics and the surface energy balance: For critical states at low solar luminosities, the ice line lies in the Ferrel cell, stabilised by the poleward winds despite moderate meridional temperature gradients under strong greenhouse warming. For critical states at high solar luminosities on the other hand, the ice line rests at the Hadley-cell boundary, stabilised against the equatorward winds by steep meridional temperature gradients resulting from the increased solar energy input at lower latitudes and stronger Ekman transport in the ocean.
Georg Feulner, Mona Bukenberger, Stefan Petri
2023-10-10T18:22:59Z
http://arxiv.org/abs/2310.06922v1
Tracing the Snowball bifurcation of aquaplanets through time reveals a fundamental shift in critical-state dynamics ###### Abstract The instability with respect to global glaciation is a fundamental property of the climate system caused by the positive ice-albedo feedback. The atmospheric concentration of carbon dioxide (CO\({}_{2}\)) at which this Snowball bifurcation occurs changes through Earth's history, most notably because of the slowly increasing solar luminosity. Quantifying this critical CO\({}_{2}\) concentration is not only interesting from a climate dynamics perspective, but also an important prerequisite for understanding past Snowball Earth episodes as well as the conditions for habitability on Earth and other planets. Earlier studies are limited to investigations with very simple climate models for Earth's entire history, or studies of individual time slices carried out with a variety of more complex models and for different boundary conditions, making comparisons and the identification of secular changes difficult. Here we use a coupled climate model of intermediate complexity to trace the Snowball bifurcation of an aquaplanet through Earth's history in one consistent model framework. We find that the critical CO\({}_{2}\) concentration decreases more or less logarithmically with increasing solar luminosity until about 1 billion years ago, but drops faster in more recent times. Furthermore, there is a fundamental shift in the dynamics of the critical state about 1.2 billion years ago (unrelated to the downturn in critical CO\({}_{2}\) values), driven by the interplay of wind-driven sea-ice dynamics and the surface energy balance: For critical states at low solar luminosities, the ice line lies in the Ferrel cell, stabilised by the poleward winds despite moderate meridional temperature gradients under strong greenhouse warming. For critical states at high solar luminosities on the other hand, the ice line rests at the Hadley-cell boundary, stabilised against the equatorward winds by steep meridional temperature gradients resulting from the increased solar energy input at lower latitudes and stronger Ekman transport in the ocean. ## 1 Introduction Today, the climate of planet Earth is in a state which is neither too hot nor too cold for life, with the ice line being far from the equator. The position of the ice line is ultimately determined by the planetary energy balance, in particular the incoming solar radiation and the concentration of greenhouse gases in the atmosphere, as well as the ice-albedo feedback and meridional heat transport in Earth's climate system (e.g. North et al., 1981). One of the consequences of the positive ice-albedo feedback is the phenomenon that over a range of boundary conditions more than one state of Earth's climate can be stable for the same levels of incoming solar radiation and greenhouse gases. So even with the current solar luminosity and atmospheric composition, Earth might as well rest in a state a lot less welcoming to living beings - a fully glaciated Snowball state with surface temperatures multiple tens of degrees lower than today (e.g. North, 1990). There would be no liquid water at Earth's surface, a necessary condition at least for complex life as we know it. For solar luminosities and/or greenhouse-gas concentrations lower than today, a bifurcation point in phase space is reached at some point. This means that for a certain set of parameters, the system ends up with only the Snowball state being stable. 
If the current climate cooled down, the ice line would descend slowly towards the equator at first. But at some point, the system would tip: Earth would rapidly cool down, ending up in a global glaciation (Opik, 1953). The Snowball instability crossed in this case is fundamental to the climate system and ultimately caused by the positive ice-albedo feedback. The first to quantify the Snowball instability in terms of the critical solar luminosities for given system parameters were Budyko (1969) and Sellers (1969) who analysed the Snowball bifurcation with one-dimensional energy balance models (EBMs, North et al. 1981). These EBMs are built upon the principle of energy conservation at a given latitude and consist of either a time-dependent differential equation for the temperature as a function of latitude or a time-independent equation assuming that the system is in equilibrium. Budyko and Sellers found that even a small decrease in solar luminosity would suffice for an Earth with today's system parameters (in terms of the greenhouse effect) to end up in a state of complete glaciation. Despite their simplicity, many authors have used EBMs - often incorporating various modifications and extensions - ever since the pioneering studies by Budyko (1969) and Sellers (1969), especially in order to understand how various factors influence the Snowball instability and the climate system's phase space properties (Faegre, 1972; Schneider and Gal-Chen, 1973; Held and Suarez, 1974; North, 1975a, b; Gal-Chen and Schneider, 1976; Ghil, 1976; Drazin and Griffel, 1977; Lindzen and Farrell, 1977; Cahalan and North, 1979; North and Coakley, 1979; North et al., 1981, 1983; North, 1990; Huang and Bowman, 1992; Ikeda and Tajika, 1999; Shen and North, 1999; Rose and Marshall, 2009; Roe and Baker, 2010). In terms of Earth's long-term habitability, the Snowball bifurcation is particularly relevant in light of the fact that the solar luminosity was considerably lower in the past (e.g. Gough, 1981). The time evolution of the critical CO\({}_{2}\) concentration required to prevent global glaciation has typically been studied with radiative-convective models (RCMs; Ramanathan and Coakley, 1978) rather than EBMs. However, in RCMs, the ice-albedo feedback is not taken explicitly into account, but has typically been considered by requiring a global mean surface temperature well above the freezing point of water. Examples for such investigations can be found in Kasting (1987) or von Paris et al. (2008), for example. In addition to considerations of planetary habitability, open problems related to specific time periods in Earth's history are the second important reason why one should be interested in the Snowball instability. The most relevant geologic eons in this respect are the Archean (about 4 to 2.5 Ga, 1 Ga \(=10^{9}\) years ago) with evidence for habitable conditions on Earth despite a considerably fainter young Sun (Feulner, 2012; Charnay et al., 2020) and the Proterozoic (about 2.5 to 0.54 Ga) for which there is geological evidence for near-global glaciations both in the beginning (Tang and Chen, 2013) and towards the end of the con (Hoffman and Schrag, 2002). During the Archean, the luminosity of the young Sun was reduced by 20-25% as compared to today. Results from EBMs and RCMs suggest that such a large reduction in the incoming solar radiation should have turned Earth into a Snowball, yet the geologic record suggests the presence of liquid water. 
This puzzling discrepancy, known as the _Faint Young Sun Paradox_(Feulner, 2012; Charnay et al., 2020), has led to considerable interest in quantifying the Snowball bifurcation point on early Earth. For four decades, climate-modelling work on the Faint Young Sun Paradox has been dominated by radiative-convective climate models essentially neglecting the ice-albedo feedback. More recently, however, a number of research groups have published first results from three-dimensional climate models investigating solutions for the Faint Young Sun paradox (Kienert et al., 2012, 2013; Charnay et al., 2013; Wolf and Toon, 2013; Kunze et al., 2014; Le Hir et al., 2014). However, all of these model studies are limited either by simplifications in their atmosphere or in their ocean/sea-ice components and use a variety of different assumptions and boundary conditions, leading to a large spread in simulated critical CO\({}_{2}\) concentrations. In contrast to the Archean, there is geologic evidence for low-latitude glaciations during the Proterozoic (Tang and Chen, 2013; Hoffman and Schrag, 2002), so the central question becomes _What are the conditions required for Snowball events?_ rather than _What prevented Earth from freezing over completely?_ This problem has been investigated in a number of modelling studies, in particular for the Neoproterozoic (e.g. Chandler and Sohl, 2000; Lewis et al., 2003; Poulsen et al., 2002; Donnadieu et al., 2004; Lewis et al., 2007; Pierrehumbert et al., 2011; Voigt et al., 2011; Voigt and Abbot, 2012; Liu et al., 2013; Feulner and Kienert, 2014; Feulner et al., 2015; Liu et al., 2017; Braun et al., 2022; Horner et al., 2022), yielding critical CO\({}_{2}\) concentrations from below 40 ppm to about 700 ppm depending on model type and boundary conditions. In most climate modelling studies, global glaciation is initiated once the ice margin moves to latitudes lower than about 30\({}^{\circ}\). Some models, however, exhibit stable critical states with ice lines much closer to the equator, often referred to as waterbelt or Jormungand states (e.g. Abbot et al., 2011). In these models, such states are stabilised by a number of mechanisms, most importantly the low albedo of bare sea ice and wind-driven meridional heat transport in the ocean towards the ice line (Voigt and Abbot, 2012). It should be noted, however, that the existence of stable states with tropical sea-ice margins often occurs in models lacking representation of sea-ice dynamics which has been demonstrated to strongly destabilise such states (Voigt and Abbot, 2012). Recently, Braun et al. (2022) have shown that tropical waterbelt states are strongly influenced by cloud radiation and microphysics, questioning the stability of these states under Neoproterozoic boundary conditions. Despite the relevance of the phenomenon for our understanding of past climate states and planetary habitability, there has - to our knowledge - not yet been an attempt to study the Snowball instability throughout Earth's history within one consistent framework and using a more complex climate model. At one end of the spectrum, earlier studies comprise conceptual investigations with very simple climate models like RCMs or EBMs. The latter in particular help to understand the principles of the instability and changes in phase space. 
At the other end of the spectrum, there are many investigations of single events in Earth's history with climate models of various complexities, ranging from models of intermediate complexity to atmosphere-ocean general circulation models (AOGCMs), and with a variety of different boundary conditions. These studies provide detailed insights about the time slices investigated, but the lack of uniform simulation design, model architecture and boundary conditions makes it hard to compare them to each other or to study the evolution of the Snowball instability with time. In this study, we want to bridge the gap between studies of the Snowball instability for single time slices with complex models and conceptual investigations of its time evolution. To this end, we use an Earth-system model of intermediate complexity (EMIC, Claussen et al. 2002) in an aquaplanet configuration to scan for the Snowball bifurcation point for time slices spanning the last 4 billion years, thus quantifying the time evolution of the bifurcation and identifying a qualitative shift in critical state dynamics. Although aquaplanet setups were used in the context of Snowball glaciations before (e.g. Pierrehumbert et al., 2011; Braun et al., 2022; Horner et al., 2022), we uniquely focus on the long-term evolution of the bifurcation point. This article is organised as follows. In Sect. 2, we describe the coupled climate model used to scan the Snowball bifurcation of aquaplanets through Earth history, the boundary conditions as well as design of our numerical experiments. In Sect. 3, we present the Snowball bifurcation for aquaplanets through time and compare our results to earlier studies (Sect. 3.1). Furthermore, we describe global properties of the critical states for the different time slices (Sect. 3.2) and discuss the two different dynamical regimes for the critical state (Sect. 3.3). Finally, in Sect. 4 we summarise the major findings of our work in the context of earlier studies, discuss limitations and outline potential future research. ## 2 Methods: Coupled climate model simulations ### Model description Scanning for the Snowball bifurcation for more than a dozen time slices throughout Earth's history requires a relatively fast coupled climate model. We employ the Earth-system model of intermediate complexity CLIMBER-\(3\alpha\)(Montoya et al., 2005). CLIMBER-\(3\alpha\) consists of a modified version of the ocean general circulation model (OGCM) MOM3 (Pacanowski and Griffies, 1999; Hofmann and Morales Maqueda, 2006) with a horizontal resolution of \(3.75^{\circ}\times 3.75^{\circ}\) and \(24\) vertical levels, a dynamic/thermodynamic sea-ice model (Fichefet and Morales Maqueda, 1997) at the same horizontal resolution and allowing for partially ice-covered grid cells, and a fast statistical-dynamical atmosphere model (Petoukhov et al., 2000) with a coarse horizontal resolution of \(22.5^{\circ}\) in longitude and \(7.5^{\circ}\) in latitude. We emphasize that the sea-ice model explicitly takes into account sea-ice dynamics, a factor which has been found to be of crucial importance for the Snowball bifurcation (Lewis et al., 2003, 2007; Voigt and Abbot, 2012). The Snowball bifurcation also critically depends on cryosphere albedo (e.g. Yang et al., 2012a). Our model uses clear-sky albedo values (averaged over all wavelengths) of 0.50 and 0.40 for freezing and melting sea ice, and 0.75 and 0.65 for cold and warm snow, respectively. 
Albedo values for ultraviolet+visible light are 0.30 larger than near-infrared albedos, and a partitioning of 60% (ultraviolet+visible) and 40% (near-infrared) is assumed. The effects of snow cover on sea ice are explicitly taken into account. The main limitations of the model relate to its simplified atmosphere component (Petoukhov et al., 2000). Particularly relevant for this study are the coarse spatial resolution, the highly parameterised vertical structure, the simple two-layer cloud scheme with cloud fractions depending on humidity and vertical velocity, and the simplified description of large-scale circulation patterns, including the fixed annual-mean width of the Hadley and Ferrel cells. Note that the boundary between the Hadley cells moves with the thermal equator, with a corresponding, but smaller shift in the boundaries between the Hadley and the Ferrel cells, see Petoukhov et al. (2000, Section 3.2). Thus the overall changes of the large-scale circulation with the seasonal cycle are represented in the model in principle. We note, however, that despite these limitations the Snowball bifurcation points derived for Neoproterozoic time slices with our model (Feulner and Kienert, 2014) fall well within the range of those from state-of-the-art atmosphere-ocean general circulation models (AOGCMs, see also Figure 1) and agree very well with models using similar cryosphere albedo values (Voigt and Abbot, 2012; Yang et al., 2012c; Liu et al., 2013). The impact of model limitations on our results will be discussed in Sect. 4. ### Boundary conditions and design of numerical experiments To facilitate comparison between the different time slices we chose an aquaplanet configuration without any continents. In contrast to some other coupled model simulations of aquaplanets, we do not place small islands at the poles; the poles are treated similar to the North pole in present-day simulations with ocean models using spherical grids, applying filtering in the polar regions to prevent numerical instability. The ocean topography was generated by randomly assigning an ocean depth to each grid cell using a Gaussian probability distribution with a mean depth of 3000 m and a variance of 450 m. For each grid cell, the resulting random depth value was then assigned to the corresponding vertical level of the ocean model's grid. We chose to have an ocean with varying depth rather than a flat ocean floor in order to avoid potential numerical instabilities. The Snowball bifurcation point is derived for a total of 18 time slices ranging from today to 3600 Ma (million years ago), see Table 1. The solar constant was scaled based on the approximation formula from Gough (1981), assuming a present-day value of \(S_{0}\!=\!1361\) W/m\({}^{2}\) (Kopp and Lean, 2011) and an age of the Sun of 4570 Ma (Bonanno et al., 2002). Orbital parameters were set to a circular orbit with obliquity \(23.5^{\circ}\). For each time slice, a number of equilibrium simulations were run for different CO\({}_{2}\) concentrations bracketing the Snowball bifurcation (see Table 1). In addition, we have run model experiments at two fixed levels of CO\({}_{2}\) (10,000 ppm and 10 ppm) and decreasing solar luminosities of 1140 W/m\({}^{2}\), 1130 W/m\({}^{2}\), 1125 W/m\({}^{2}\) and 1120 W/m\({}^{2}\) for 10,000 ppm, and 1334 W/m\({}^{2}\), 1329 W/m\({}^{2}\) and 1324 W/m\({}^{2}\) for 10 ppm respectively. For the lowest value of the solar constant in each of these cases the model entered a Snowball state. 
The total atmospheric pressure was kept constant at 1 bar in all simulations. Most simulations were initialised from a warm, ice-free state with idealised symmetric present-day ocean temperature and salinity fields. Note that critical state characteristics might depend on initial conditions (e.g. Yang et al., 2012a); our results are valid for trajectories starting from a warm, ice-free state, other initial conditions are not investigated here. In many cases simulations pinpointing the Snowball bifurcation point were branched from runs with higher CO\({}_{2}\) concentrations. All simulations were integrated for at least 2,000 model years after the last change in CO\({}_{2}\) concentration to allow the ocean approaching equilibrium conditions. ## 3 Results ### The Snowball bifurcation for aquaplanets through time The critical CO\({}_{2}\) concentration for the Snowball bifurcation through Earth's history as derived from the aquaplanet simulations with CLIMBER-3\(\alpha\) is shown in Figure 1. As expected, there is excellent agreement between the bifurcation points derived at the two fixed CO\({}_{2}\) levels and the closest corresponding simulation derived at fixed values of the solar constant. In other words, there is no fundamental difference between scanning for the critical values in the horizontal and the vertical direction in the diagram. In the figure the values are also compared to proxy estimates for the past CO\({}_{2}\) concentration in Earth's atmosphere and to earlier modelling studies for individual time slices, the latter differentiated by model type. With solar luminosity increasing over time, the critical CO\({}_{2}\) concentration in our aquaplanet simulations falls from \(\sim 10^{5}\) ppm to below 1 ppm between 4 Ga and today. This decrease is more or less logarithmic from 4 Ga to 1 Ga, with more steeply falling CO\({}_{2}\) levels in more recent times. The logarithmic decrease with linearly increasing insolation can be understood in terms of the well-known logarithmic behaviour of the CO\({}_{2}\) radiative forcing (Huang and Bani Shahabadi, 2014). The downturn of the critical CO\({}_{2}\) concentration for solar luminosities approaching the modern value can be attributed to three factors: (1) The steadily increasing incoming solar radiation (which shows a strong variation with latitude) leads to a more positive surface radiation balance in particular over the open ocean areas despite a relatively weak greenhouse warming (which is more uniformly distributed), making sea-ice formation at low latitudes increasingly difficult. (2) Even at the very low global mean surface air temperatures of \(\sim\!-15^{\circ}\)C of the critical states (see Figure 2), there is greenhouse warming due to water vapour which becomes significant at very low CO\({}_{2}\) levels. This is also facilitated by the fact that the maximum of the thermal emission spectrum is shifted towards longer wavelengths and thus into the H\({}_{2}\)O rotational bands because of the lower temperature. (3) Finally, there is also a warming contribution from cloud radiative effects. 
The critical CO\({}_{2}\) concentrations as a function of the solar constant \(S\) derived from our aquaplanet simulations can be approximated by the following formula (\(S_{0}\) is the present-day solar constant): \[p\mathrm{CO}_{2,\mathrm{crit}}\,=\,a_{1}\exp\left[a_{2}\left(a_{3}-\frac{S}{S_ {0}}\right)^{a_{4}}\right] \tag{1}\] Fitting this function to the points derived from the aquaplanet simulations yields the following parameters: \(a_{1}\!=\!(0.0836\!\pm\!0.0721)\) ppm, \(a_{2}\!=\!26.6\!\pm\!0.7\), \(a_{3}\!=\!1.0\!\pm\!0.00002\) and \(a_{4}\!=\!0.475\!\pm\!0.051\), providing a good approximation over the entire range of solar luminosities, see Figure 1. Before discussing our results in the context of previous model studies for several key time periods in Earth's history in Sect. 3.1.2, we will describe and discuss more general features which can be derived from this synthesis and earlier studies. #### 3.1.1 General observations on the Snowball bifurcation **The presence of continents and ice sheets makes global glaciations easier.** For models of similar design, the presence of continents makes Earth more susceptible to glaciation as compared to an aquaplanet configuration, predominantly due to the higher surface albedo of land areas compared to open oceans and the lower water vapour content of the atmosphere (Poulsen et al., 2002; Voigt et al., 2011; Liu et al., 2013; Kunze et al., 2014). The aquaplanet simulation for a modern solar constant and 277 ppm of CO\({}_{2}\), for example, has a global and annual mean surface air temperature of \(19.4^{\circ}\)C compared to \(15.1^{\circ}\)C in a pre-industrial simulation using our model (Feulner, 2011). Similarly, our simulations with Neoproterozoic continents (Feulner and Kienert, 2014) indicate slightly higher critical CO\({}_{2}\) values than the aquaplanet simulation at similar solar luminosity, although the difference is small in this case due to the low albedo of bare land assumed in Feulner and Kienert (2014). Note that in the extreme case of a fully land-covered planet global glaciation becomes more difficult because the drier atmosphere leads to reduced cloud and snow cover and thus a lower albedo compared to the aquaplanet case (Abe et al., 2011). It has also been shown already that the presence of tropical ice sheets shifts the Snowball bifurcation point to values \(\sim\)3-10 times higher (Liu et al., 2017) than without ice sheets (Liu et al., 2013). The combined effect of continents and polar ice sheets is also the most likely cause for the lower critical CO\({}_{2}\) concentrations of the aquaplanet simulations for modern boundary conditions (Yang et al., 2012a, b, c) and the Late Paleozoic Ice Age (Feulner, 2017). In addition, the simulations in Feulner (2017) were carried out for a glacial orbital configuration rather than the circular orbit used in the aquaplanet simulations. **Meridional ocean heat transport makes global glaciation more difficult.** It is evident from Fig. 1 that AGCMs without ocean heat transport consistently show bifurcation points at higher CO\({}_{2}\) levels than models with prescribed ocean heat transport or dynamic ocean models. This is in line with earlier findings showing that ocean heat transport towards the sea-ice edge, in particular by the wind-driven ocean circulation, makes Snowball initiation harder (Poulsen and Jacob, 2004; Voigt and Abbot, 2012; Rose, 2015). 
**Models without sea-ice dynamics are too stable.** Studies carried out with atmospheric general circulation models (AGCMs) coupled to mixed-layer ocean models with prescribed ocean heat transport, but without dynamic sea ice tend to predict lower values for the Snowball bifurcation point (see Fig. 1). Indeed, the fact that models without sea-ice dynamics are artificially stable with respect to the Snowball bifurcation has been noted before (Lewis et al., 2003, 2007; Voigt and Abbot, 2012). **The glaciation threshold depends on sea-ice and snow albedo.** Even for models of the same design there is considerable spread in the values for the Snowball bifurcation point for similar boundary conditions (see Figure 1). Differences in cloud radiative forcing and simulated heat transport in the atmosphere and the oceans can contribute to this spread, however, the predominant causes are differences in sea-ice and snow albedo values and parametrisations (Pierrehumbert et al., 2011; Yang et al., 2012c). These have been identified as the cause of the difference between the bifurcation points derived with CCSM3 (Yang et al., 2012a) and CCSM4 (Yang et al., 2012c) for the same present-day boundary conditions, for example. \begin{table} \begin{tabular}{r r r r r} \hline \hline Age & \(S\) & \(S/S_{0}\) & Non-Snowball states & Snowball state \\ (Ma) & (W/m\({}^{2}\)) & & \(p\)CO\({}_{2}\) (ppm) & \(p\)CO\({}_{2}\) (ppm) \\ \hline 0 & 1361 & 1.000 & 277, 1, 0.9, 0.8, 0.7, 0.5, 0.3, 0.1 & 0 \\ 150 & 1343 & 0.987 & 4, 3 & 2 \\ 300 & 1327 & 0.975 & 30, 15, 12, 11 & 10 \\ 500 & 1304 & 0.958 & 60, 50, 45 & 40 \\ 700 & 1285 & 0.944 & 110, 100 & 90 \\ 900 & 1261 & 0.927 & 600, 500, 400, 350, 300, 290, 280, 270, 260, 250, 240 & 230 \\ 1050 & 1246 & 0.916 & 400, 390, 380 & 370 \\ 1200 & 1231 & 0.904 & 600, 590, 580, 570 & 560 \\ 1350 & 1217 & 0.894 & 900, 850, 830, 828, 826, 824, 822 & 820 \\ 1500 & 1203 & 0.884 & 1400, 1300, 1250, 1200, 1180, 1170, 1165, 1163, 1161 & 1160 \\ 1650 & 1190 & 0.874 & 1900, 1850, 1800, 1700, 1650, 1640, 1630 & 1620 \\ 1800 & 1176 & 0.864 & 3000, 2900, 2800, 2700, 2650, 2620 & 2610 \\ 2100 & 1149 & 0.844 & 5000, 4980, 4960, 4950, 4940 & 4920 \\ 2400 & 1125 & 0.827 & 12500, 11500, 11000, 10000, 9900 & 9800 \\ 2700 & 1100 & 0.808 & 2000, 19500, 19300, 19200 & 19100 \\ 3000 & 1078 & 0.792 & 35000, 34000, 33000, 32700, 32600 & 32500 \\ 3300 & 1055 & 0.775 & 60000, 56500, 56000, 55500, 54000, 53000 & 52000 \\ 3600 & 1034 & 0.760 & 80000, 79500, 79000, 78600, 78200, 78000, 77700 & 77300 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of simulation experiments with the age of each time slice, the value of the solar constant \(S\) used in the simulations as well as the ratio \(S/S_{0}\) to the present-day value \(S_{0}\), and the atmospheric CO\({}_{2}\) concentration of not fully glaciated and Snowball states. #### 3.1.2 Comparison with earlier modelling studies for selected time periods **Modern Snowballs.** The Snowball bifurcation point for modern boundary conditions has been quantified with an AGCM in terms of reduced CO\({}_{2}\) by Romanova et al. (2006) and with AOGCMs in terms of a reduced solar constant by Voigt and Marotzke (2010) and by Yang et al. (2012a, b, c). 
For pre-industrial greenhouse gas concentrations, the AOGCM studies put the bifurcation point in the ranges 91-94% (Voigt and Marotzke, 2010), 89.5-90% (Yang et al., 2012a) and 91-92% (Yang et al., 2012c) of the present-day solar constant, comparing well to about 91% of today's solar constant in our simulations (see Figure 1). For a reduction in CO\({}_{2}\) and a fixed present-day solar constant, the bifurcation point in our aquaplanet simulations is below a CO\({}_{2}\) concentration of 0.1 ppm. This agrees well with the AGCM study by Romanova et al. (2006) where their model is not in a Snowball state at their lowest CO\({}_{2}\) concentration of 1 ppm. Voigt and Marotzke (2010) used ECHAM5/MPI-OM finding a Snowball state at 0.1 ppm of CO\({}_{2}\) for present-day continents and a slightly higher value of the solar constant of Figure 1: Snowball bifurcation (in terms of CO\({}_{2}\) partial pressure assuming 1 bar total atmospheric pressure) as a function of solar luminosity for the aquaplanet simulations presented in this work (large blue circles). The open circles indicate the bifurcation points in solar luminosity for the additional model experiments at fixed levels of 10,000 ppm and 10 ppm of CO\({}_{2}\). The red line shows a fit of the function defined in Equation (1) to these bifurcation points, see the main text for details. The smaller black symbols denote earlier modelling studies with AOGCMs (squares), OGCMs coupled to a simplified atmosphere model (circles) and AGGCMs coupled to a simplified ocean model (triangles). The model results are averages between the highest CO\({}_{2}\) concentration (or solar constant) of fully glaciated states and the lowest value of states with open water, with the range between these two values indicated by the error bars unless the latter are smaller than the symbols. Note that some model studies provide upper limits (indicated by arrows) rather than ranges. In cases where contributions of other greenhouse gases (in particular methane) were included in the models we show the effective CO\({}_{2}\) concentration calculated from the combined forcing. All model results are scaled to a modern solar constant of \(1361\) W m\({}^{-2}\) (Kopp and Lean, 2011). Estimates of past atmospheric CO\({}_{2}\) levels are shown in grey; Phanerozoic CO\({}_{2}\) values are taken from Foster et al. (2017), the modern value is shown as a diamond. Global glaciations in Earth’s history (light blue shading) occurred during the Paleoproterozoic and Neoproterozoic eras. Proxy data: R95 - Rye et al. (1995), H04 - Hessler et al. (2004), S06 - Sheldon (2006), R10 - Rosing et al. (2010), D11 - Driese et al. (2011), KM15 - Kanzaki and Murakami (2015), L+20 - Lehmer et al. (2020), ST20 - Strauss and Tosca (2020). Model studies: CS00 - Chandler and Sohl (2000), L+03 - Lewis et al. (2003), P+02 - Poulsen et al. (2002), R+06 - Romanova et al. (2006), VM10 - Voigt and Marotzke (2010), P+11 - Pierrehumbert et al. (2011), V+11 - Voigt et al. (2011), K+12 - Kienert et al. (2012), VA12 - Voigt and Abbot (2012), Y+12a - Yang et al. (2012a), Y+12c - Yang et al. (2012c), C+13 - Charany et al. (2013), L+13 - Liu et al. (2013), L+14 - Le Hir et al. (2014), WT13 - Wolf and Toon (2013), FK14 - Feulner and Kienert (2014), K+14 - Kunze et al. (2014), T+14 - Teitler et al. (2014), F17 - Feulner (2017), FS17 - Fiorella and Sheldon (2017), L+17 - Liu et al. (2017), B+22 - Braun et al. (2022), H+22 - Horner et al. (2022). 
1367 W m\({}^{-2}\), again in good agreement with our results given the cooling influence of continents and the generally higher susceptibility to global glaciation of their AOGCM. **Proterozoic.** For the Neoproterozoic, earlier model studies indicate critical CO\({}_{2}\) concentrations from below 40 ppm in an AGCM simulation (Chandler and Sohl, 2000) to about 100-700 ppm in AOGCMs and OGCMs with sea-ice dynamics (Poulsen et al., 2002; Voigt et al., 2011; Voigt and Abbot, 2012; Liu et al., 2013; Feulner and Kienert, 2014; Liu et al., 2017). (The reason for the higher value found by Lewis et al. (2003) remains unclear, but could be connected to the fact that surface winds are prescribed in their model.) In their studies on modern Snowballs, Yang et al. (2012a, c) find critical CO\({}_{2}\) concentrations of about 20 ppm to 100 ppm depending on model version for a solar constant representative of the Neoproterozoic, i.e. reduced by 6% compared to its modern value. In our aquaplanet configuration, the corresponding value at 700 Ma is \(95\pm 5\) ppm and thus well within the typical range of the models with ocean and sea-ice dynamics. With the exception of Chandler and Sohl (2000) and Yang et al. (2012a), our value is somewhat lower than the one derived in other studies, as one would expect for a configuration without continents or ice sheets and for the snow and sea-ice albedo values employed in our model (see also Sect. 2.1 and Sect. 3.1.1). Furthermore, there is also excellent agreement with the bifurcation point quantified using an AOGCM for an earlier time slice at 1 Ga by Fiorella and Sheldon (2017), where our values for 900 Ma and 1,200 Ma are again compatible with the lower range of their estimate, see Figure 1. **Archean.** In many ways, the situation is most complicated for the Archean where the Snowball bifurcation has been quantified in a number of model studies in the context of the Faint Young Sun Paradox (Feulner, 2012; Charnay et al., 2020). So far, there are no AOGCM studies for the Archean yet. The bifurcation points from Kienert et al. (2012), quantified with an earlier version of the model used in this work, are higher than the ones for the aquaplanet configuration, mainly caused by the higher cryosphere albedo values used in Kienert et al. (2012) and the different boundary conditions, in particular the higher rotation rate. All other model studies use AGCMs with simplified ocean components and sea-ice models without dynamics (Charnay et al., 2013; Wolf and Toon, 2013; Kunze et al., 2014; Le Hir et al., 2014; Teitler et al., 2014). The critical CO\({}_{2}\) values from these studies are generally (and in many cases significantly) below the values derived for the aquaplanets in this paper. While we cannot rule out a contribution from the simplified atmosphere component used in our model, the lack of full ocean and in particular sea-ice dynamics in the other studies is likely to be a significant factor in explaining this discrepancy. In the end, this question can only be fully answered by running AOGCM simulations with Archean boundary conditions which is beyond the scope of the present work. ### Global characteristics of aquaplanets critical states In the following, we will have a more detailed look at global characteristics of the climate states close to the Snowball bifurcation, i.e. the stationary states for all investigated solar luminosities with the lowest concentration of CO\({}_{2}\) in the atmosphere for which the system does not fall into a Snowball state. 
Let us first take a look at the global annual mean surface air temperature and the mean sea-ice fraction of the critical states for the different time slices shown in Figure 2, where the averages are taken over the last 100 years of each simulation. Whereas the critical CO\({}_{2}\) concentrations as a function of solar luminosity would indicate a fairly smooth change over time (see Figure 1), there is a marked shift in both the global annual mean surface air temperature and the mean sea-ice fraction: Critical states at low solar luminosities consistently have global mean temperatures of about \(-2^{\circ}\)C and average sea-ice fractions of about 33%, whereas critical states at higher solar luminosities exhibit much lower temperatures of about \(-15^{\circ}\)C and higher sea-ice fractions of 50%. In addition, there is a slight cooling trend with increasing solar luminosities for the critical states both at lower and at higher solar luminosities. Finally, the sequence of critical states with higher sea-ice fractions shows remarkably little scatter in this quantity, see Figure 2 (b). Note that the shift in critical-state properties is not related to the downturn of the bifurcation limit at higher solar luminosities discussed in Sect. 3.1. The values of 33% and 50% for the global annual mean sea-ice fraction found for the critical states can be translated into latitudes of the sea-ice margin of 42\({}^{\circ}\) and 30\({}^{\circ}\), respectively, if one assumes full ice cover over two symmetric polar caps which is a reasonable assumption for aquaplanets. Thus, for lower solar luminosities, the sea-ice margin rests within the Ferrel cell of the atmosphere's large-scale circulation, whereas for higher solar luminosities it corresponds to the Hadley-cell boundary (which is fixed at 30\({}^{\circ}\) in our simplified atmosphere model). We will therefore refer to the critical states at lower solar luminosity as _Ferrel states_ and to the ones at higher solar luminosity as _Hadley states_ for simplicity. In Figure 2, we focus on the critical states only, i.e. the coldest not fully glaciated state at each solar luminosity. While we did not find any Hadley states at low solar luminosities, the question remains whether there could be Ferrel states at higher solar luminosities. This can be investigated by looking at equilibrium states for a range of CO\({}_{2}\) concentrations for one particular time slice at higher solar luminosity. The result for the 900-Ma time slice is shown in Figure 3, where one can see that even for higher solar luminosities Ferrel states can indeed be stable states at higher atmospheric CO\({}_{2}\) levels, as already indicated by the grey shading in Figure 2. On the other hand, we can also look for transient Hadley states in simulations for Ferrel-state time slices where the system falls into the Snowball state. In these simulations, the unstable Hadley state can clearly be identified as a time period with close to 50% global and annual mean sea-ice cover \(f_{\rm seaice}\). In agreement with the findings of Figure 2, these states (defined by \(0.497<f_{\rm seaice}<0.503\)) become more stable with increasing solar luminosity: While the system spends only 70 years in the transient Hadley state at 3600 Ma, for example, this length more than quintuples to 396 years at 1350 Ma (Figure 4). 
In summary, we find a marked shift in the characteristics of the critical states in the time period from 1350 to 1200 Ma: While a Ferrel state with the sea-ice margin in the middle of the Ferrel cell is stable for all luminosities, a Hadley state with the ice-line at the Hadley-cell boundary can only be stable for solar luminosities above about 90% of the modern solar constant. The fact that the critical state at 1800 Ma is a Hadley state already whereas the critical states at 1650, 1500 and 1350 Ma are Ferrel states again could be due to the sensitivity of the system with respect to small changes, in particular in the transition period. ### Dynamics of the Snowball bifurcation of aquaplanets In order to understand the different regimes of the critical states we will have a more detailed look at typical Ferrel and Hadley states. To this end, we select the critical states at 3000 Ma (a Ferrel state) and 900 Ma (a Hadley state) and investigate the spatial patterns of surface air temperature, sea-ice fraction, and surface winds (Figure 5). Maps of annual mean surface air temperatures show that the Ferrel state is considerably warmer at all latitudes (Figure 5a,c). The annual mean sea-ice distributions exhibit marked differences as well: The Hadley state at 900 Ma has a very well defined, sharp sea-ice margin, whereas the Ferrel state at 3000 Ma is characterised by a much more fuzzy transition between open ocean and full ice cover (Figure 5b,d). In both critical states the wind fields show the usual pattern of the large-scale at Figure 2: (a) Global annual mean surface air temperature and (b) sea-ice fraction of critical states as a function of solar luminosity (blue circles). The right-hand vertical axis in (b) indicates the effective latitude of the critical ice line, calculated from the sea-ice fraction assuming full ice cover over two symmetric polar caps which is a reasonable approximation for the aquaplanet critical states (see Figure 5). The grey shading indicates the \(\pm 3\sigma\) ranges of the respective values. Note that the states with higher temperature and lower sea-ice fraction are also stable states at higher solar luminosities as shown for the 900-Ma time slice in Figure 3. Figure 4: Transient Hadley states with a global mean sea-ice cover of about 50% in simulations on track to a Snowball. The transient Hadley phase can be clearly seen as a plateau in all simulations. To capture the entry into and the exit from this state for all time slices, we use a range of \(0.497<f_{\rm sea,ice}<0.503\) to define the transient Hadley phase, indicated by the horizontal lines. Figure 3: Global and annual mean sea-ice fraction in equilibrium climate states at 900 Ma for various concentrations of atmospheric CO\({}_{2}\). Grey shaded areas are as in Figure 2. Note that the simulation at 230 ppm is on track to a Snowball state, but has not reached equilibrium due to numerical instability. mospheric circulation, with a high degree of symmetry about the equator as one would expect for aquaplanet boundary conditions. These findings are even more evident from the zonal averages of annual mean surface air temperature, sea-ice fraction, and the meridional component of the surface wind for the two critical states shown in Figure 6. 
In particular we would like to highlight (1) the steeper temperature gradient across the sea-ice margin in the Hadley state, (2) the position of the sea-ice margin in the Hadley state close to the point where the meridional surface winds change their direction, and (3) the latitude of the sea-ice margin in the Ferrel state close to the poleward maximum of the meridional surface wind field. Indeed, the specific locations of the sea-ice boundaries close to the centre of the Ferrel and at the edge of the Hadley cell immediately suggest that they are influenced by large-scale atmospheric circulation patterns as illustrated in Figure 7: The Ferrel states are stabilised by poleward winds pushing the sea-ice margin away from the equator and transporting relatively warm air towards the ice edge. The Hadley states, on the other hand, are the coldest possible not fully glaciated states because any further cooling would bring sea ice into the Hadley cell where it would be pushed towards the equator by the trade winds, which would also transport cold air towards the sea-ice margin. Since in case of the Hadley states the wind fields are a destabilising factor, these states have to be protected from global glaciation by relatively high surface temperatures within the Hadley cells causing any sea ice which enters the Hadley cells to melt rapidly. In the following, we will investigate this hypothesis in more detail, in particular with respect to the important question why the Hadley states can be stable only at higher solar luminosities. Answering this question requires, among other things, a closer look at the evolution of the energy balance of the Hadley states. In particular, we would like to understand how and why the surface temperature distribution of the Hadley states changes with increasing solar luminosity. To this end, we follow Heinemann et al. (2009), Voigt et al. (2011), and Liu et al. (2013) in applying a simple equation for the annual mean equilibrium energy balance at each latitude \(\varphi\) \[Q\left(\varphi\right)\left(1-\alpha\left(\varphi\right)\right)=Q_{\mathrm{abs }}\left(\varphi\right)=\sigma\varepsilon\left(\varphi\right)T_{\mathrm{sfc}}^ {4}\left(\varphi\right)+\mathrm{div}\,F\left(\varphi\right) \tag{2}\] in order to disentangle the contributions of changes in absorbed solar radiation, greenhouse warming, and meridional heat transport to the surface temperature evolution of the Hadley states. In this equation, \(Q\) is the incoming solar radiation flux at latitude \(\varphi\), \(\alpha\) is the top-of-atmosphere albedo, \(Q_{\mathrm{abs}}\) is the absorbed solar radiation flux (directly diagnosed from model output), \(\sigma\) is the Stefan-Boltzmann constant, \(\varepsilon\) is the effective emissivity (diagnosed from the ratio of top-of-atmosphere to surface upward long-wave fluxes), \(T_{\mathrm{sfc}}\) is the Figure 5: Maps of annual mean surface air temperatures (left-hand panels a,c) and sea-ice fractions (right-hand panels b,d) for the critical states at 3000 Ma (upper panels a,b) and 900 Ma (lower panels c,d). Only one quarter of the globe is displayed in each map since the aquaplanet simulations exhibit a high degree of symmetry. Surface wind velocity vectors are shown in the right-hand panels, with the length scaling for a wind speed of 10 m/s indicated in the bottom right corner. 
Figure 6: Zonal means of (a) annual mean surface air temperatures, (b) sea-ice fractions, and (c) meridional wind speed for the Ferrel state at 3000 Ma (solid red lines) and the Hadley state at 900 Ma (dashed red line). The vertical lines indicate the sea-ice margin for the Ferrel state (dashed blue line) and for the Hadley state (solid blue line). Figure 7: Schematic illustration of the differences between the critical states (a) at lower solar luminosities (“Ferrel states”) and (b) at higher solar luminosities (“Hadley states”) showing sea-ice thickness (cyan, not to scale) on the ocean (blue) and the large-scale atmospheric wind patterns (arrows), see text for discussion. The vertical cyan lines indicate the effective ice-line latitudes calculated assuming symmetric and full ice cover as in Figure 2. surface temperature, and \(\mathrm{div}\,F\) is the divergence of the total meridional heat transport (diagnosed from the net top-of-atmosphere radiation balance). For a given equilibrium simulation, the surface temperature \(T_{\mathrm{sfc},0}\) can then be derived by simply solving equation (2) for the surface temperature, showing good agreement with the surface temperature directly diagnosed from model output (not shown). The different contributions to surface temperature changes \(\Delta T_{\mathrm{sfc}}\) between a given equilibrium state and a reference state, denoted by subscript \(0\), can then be calculated as follows: \[\Delta T_{\mathrm{sfc},\mathrm{solar}}\left(\varphi\right)=\] \[\left(\frac{1}{\sigma\,\varepsilon_{0}\left(\varphi\right)} \left(Q_{\mathrm{abs}}\left(\varphi\right)-\mathrm{div}\,F_{0}\left(\varphi \right)\right)\right)^{1/4}-\,T_{\mathrm{sfc},0}\left(\varphi\right) \tag{3}\] \[\Delta T_{\mathrm{sfc},\mathrm{greenhouse}}\left(\varphi\right)=\] \[\left(\frac{1}{\sigma\,\varepsilon\left(\varphi\right)}\left(Q_{ \mathrm{abs},0}\left(\varphi\right)-\mathrm{div}\,F_{0}\left(\varphi\right) \right)\right)^{1/4}-\,T_{\mathrm{sfc},0}\left(\varphi\right) \tag{4}\] \[\Delta T_{\mathrm{sfc},\mathrm{transport}}\left(\varphi\right)=\] \[\left(\frac{1}{\sigma\,\varepsilon_{0}\left(\varphi\right)} \left(Q_{\mathrm{abs},0}\left(\varphi\right)-\mathrm{div}\,F\left(\varphi \right)\right)\right)^{1/4}-\,T_{\mathrm{sfc},0}\left(\varphi\right) \tag{5}\] The results of this exercise are presented in Figure 8 where we show the zonally averaged surface temperature differences between all Hadley critical states and the 900 Ma Hadley state together with the contributions of the three factors. It is evident from Figure 8 (a) that there is a distinct geographic pattern in the surface temperature changes of the Hadley states over time: With increasing solar luminosity (i.e. going from 1800 Ma to 0 Ma), there is a gradual cooling in areas covered by sea ice and a (less pronounced) warming in ice-free regions. These trends are driven by the different spatial characteristics of the absorbed solar radiation and the warming due to the greenhouse effect (already noted in earlier work, e.g. Yang et al. 2012a): While the radiative fluxes due to greenhouse gases are distributed relatively uniformly with latitude, the absorbed solar energy shows a marked maximum at the equator and drops off towards the poles, both due to the spatial distribution of the incoming solar radiation and due to the higher albedos of the ice-covered mid and high latitudes. 
Going from 1800 Ma to 0 Ma, solar luminosity increases, while the CO\({}_{2}\) concentrations of the Hadley critical states decreases, leading to the evolution of their respective contributions to the surface temperature trends shown in Figure 8 (b) and (c). These changes are only partially compensated by adjustments of the meridional heat transport, see Figure 8 (d). The surface-temperature evolution of the Hadley states described above also provides a qualitative explanation why Hadley states cannot be stable at low solar luminosities: Going back in time, the decreasing solar radiation is compensated by an increasing greenhouse warming to prevent global glaciation, but the different spatial patterns of these factors lead to progressively shallower surface temperature gradients across the Hadley-cell boundary. This effect is further enhanced by a weakening of the meridional heat transport with decreasing solar luminosity, driven by a slowdown of the trade winds leading to weaker Ekman transport in the ocean. At some point, the surface energy budget within the Hadley cell is insufficient to melt sea ice before it is pushed towards the equator by the trade winds, triggering global glaciation due to the ice-albedo feedback. This is also in line with the observed shortening of the transient Hadley phase with decreasing solar luminosity in simulations on track to a Snowball state shown in Figure 4. ## 4 Discussion and conclusions In this paper, we have investigated the Snowball bifurcation point (in terms of atmospheric CO\({}_{2}\) concentration) of aquplanets under the steady increase of solar luminosity over Earth's history in one consistent model framework. We find that until about 1 billion years ago the critical CO\({}_{2}\) concentration decreases more or less logarithmically (from \(\sim 10^{5}\) ppm 4 billion years ago to \(\sim 10^{2}\) ppm 700 million years ago), as one would expect from the logarithmic character of the CO\({}_{2}\) forcing. In more recent times, critical values decrease more strongly, mainly due to the increasing low-latitude insolation making sea-ice formation more difficult and due to the greenhouse effect of water vapour. We have also put these new simulation results in context by presenting a comprehensive synthesis of proxy CO\({}_{2}\) estimates and findings from earlier modelling studies. The second important new conclusion from our work is a regime shift in critical-state properties about 1.2 billion years ago: While the coldest not fully glaciated climate states at earlier times (and thus lower solar luminosities) are characterised by relatively high global mean temperatures of about \(-2^{\circ}\)C and a sea-ice margin close to the centre of the Ferrel cell ("Ferrel states"), critical states at later times (and thus higher solar luminosities) exhibit much lower global mean temperatures of roughly \(-15^{\circ}\)C and a sea-ice margin at the Hadley-cell boundary ("Hadley states"). These states result from the interplay of the surface energy balance and atmospheric dynamics: In the Ferrel states, the sea-ice margin is stabilised by the poleward push of the meridional winds, whereas in the Hadley states the sea-ice margin has to be maintained by a steep temperature gradient across the Hadley-cell boundary to prevent sea-ice being pushed towards the equator by the trade winds within the Hadley cells. 
The ultimate cause for the two distinct critical-state regimes Figure 8: (a) Total difference of zonally averaged surface temperatures as diagnosed using a one-dimensional energy balance equation (see text) between the Hadley states from 1800 Ma to 0 Ma and the one at 900 Ma. The other panels show the contributions of changes in absorbed solar radiation (b), greenhouse warming (c), and the combined atmospheric and oceanic meridional heat transport (d). The different Hadley states are indicated by increasingly darker shades of grey for increasing solar luminosities. Note the different scale of the vertical axis in panel (a). are the different spatial distributions of solar radiation and greenhouse-gas forcings: With increasing solar luminosity and decreasing CO\({}_{2}\) concentrations, this difference in the spatial distributions will result in ever steeper temperature gradients, making Hadley states stable at some point. While our investigation of the glaciation threshold through time in one consistent three-dimensional modelling framework and the regime shift from Ferrel to Hadley states are novel, aspects of our work tie in well with earlier findings. The importance of the Hadley circulation for global glaciations, for example, has been noted already by Bendtsen (2002) based on simulations with a simple coupled model. Furthermore, in their studies of modern Snowballs with CCSM3/CCSM4, Yang et al. (2012a, b, c) find two different critical positions for the ice line depending on whether they reduce the solar constant at modern CO\({}_{2}\) concentrations or the CO\({}_{2}\) concentrations at 94% of the present-day solar constant. We would like to point out that both the values for the Snowball bifurcation points for the different time slices and the solar luminosity at which the regime shift from Ferrel to Hadley states occurs will depend on model physics (e.g. the snow and cloud schemes) and model parameters (e.g. the cryosphere albedos) and will thus most likely differ between models. Furthermore, we emphasise that aspects of our study are certainly affected by model limitations like the coarse spatial resolution, the simple cloud scheme, or possible deviations of the radiative transfer at very low or very high CO\({}_{2}\) concentrations. Most importantly, however, the Hadley cells have a prescribed annual mean width of 30\({}^{\circ}\) in our simplified atmosphere model, whereas it is well known that they become narrower in colder climate states (e.g. Frierson et al., 2007). Despite these limitations, we consider the main finding of our paper, the regime shift from Ferrel states at lower solar luminosities to Hadley states at higher solar luminosities, robust. Indeed, there are indications in studies with more complex models supporting this conclusion. Most importantly and as mentioned above, Yang et al. (2012a) and Yang et al. (2012c) investigate modern Snowballs with CCSM3 and CCSM4, respectively, and find that "it is likely that there are no stable states between \(\sim\) 40% and 60% sea ice coverage during the initiation of the Snowball Earth" (Yang et al., 2012a, p. 2719) for a present-day continental configuration. Note that their values for the critical global sea-ice fractions are somewhat higher than ours (33% and 50%, see Sect. 
3.2), which could partly be due to the differences in boundary conditions, but most likely reflects the fact that in cold climate states the Hadley cells will be narrower than the fixed annual-mean width of 30\({}^{\circ}\) used in our simplified atmosphere model (see above). Moreover, Yang et al. (2012a) also report different critical states in CCSM3 depending on the path to global glaciation: For a reduction of the solar constant at present-day CO\({}_{2}\) levels, their model enters a Snowball state beyond a global sea-ice fraction of \(\sim 40\)%, corresponding to a Ferrel state in our simulations. For a reduction of the atmospheric CO\({}_{2}\) concentration at 94% of the present-day solar constant (mimicking Neoproterozoic boundary conditions) on the other hand, the critical sea-ice fraction is \(\sim\) 60%, similar to a Hadley state in our simulations. Yang et al. (2012a) attribute this difference in the paths to global glaciation to the dissimilar spatial characteristics of solar radiation and CO\({}_{2}\) forcing, in line with our findings above. Looking at Figure 1, this would put the regime shift from Ferrel to Hadley states in Yang et al. (2012a) to somewhere between \(\sim 1.2\) and \(\sim 0.7\) billion years ago, in excellent agreement with our findings despite the different boundary conditions. CCSM4 is more sensitive with respect to global glaciation than CCSM3 (Yang et al., 2012c), and in this model the critical state is a Hadley state for both paths to global glaciation. This would put the transition from Ferrel to Hadley states to times earlier than \(\sim 1\) billion years ago, again in principal agreement with our results. One conclusion from our work is that the Snowball bifurcation (and thus one of the most important limits to planetary habitability) is determined by the interplay of the energy balance and internal dynamics, implying the need for knowledge of certain system parameters and for three-dimensional models in order to be able to assess planetary habitability. Note that our results cannot be easily generalised to other planets for three reasons: First, the idealised aquaplanet configuration is a rather special case. Second, our simulations have been performed for one particular orbital configuration. And third, the results will likely depend on the planet's rotation rate since the structure of the Hadley circulation, for example, changes significantly with the rotation rate (e.g. Schneider, 2006). Sampling the parameter space more completely remains to be done in future work. Finally, our results highlight once again the crucial importance of sea-ice dynamics for investigations of the Snowball bifurcation point, for example in the context of the Faint Young Sun Paradox or the Paleoproterozoic and Neoproterozoic glaciations. ### Code and data availability Model input and output files as well as the scripts used to generate the figures are available at the institutional repository of the Potsdam Institute for Climate Impact Research ([https://doi.org/10.5880/PIK_2022.003](https://doi.org/10.5880/PIK_2022.003), Feulner et al. 2022). The model source code is made available upon request. ### Author contributions G.F. designed the study; M.S.B. and G.F. performed and analysed model simulations; S.P. provided technical assistance and compiled the data archive, G.F. prepared all figures; G.F. wrote the paper with input from all co-authors. ### Competing interests The authors declare that they have no conflict of interest. Acknowledgements. 
The authors would like to thank Julius Eberhard, Alexey V. Eliseev, and Anna Feulner for help and discussions. We also thank the reviewers, Aiko Voigt and Yonggang Liu, for their constructive feedback which helped to improve the manuscript. The European Regional Development Fund (ERDF), the German Federal Ministry of Education and Research and the Land Brandenburg are gratefully acknowledged for supporting this project by providing resources on the high performance computer system at the Potsdam Institute for Climate Impact Research.
2305.13627
InstructAlign: High-and-Low Resource Language Alignment via Continual Crosslingual Instruction Tuning
Large language models (LLMs) that are tuned with instructions have demonstrated remarkable capabilities in various tasks and languages. However, their ability to generalize to underrepresented languages is limited due to the scarcity of available data. Additionally, directly adapting new languages to instruction-tuned LLMs can result in catastrophic forgetting, which leads to the loss of multitasking ability. To address this issue, we propose InstructAlign which uses continual crosslingual instruction tuning to enable LLMs to align new unseen languages with previously learned high-resource languages. Our results demonstrate the effectiveness of InstructAlign in enabling the model to understand low-resource languages with limited parallel data while preventing catastrophic forgetting. Our work contributes to the advancement of language adaptation methods, particularly for adapting instruction-tuned LLMs to underrepresented languages. Our code is released on https://github.com/HLTCHKUST/InstructAlign
Samuel Cahyawijaya, Holy Lovenia, Tiezheng Yu, Willy Chung, Pascale Fung
2023-05-23T02:51:34Z
http://arxiv.org/abs/2305.13627v2
Instruct-Align: Teaching Novel Languages with to LLMs through Alignment-based Cross-Lingual Instruction ###### Abstract Instruction-tuned large language models (LLMs) have shown remarkable generalization capability over multiple tasks in multiple languages. Nevertheless, their generalization towards different languages varies especially to underrepresented languages or even to unseen languages. Prior works on adapting new languages to LLMs find that naively adapting new languages to instruction-tuned LLMs will result in catastrophic forgetting, which in turn causes the loss of multitasking ability in these LLMs. To tackle this, we propose the Instruct-Align a.k.a (IA)1 framework, which enables instruction-tuned LLMs to learn cross-lingual alignment between unseen and previously learned languages via alignment-based cross-lingual instruction-tuning. Our preliminary result on BLOOMZ-560M shows that (IA)1 is able to learn a new language effectively with only a limited amount of parallel data and at the same time prevent catastrophic forgetting by applying continual instruction-tuning through experience replay. Our work contributes to the progression of language adaptation methods for instruction-tuned LLMs and opens up the possibility of adapting underrepresented low-resource languages into existing instruction-tuned LLMs. Our code will be publicly released upon acceptance. ## 1 Introduction Large language models (LLMs) demonstrate the generalization capability of solving various tasks expressed in natural language without requiring any explicit training on the corresponding task Brown et al. (2020); Smith et al. (2022); Rae et al. (2022); Thoppilan et al. (2022); Chowdhery et al. (2022); Scao et al. (2022); Zeng et al. (2022). This generalization capability is highly correlated with the scale of natural language data used during pre-training and the number of parameters of the models Kaplan et al. (2020); Hoffmann et al. (2022). Furthermore, various tuning methods, such as instruction-tuning Sanh et al. (2022); Wei et al. (2022); Chung et al. (2022); Muennighoff et al. (2022) and reinforcement-learning with human feedback Christiano et al. (2017); Ouyang et al. (2022), improve these task generalization capability even further. Despite this, LLMs often lack the generalization ability across different languages, leading to performance disparity across different languages Xue et al. (2021); Scao et al. (2022); Chowdhery et al. (2022). Moreover, these models only cover a limited number of languages, mostly on Indo-European language family as shown in Figure 1. For instance, BLOOM Scao et al. (2022), the largest community-driven open-source multilingual pre-trained LLM, only covers 46 languages during pre-training excluding some high-resource with hundreds of millions of speakers, such as German, Japanese, Korean, and Russian, as well as many more low-resource languages with millions or even tenth-millions of speakers, such as Bulgarian, Hungarian, Serbian, Finnish, Amharic, Kurdish, Sinhala, Lao, Javanese, Sundanese, etc. Increasing the language coverage of LLMs is a crucial task. Prior work Yong et al. (2022) displays that continued pretraining Chau et al. (2020); Muller et al. (2021); Ebrahimi and Kann (2021) and parameter-efficient fine-tuning (PEFT) methods such as MAD-X Pfeiffer et al. (2020) and (IA)3 Liu et al. 
(2022) can be utilized to efficiently inject comprehension of unseen languages into LLMs using monolingual corpora of the new languages by performing masked language modeling (MLM) Devlin et al. (2019). Nevertheless, due to catastrophic forgetting, these methods are futile when applied directly to instruction-tuned LLMs, rendering them ineffective for solving general natural language tasks after the language adaptation phase Yong et al. (2022). To address this, we introduce Instruct-Align, a.k.a. (IA)\({}^{1}\), which teaches new languages to an instruction-tuned LLM through cross-lingual alignment with previously learned languages by using instructions. (IA)\({}^{1}\) enforces LLMs to learn cross-lingual alignments between seen and unseen languages through a diverse set of alignment-based cross-lingual instructions, allowing the model to learn new languages with only a limited amount of parallel data. To prevent catastrophic forgetting, (IA)\({}^{1}\) incorporates a simple yet efficient continual learning approach through experience replay (Rolnick et al., 2019), which adds a small amount of past instruction-tuning data to be used during the (IA)\({}^{1}\) tuning phase along with the new alignment-based cross-lingual instructions. Footnote 1: We use the syntactical, phonological, geographical, and language family features in URIEL and project them into 2D with UMAP (McInnes et al., 2018). In summary, our work presents the following major contributions: * We propose (IA)\({}^{1}\), a continual instruction-tuning approach that allows an instruction-tuned LLM to adapt to new languages with minimal performance loss on its previously learned languages. * We introduce alignment-based cross-lingual instructions, a set of instructions that enable LLMs to effectively learn a cross-lingual alignment between two languages. * We show the effectiveness of different types of alignment-based cross-lingual instructions and compare them with the monolingual denoising task, i.e., masked language modeling (MLM), which is commonly used for learning new languages. ## 2 Related Work ### Instruction-Tuning Early works in instruction-tuning (Wei et al., 2022; Chung et al., 2022; Sanh et al., 2022; Ouyang et al., 2022) have shown the effectiveness of instruction-tuned LLMs, which improve the zero-shot generalization capability over the corresponding non-instruction-tuned LLMs by a huge margin. Since then, various instruction-tuned LLMs have been released, including T0 (Sanh et al., 2022), InstructGPT (Ouyang et al., 2022), FLAN-GPT (Wei et al., 2022), FLAN-T5 (Chung et al., 2022), FLAN-PaLM (Chung et al., 2022), mT0 (Muennighoff et al., 2022), BLOOMZ (Muennighoff et al., 2022), Alpaca (Taori et al., 2023), etc. Figure 1: Linguistic projection of 4000+ languages across the globe obtained from URIEL (Littell et al., 2017; Malaviya et al., 2017)1. Blue dots denote languages that are covered in at least one of the LLMs under consideration, i.e., GPT3 (Brown et al., 2020), BLOOM (Scao et al., 2022), mT5 (Xue et al., 2021), and PaLM (Chowdhery et al., 2022). Existing LLMs only cover a fraction of languages around the globe; most of them are within the Indo-European language family. 
Most of these models are only pre-trained on a single language or a few languages, with the exception of mT0 and BLOOMZ, which are adapted from models pre-trained on 101 languages, i.e., mT5 (Xue et al., 2021), and on 46 languages, i.e., BLOOM (Scao et al., 2022), respectively. In this work, we utilize BLOOMZ (Muennighoff et al., 2022) as the backbone model of our (IA)\({}^{1}\) experiment. ### Cross-Lingual Alignment Cross-lingual alignment is a widely explored concept that allows language models (LMs) to align, commonly at the word/sentence level, across different languages. Cross-lingual alignment methods allow models to perform cross-lingual inference without requiring any tuning on the target task. Lample et al. (2018); Cao et al. (2020) introduce a word-to-word translation method that requires no parallel data by performing embedding alignment across different languages. Similarly, Lample et al. (2018) introduce an unsupervised machine translation method that enables sentence-level translation without the need for parallel data. A cross-lingual pre-training objective for building LMs, namely translation language modeling (TLM) (Conneau and Lample, 2019), has also been explored, which enforces token-level alignment between languages, allowing the model to learn aligned representations across multiple languages. In this work, we perform cross-lingual alignment through instructions by introducing a bilingual denoising instruction, which is equivalent to token-level alignment in TLM, and a translation instruction, which serves as sentence-level alignment across different languages. ### Continual Learning for Language Models Continual learning is a paradigm for learning various tasks gradually, allowing the model to acquire new knowledge over time (Delange et al., 2021). Using a naive fine-tuning approach for continual learning causes the model to suffer from catastrophic forgetting (CF) (French, 1999). Therefore, various methods have been introduced to prevent CF. Regularization-based methods (Kirkpatrick et al., 2017; Liu et al., 2018; Aljundi et al., 2018) add a regularization term to the loss function to prevent the model from being updated in a direction that causes CF. Replay-based methods (Rolnick et al., 2019; Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019) incorporate samples from previous tasks when learning the new task, which helps regularize the model to avoid CF. Parameter isolation methods (Aljundi et al., 2017; Serra et al., 2018; Mallya and Lazebnik, 2018) prevent the model from CF by learning new tasks using a new set of parameters while keeping the other parameters frozen during fine-tuning. In this work, we apply experience replay (Rolnick et al., 2019), a simple replay-based method that adds tasks from previously learned languages when training on new languages, without any loss modification. ## 3 Methodology (IA)\({}^{1}\) is a continual instruction-tuning framework that allows the model to learn new languages through alignment with previously learned languages. 
(IA)\({}^{1}\) consists of two phases, i.e., 1) alignment-based cross-lingual instruction generation and 2) continual instruction tuning. (Table: examples of alignment-based cross-lingual instructions at word, span, and sentence granularity, built from a parallel pair \((\mathbf{X},\mathbf{Y})\), a corrupted target \(\widehat{\mathbf{Y}}=\mathbf{g}(\mathbf{Y})\), and a prompt \(\mathbf{P}=\mathbf{h}(\mathbf{X},\widehat{\mathbf{Y}},\mathbf{T})\).) The first phase generates an instruction-tuning dataset from parallel corpora between unseen and seen languages, which enforces the model to learn the alignment between the two languages. The second phase performs continual instruction tuning, utilizing a continual learning approach to prevent the model from catastrophic forgetting. ### Cross-Lingual Alignment through Instruction Given a parallel text pair \((x,y)\) from two languages, the goal of cross-lingual alignment is to learn a mapping function \(f(\cdot)\) parameterized by \(\theta\) such that \(f(x,\theta)=f(y,\theta)\). The \((x,y)\) text pair commonly comes in the form of a word pair or a phrase pair [12, 13], but in theory, it should be able to generalize to a sentence pair or even a paragraph. With the goal of aligning two parallel texts from two different languages, (IA)\({}^{1}\) defines a set of alignment-based cross-lingual instructions by exploiting the multiple alignment granularities that can be achieved through a parallel sentence. Specifically, we explore three different granularities, i.e., word, span, and sentence. ### Continual Instruction Tuning To prevent catastrophic forgetting, (IA)\({}^{1}\) reuses part of the instruction-tuning data used for developing the corresponding model. These past data are completely supervised, as they were used for instruction tuning. During the continual instruction tuning, (IA)\({}^{1}\) takes only \(r\) randomly sampled data points from the past instruction-tuning data. The sampled past data are used during continual instruction tuning with balanced sampling between the past data and the new data. More formally, we define a past dataset \(\mathcal{D}^{old}\) and a newly generated cross-lingual instruction dataset \(\mathcal{D}^{new}\). On each optimization step, (IA)\({}^{1}\) samples data in an interleaving manner, resulting in a batch \(\mathcal{B}=\{s_{1}^{old},s_{1}^{new},s_{2}^{old},s_{2}^{new},\dots,s_{n}^{old},s_{n}^{new}\}\) with \(2n\) samples, where \(s_{i}^{old}\) and \(s_{i}^{new}\) denote samples taken randomly from \(\mathcal{D}^{old}\) and \(\mathcal{D}^{new}\), respectively. Since the samples are all supervised, the optimization can be done by optimizing the cross-entropy loss (Good, 1952) over all the samples in the batch. 
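To make the alignment-based instructions concrete, below is a minimal sketch of how the word/span-level conditional denoising and the sentence-level translation instructions could be generated from one parallel pair; the prompt wording, the `<mask>` token, and the masking ratio are illustrative assumptions, not the exact templates of (IA)\({}^{1}\).

```python
import random

def conditional_denoising_instruction(src, tgt, src_lang, tgt_lang, mask_ratio=0.15, seed=0):
    """Word/span-level alignment: mask part of the target sentence and ask the
    model to reconstruct it, conditioned on the source-language sentence."""
    rng = random.Random(seed)
    words = tgt.split()
    n_mask = max(1, int(len(words) * mask_ratio))
    masked = set(rng.sample(range(len(words)), n_mask))
    corrupted = " ".join("<mask>" if i in masked else w for i, w in enumerate(words))
    prompt = (
        f'The {tgt_lang} sentence below is a translation of the {src_lang} sentence "{src}", '
        f"but some words are masked:\n{corrupted}\n"
        f"Write the complete {tgt_lang} sentence."
    )
    return {"input": prompt, "output": tgt}

def translation_instruction(src, tgt, src_lang, tgt_lang):
    """Sentence-level alignment: translate the source sentence into the target language."""
    prompt = f"Translate the following {src_lang} sentence into {tgt_lang}:\n{src}"
    return {"input": prompt, "output": tgt}

if __name__ == "__main__":
    pair = ("He eats two mangos", "Dia makan dua mangga")  # illustrative parallel pair
    print(conditional_denoising_instruction(*pair, "English", "Indonesian"))
    print(translation_instruction(*pair, "English", "Indonesian"))
```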
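And a minimal sketch of the experience-replay batching in the second phase: \(r\) past instruction-tuning examples are drawn once, and every training batch interleaves past and new samples; the data records and helper names are illustrative.

```python
import random

def interleaved_batches(d_old, d_new, n_per_side, seed=0):
    """Yield batches B = {s1_old, s1_new, ..., sn_old, sn_new} that interleave
    replayed past instructions (D^old) with new cross-lingual instructions (D^new)."""
    rng = random.Random(seed)
    while True:
        old = rng.sample(d_old, n_per_side)
        new = rng.sample(d_new, n_per_side)
        yield [s for pair in zip(old, new) for s in pair]

# Usage sketch with made-up data: sample r replayed examples once, then train on
# interleaved batches with the usual cross-entropy objective.
past_data = [{"input": f"past task {i}", "output": "..."} for i in range(50000)]
new_data = [{"input": f"cross-lingual instruction {i}", "output": "..."} for i in range(20000)]
r = 1000
d_old = random.Random(0).sample(past_data, r)
batch = next(interleaved_batches(d_old, new_data, n_per_side=16))  # 32 samples per batch
```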
## 4 Experiment Setting ### (IA)\({}^{1}\) Dataset During the (IA)\({}^{1}\) tuning, we train the model on 7 Indonesian local languages, i.e., Sundanese (sun), Javanese (jav), Balinese (ban), Minangkabau (min), Buginese (bug), Acehnese (ace), and Banjarese (bjn). We utilize English (eng), as English covers the majority of the pre-training data in most LLMs, and Indonesian (ind), as it is closely related to the target languages. For the dataset, we utilize the FLORES-200 dataset (Goyal et al., 2021; Team et al., 2022) as the source of the parallel data, where we combine the validation and the test set, producing a total of 2009 parallel sentences for each language pair. ### Models & Hyperparameters We utilize BLOOMZ (Muennighoff et al., 2022) as our backbone model. Specifically, we explore (IA)\({}^{1}\) on two different sizes of BLOOMZ, i.e., BLOOMZ-560M and BLOOMZ-1.1B. For (IA)\({}^{1}\) tuning, we run all experiments with an initial learning rate of 1e-5 with a linear learning rate decay and a batch size of 32 for a fixed number of 48700 optimization steps. We run the (IA)\({}^{1}\) tuning using the AdamW optimizer (Loshchilov and Hutter, 2019) and mixed-precision training (Micikevicius et al., 2018). For the number of replay examples \(r\), we explore two different settings, i.e., \(r\in\{1000,10000\}\). ### Evaluation Setting After the (IA)\({}^{1}\) tuning, the model is evaluated in a zero-shot cross-lingual inference setting, in which the model has never seen the task on the target languages, but might have seen the task on other seen languages. We consider 6 different prompts (3 English and 3 Indonesian) for the zero-shot inference and take the average accuracy and weighted F1 score as the evaluation metrics. We run all evaluations using 8-bit inference via LLM.int8() (Dettmers et al., 2022). Zero-Shot Evaluation Datasets. For the evaluation datasets, we utilize two Indonesian local language benchmarks, i.e., NusaX (Winata et al., 2022) and NusaWrites2. We utilize the sentiment analysis task of NusaX and NusaWrites as our evaluation datasets.3 Footnote 2: [https://github.com/IndoNLP/nusa-menulis](https://github.com/IndoNLP/nusa-menulis) \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**NusaWrites**} & \multicolumn{2}{c|}{**NusaX**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{2-7} & **Accuracy** & **Weighted F1** & **Accuracy** & **Weighted F1** & **Accuracy** & **Weighted F1** \\ \hline \hline \multicolumn{7}{c}{**Zero-Shot BLOOMZ-560M**} \\ \hline **Zero-Shot Baseline** & 41.08 & 40.06 & 42.00 & 33.95 & 41.90 & 36.59 \\ \hline \hline \multicolumn{7}{c}{**Monolingual-Denoising BLOOMZ-560M**} \\ \hline **Monolingual** & 43.50 & 44.13 & 36.88 & 26.85 & 37.38 & 28.63 \\ \hline \multicolumn{7}{c}{**(IA)\({}^{1}\)-Tuned BLOOMZ-560M**} \\ \hline **(IA)\({}^{1}\) r=0** & **53.83** & 55.54 & 38.38 & 28.60 & 39.17 & 34.34 \\ **(IA)\({}^{1}\) r=1000** & 53.48 & **55.87** & **43.75** & **35.98** & **47.98** & **42.34** \\ **(IA)\({}^{1}\) r=10000** & 42.27 & 40.00 & 40.50 & 31.09 & 41.17 & 34.20 \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation results on the BLOOMZ-560M backbone. NusaX covers 12 languages, including English (eng), Indonesian (ind), and all 7 languages used during the (IA)\({}^{1}\) tuning. NusaWrites covers 11 languages, without English (eng) and Indonesian (ind), including 3 languages that are used during the (IA)\({}^{1}\) tuning: Javanese (jav), Sundanese (sun), and Minangkabau (min). Both benchmarks contain some Indonesian local languages that remain unseen even after the (IA)\({}^{1}\) tuning. These languages are incorporated during the evaluation to measure the forward transfer of the model to these languages after the (IA)\({}^{1}\) tuning. 
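For illustration, the zero-shot evaluation above could be run roughly as follows, assuming the Hugging Face transformers and bitsandbytes stack for LLM.int8() inference; the prompt template and label verbalizers are placeholders, not the six prompts used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "bigscience/bloomz-560m"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto", load_in_8bit=True)

LABELS = ["positive", "negative", "neutral"]  # illustrative verbalizers

def score_label(text, label, template):
    """Total log-likelihood of the verbalized label given the prompt (teacher forcing)."""
    prompt = template.format(text=text)
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    full_ids = tokenizer(prompt + " " + label, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(full_ids).logits
    label_len = full_ids.shape[1] - prompt_ids.shape[1]
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)      # predictions for tokens 1..end
    target = full_ids[0, 1:]
    return logprobs[-label_len:].gather(1, target[-label_len:, None]).sum().item()

def classify(text, template="What is the sentiment of this sentence? {text}\nAnswer:"):
    return max(LABELS, key=lambda label: score_label(text, label, template))

print(classify("Filemnya apik tenan!"))  # illustrative input sentence
```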
The detailed per-language statistics and their seen/unseen status are shown in Table 2 and Table 3 for NusaWrites and NusaX, respectively. Footnote 3: We do not use the machine translation tasks provided in both benchmarks as this would violate the zero-shot cross-lingual inference constraint within our experiment. ### Baselines For our baselines, we conduct zero-shot prompting using four different sizes of BLOOMZ, i.e., BLOOMZ-560M, BLOOMZ-1.1B, BLOOMZ-1.7B, and BLOOMZ-3B, without any additional language adaptation phase. In addition, to compare the effectiveness of the cross-lingual alignment, we add continual instruction-tuned baselines that incorporate only monolingual denoising instructions, which is comparable to performing language adaptation using masked language modeling (MLM) (Devlin et al., 2019). The inference for all the baselines is done in the same way, using the same prompts as described in §4.3. ## 5 Experiment Result Based on the results of (IA)\({}^{1}\) tuning on the BLOOMZ-560M backbone shown in Table 4, all (IA)\({}^{1}\)-tuned models outperform the monolingual denoising model. In this case, we can conclude that utilizing the cross-lingual alignment task is more beneficial for learning novel languages with a limited amount of data than only using the monolingual denoising task. We also observe that the best performance is achieved by the (IA)\({}^{1}\)-tuned model with \(r\)=1000, improving the performance compared to the zero-shot baseline by \(\sim\)5% accuracy and weighted F1-score. Similar to previous work (Yong et al., 2022), we clearly see the effect of catastrophic forgetting on monolingual denoising and (IA)\({}^{1}\) without experience replay (\(r\)=0), which significantly hurts their overall performance. As shown in Figure 2, the performance on the **pre-trained** languages for these two methods drops significantly, and the performance on the **seen** languages also drops, which suggests that the multitask prompting capability of both methods is degraded. In contrast, (IA)\({}^{1}\) with experience replay shows a much smaller performance degradation on the **pre-trained** languages, demonstrating the importance of experience replay for continual instruction tuning in (IA)\({}^{1}\). Interestingly, the (IA)\({}^{1}\)-tuned model with \(r\)=10000 performs worse than (IA)\({}^{1}\) with \(r\)=1000 and even achieves a slightly lower score than the zero-shot baseline. We conjecture that this might occur when the selected samples for the experience replay are not representative enough (e.g., containing some outlier data), which suggests the importance of sample selection for experience replay. We leave the exploration of this phenomenon for future work. Figure 2: **(left)** \(\Delta\) accuracy and **(right)** \(\Delta\) weighted F1 of various continual instruction-tuned approaches compared to the zero-shot baseline. Negative scores indicate that the model performs worse than the baseline. ## 6 Analysis and Discussion ### Ablation Study We conduct an ablation study to better understand the effect of each continual instruction-tuning task on the generalization ability of the adapted model. 
In this work, we explored a combination of word-, span-, and sentence-level cross-lingual alignment objectives by utilizing two tasks, i.e., conditional denoising and machine translation. In addition, we also explored monolingual denoising as a baseline. To better understand the effect of each task and their combinations, we conduct experiments on various combinations of objectives, i.e., monolingual denoising, conditional denoising, machine translation, (IA)\({}^{1}\) (conditional denoising + machine translation), and (IA)\({}^{1}\) + monolingual denoising. We show the training and validation loss curves of each task objective in Figure 3. From the training and validation curves, the loss of monolingual denoising goes up after a certain number of steps. This suggests that the monolingual denoising objective cannot generalize well even on the same monolingual denoising task. For machine translation, the training loss is very low, but the validation loss is quite high. This indicates overfitting on the machine translation objective. Conditional denoising, in contrast, produces similar training and validation loss curves, suggesting that it generalizes well. Furthermore, the (IA)\({}^{1}\) objective yields the lowest loss on both training and validation, suggesting the effectiveness of combining different granularities of cross-lingual alignment tasks. When combining (IA)\({}^{1}\) and monolingual denoising, the resulting training loss is higher than when using (IA)\({}^{1}\) alone; nevertheless, the validation loss does not diverge as in the monolingual case, suggesting that cross-lingual alignment indeed helps the model generalize better on the monolingual denoising task. The results are also reflected in the downstream task performance. As shown in Table 5, the best performance is achieved by the **(IA)\({}^{1}\)** objective, followed by **(IA)\({}^{1}\) + Monolingual**. **Conditional Denoising** outperforms **Monolingual Denoising**, showing that the conditional denoising task is more effective for learning new languages with a limited amount of data than the monolingual denoising task. Interestingly, the **Machine Translation** objective performs the worst, with a score far below the **Baseline**. Nevertheless, **(IA)\({}^{1}\)**, which combines conditional denoising and machine translation, yields the best result. This suggests that sentence-level cross-lingual alignment alone is not enough to learn a new language, but it can be beneficial when combined with other cross-lingual alignment objectives such as conditional denoising. Figure 3: **(left)** Training loss and **(right)** validation loss curves from various instruction-tuning tasks. \begin{table} \begin{tabular}{l|c|c} \hline \hline **Objective** & **Accuracy** & **Weighted F1** \\ \hline Baseline & 41.08 & 40.06 \\ Monolingual Denoising & 43.50 & 44.13 \\ Conditional Denoising & 47.43 & 49.30 \\ Machine Translation & 33.51 & 24.54 \\ (IA)\({}^{1}\) & 53.83 & 55.54 \\ (IA)\({}^{1}\) + Monolingual & 48.67 & 49.11 \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results from various instruction-tuning objectives in the NusaWrites benchmark. ### Impact of Model Scaling In most cases, the size of LLMs correlates with downstream performance. As shown in Table 6, the downstream performance in both benchmarks increases significantly when we evaluate the scaled-up sizes of the BLOOMZ model, despite the fact that the models never explicitly learn the languages. 
We conjecture that this generalization is achieved due to the vocabulary overlap between these Indonesian local languages and the national language, Indonesian (ind), which is used during pre-training. Despite the effectiveness of language adaptation using (IA)\({}^{1}\) on BLOOMZ-560M, simply using larger-scale models, even without (IA)\({}^{1}\), already yields higher performance on both benchmarks. This raises the question: how effective is (IA)\({}^{1}\)? To this end, we conclude that exploring the generalization of (IA)\({}^{1}\) on larger-scale LLMs will be beneficial to prove its effectiveness. Moreover, explorations can also be done in orthogonal dimensions, such as different model architectures and the adaptation of distant languages instead of closely related ones. Nonetheless, in this work, we cannot test this generalization due to limited computing resources. Therefore, we encourage future work to explore the generalization of (IA)\({}^{1}\) to different model architectures and model sizes. ### Conclusion In this work, we address the challenge of increasing the language coverage of instruction-tuned LLMs by introducing the Instruct-Align framework, also known as (IA)\({}^{1}\). We demonstrate that (IA)\({}^{1}\) allows an instruction-tuned LLM to effectively learn unseen languages through cross-lingual alignment using a diverse set of alignment-based cross-lingual instructions. To prevent catastrophic forgetting, (IA)\({}^{1}\) incorporates a continual learning approach through experience replay, which retains a small amount of past instruction-tuning data. Based on our experimental results on two Indonesian local language benchmarks, (IA)\({}^{1}\) effectively and efficiently improves the understanding of the 7 novel Indonesian local languages, improving the language understanding performance on these languages by \(\sim\)5% accuracy and weighted F1 score. (IA)\({}^{1}\) also displays better forward transfer to other unseen Indonesian local languages by a significant margin compared to the baseline. Lastly, we ablate (IA)\({}^{1}\) and demonstrate the effectiveness of various alignment-based cross-lingual instructions compared to traditional masked language modeling (MLM) for learning novel languages with a limited amount of data. Our work contributes to the progression of language adaptation methods for instruction-tuned LLMs and opens up the possibility of adapting under-represented low-resource languages into existing instruction-tuned LLMs. ## 7 Limitation and Future Works Despite the effectiveness of (IA)\({}^{1}\) that we have presented, its effectiveness has not been explored for different model architectures such as encoder-decoder models. Due to a limited computing budget, we could only run our (IA)\({}^{1}\) experiments on BLOOMZ-560M; we encourage future work to scale up the experiments to larger model sizes. Moreover, in terms of continual learning, we only explore a single method, i.e., experience replay (Rolnick et al., 2019), due to the minimal memory requirement of this method. Further analysis and examination of other potential continual learning approaches, such as A-GEM (Chaudhry et al., 2019) and EWC (Liu et al., 2018), is another potential research direction to be explored. 
\begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**NusaWrites**} & \multicolumn{2}{c|}{**NusaX**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{2-7} & **Accuracy** & **Weighted F1** & **Accuracy** & **Weighted F1** & **Accuracy** & **Weighted F1** \\ \hline \multicolumn{7}{c}{**Larger-Scale BLOOMZ**} \\ \hline **BLOOMZ-1.1B** & 60.78 & 59.69 & 50.38 & 42.42 & 53.23 & 46.49 \\ **BLOOMZ-1.7B** & 66.90 & 65.05 & 48.00 & 39.07 & 51.25 & 43.79 \\ **BLOOMZ-3B** & 69.00 & 64.59 & 56.75 & 48.60 & 62.08 & 56.39 \\ \hline \multicolumn{7}{c}{**BLOOMZ-560M**} \\ \hline **Zero-Shot Baseline** & 41.08 & 40.06 & 42.00 & 33.95 & 41.90 & 36.59 \\ **Monolingual** & 43.50 & 44.13 & 36.88 & 26.85 & 37.38 & 28.63 \\ **(IA)\({}^{1}\) r=1000** & 53.48 & **55.87** & **43.75** & **35.98** & **47.98** & **42.34** \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results on BLOOMZ-560M and larger-scale BLOOMZ. ## Acknowledgements We thank Bryan Wilie and Yong Zheng Xin for the fruitful discussions and suggestions. This work has been partially funded by the PF20-43679 Hong Kong PhD Fellowship Scheme, Research Grants Council, Hong Kong, and the Hong Kong Fellowship Scheme by the Hong Kong Research Grants Council (RGC).
2308.13803
Throughput Maximization of DNN Inference: Batching or Multi-Tenancy?
Deployment of real-time ML services on warehouse-scale infrastructures is on the increase. Therefore, decreasing latency and increasing throughput of deep neural network (DNN) inference applications that empower those services have attracted attention from both academia and industry. A common solution to address this challenge is leveraging hardware accelerators such as GPUs. To improve the inference throughput of DNNs deployed on GPU accelerators, two common approaches are employed: Batching and Multi-Tenancy. Our preliminary experiments show that the effect of these approaches on the throughput depends on the DNN architecture. Taking this observation into account, we design and implement DNNScaler which aims to maximize the throughput of interactive AI-powered services while meeting their latency requirements. DNNScaler first detects the suitable approach (Batching or Multi-Tenancy) that would be most beneficial for a DNN regarding throughput improvement. Then, it adjusts the control knob of the detected approach (batch size for Batching and number of co-located instances for Multi-Tenancy) to maintain the latency while increasing the throughput. Conducting an extensive set of experiments using well-known DNNs from a variety of domains, several popular datasets, and a cutting-edge GPU, the results indicate that DNNScaler can improve the throughput by up to 14x (218% on average) compared with the previously proposed approach, while meeting the latency requirements of the services.
Seyed Morteza Nabavinejad, Masoumeh Ebrahimi, Sherief Reda
2023-08-26T07:59:58Z
http://arxiv.org/abs/2308.13803v1
# Throughput Maximization of DNN Inference: Batching or Multi-Tenancy? ###### Abstract Deployment of real-time ML services on warehouse-scale infrastructures is on the increase. Therefore, decreasing latency and increasing throughput of deep neural network (DNN) inference applications that empower those services have attracted attention from both academia and industry. A common solution to address this challenge is leveraging hardware accelerators such as GPUs. To improve the inference throughput of DNNs deployed on GPU accelerators, two common approaches are employed: Batching and Multi-Tenancy. Our preliminary experiments show that the effect of these approaches on the throughput depends on the DNN architecture. Taking this observation into account, we design and implement _DNNScaler_ which aims to maximize the throughput of interactive AI-powered services while meeting their latency requirements. _DNNScaler_ first detects the suitable approach (Batching or Multi-Tenancy) that would be most beneficial for a DNN regarding throughput improvement. Then, it adjusts the control knob of the detected approach (batch size for Batching and number of co-located instances for Multi-Tenancy) to maintain the latency while increasing the throughput. Conducting an extensive set of experiments using well-known DNNs from a variety of domains, several popular datasets, and a cutting-edge GPU, the results indicate that _DNNScaler_ can improve the throughput by up to 14x (218% on average) compared with the previously proposed approach, while meeting the latency requirements of the services. ## 1 Introduction Deployment of interactive AI-powered services, also known as real-time ML, on warehouse-scale infrastructures is on the increase. The deep neural network (DNN) inference applications that empower these services have to meet the low-latency requirement of such real-time ML services. On the other hand, the service providers seek high throughput to serve more requests in a unit of time. They also desire high resource utilization to reduce their operational costs, and further improve their revenue. To this end, various hardware accelerators such as ASICs [39], FPGA-based accelerators [21] and GPU-based accelerators [28] are proposed for DNN inference. Since the GPUs have shown significant throughput improvement when employed for DNN inference, they are widely used in warehouse-scale infrastructures as DNN accelerators. To gain high throughput when accelerating DNN inference, a common approach is Batching, which is widely used in previous works [56, 20]. It means processing input data in the form of batches, instead of processing them one by one. Batching helps to reuse the parameters of the DNN model for several inputs and also reduce the overhead of copying input data to GPU memory. [18, 56]. Another popular alternative is Multi-Tenancy [9, 34], where several different DNNs are co-located on a single GPU. Multi-Tenancy improves the throughput by sharing the computing resources between co-located DNNs. Although previous works have used Multi-Tenancy, they have not explored the case of co-locating several instances of the same DNN, in contrast to instances of different DNNs. In this work we consider this new approach for the first time (instances of the same DNN). Both Batching and Multi-Tenancy improve throughput via increasing resource utilization. While these approaches can increase the throughput, they negatively affect the tail latency of inference requests and elongate them [34]. 
Therefore, they should be used carefully for real-time ML services. In this paper, for the first time, we show that the impact of Batching and Multi-Tenancy on the throughput depends on the DNN architecture. Based on the various features of a DNN, such as the number of parameters and computational complexity, either Batching or Multi-Tenancy can significantly improve the throughput of that DNN, while the other approach has no or negligible impact. Considering this observation, we design and implement our approach, called _DNNScaler_, which aims to maximize the throughput of real-time ML services deployed on GPU accelerators while meeting their latency requirements. _DNNScaler_ consists of two modules: Profiler and Scaler. With the help of the Profiler module, it identifies the approach (Batching or Multi-Tenancy) that would be most beneficial for a DNN. After that, it adjusts the batch size (if Batching is selected) or the number of co-located instances (if Multi-Tenancy is selected) dynamically, considering the latency constraint, to maximize the throughput. Experimental results, using several DNNs and datasets and a Tesla P40 GPU, show that _DNNScaler_ can improve the throughput by up to 14x compared to an approach that ignores the impact of Batching and Multi-Tenancy on the throughput of different DNNs. We make the following contributions in this paper: * We study the effect of Batching and Multi-Tenancy on throughput when deploying DNNs on a GPU accelerator. We examine various DNNs with varying architectures and features. By analyzing the results, for the first time, we show that the effectiveness of Batching and Multi-Tenancy highly depends on the DNN architecture. For some DNNs, Batching can significantly increase the throughput, while for others Multi-Tenancy remarkably improves throughput. In addition, to improve the throughput of a single DNN application, we suggest deploying several instances of the same DNN, which is different from previous approaches that co-locate various DNNs on the same GPU. * We design a Profiler module that determines, at runtime, whether a DNN's throughput would benefit from Batching or Multi-Tenancy. Another module, Scaler, aims to maximize the throughput while maintaining latency. When the Batching approach is selected, it tunes the batch size as a control knob to achieve its goal. The other control knob, the number of co-located DNN instances, is used by the Scaler when Multi-Tenancy is chosen for improving throughput. In the Scaler module, we use machine learning to estimate the latency of the DNN for different numbers of co-located instances. * Combining the Profiler and the Scaler modules, we implement our _DNNScaler_ approach. The Profiler module detects the suitable approach (Batching or Multi-Tenancy), and the Scaler module adjusts the respective control knob (batch size or the number of co-located instances). Conducting an extensive set of experiments using a wide variety of DNNs with different datasets as inputs and leveraging a powerful server equipped with an Nvidia GPU, we show the superiority of _DNNScaler_ over other approaches. The rest of the paper is organized as follows: In Section 2, we discuss the impact of the Batching and Multi-Tenancy approaches on the throughput of various DNNs. Then, we introduce our proposed approach, _DNNScaler_, in Section 3 and present the experimental results in Section 4. Related works are briefly discussed in Section 5, and the paper is concluded in Section 6. 
## 2 DNN Inference: Batching or Multi-Tenancy? The computing power and memory capacity of cutting-edge GPU accelerators used for DNN inference are on the increase. To improve the resource utilization of these accelerators, and hence, increase the throughput of applications, two common approaches are employed: _1) Batching:_ In this approach, input data is processed in the form of batches instead of processing each individual input (e.g., each image in image classification DNNs) separately. This approach has been widely employed by previous works [18, 20, 57, 61] to increase the throughput by better utilizing the computing resources of GPUs. Since the weights of DNNs are needed at least once per input, Batching helps to reuse them for multiple inputs and reduce data copies to GPU memory [57, 56]. _2) Multi-Tenancy:_ Since the DNN inference graphs used for prediction usually consume fewer resources than a GPU provides, it is possible to deploy several instances of the same graph to potentially leverage instance-level parallelism and achieve higher resource utilization and throughput. Multi-Tenancy, or the co-location of several workloads or kernels on a single GPU, and its related challenges have been studied in a large body of research [8, 9, 34, 71]. However, none of them has considered multiple instances of the same DNN, as we consider in this work. We have conducted a set of experiments to understand the impact of these two approaches on the throughput and latency of DNN inference. We have employed four image classification DNNs (described in Table 1) with different sizes, architectures, and computational complexity to observe their performance under Batching and Multi-Tenancy. For the input data, we have used images from the ImageNet dataset [52]. For obtaining the computational complexity of DNNs, we have used the TensorFlow Profiler [62]. The GPU accelerator we have used is a Tesla P40 GPU that has 3840 CUDA cores and 24 GB of GDDR5 memory. For Batching, we use batch sizes of 1 to 128 to study its impact on throughput and latency. We have conducted the experiments for bigger batch sizes (up to 1024, which is supported by our GPU), but we only show the results for up to a batch size of 128 for the sake of clarity. For Multi-Tenancy, we co-locate 1 to 8 instances of the same DNN with increments of one (e.g., one instance of Inception-V1 up to eight instances of it). For Multi-Tenancy, the batch size for all the instances is one. \begin{table} \begin{tabular}{l c c} \hline \hline DNN & No. Parameters & \begin{tabular}{c} Computational Complexity \\ of Inference (Mega FLOP) \\ \end{tabular} \\ \hline Inception-V1 & 6.6 M & 13.220736 \\ Inception-V4 & 42.7 M & 91.94925 \\ Mobilenet-V1-1 & 4.2 M & 8.420224 \\ ResNetV2-152 & 60.2 M & 120.084864 \\ \hline \hline \end{tabular} \end{table} Table 1: Number of Parameters and Computational Complexity of DNNs. The results are depicted in Fig. 1. As can be seen, different DNNs show different behavior under each approach. Batching (Fig. 1(a)) can significantly improve the throughput of Inception-V4 and ResNetV2-152. However, it has a negligible effect on the other two. On the other hand, Multi-Tenancy (Fig. 1(b)) can improve the throughput of Inception-V1 and Mobilenet-V1-1, which could not leverage Batching. In contrast, Multi-Tenancy has almost no effect on the throughput of Inception-V4 and ResNetV2-152. We can also see the effect of Batching and Multi-Tenancy on the latency in Fig. 1(c) and (d), respectively. 
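A minimal sketch of how such throughput and tail-latency measurements can be collected for different batch sizes; `run_batch` stands in for a real DNN forward pass (e.g., a TensorFlow session run), and the timing model in the stub is made up.

```python
import time
import statistics

def measure(run_batch, batch_size, n_iters=100):
    """Measure throughput (inputs/second) and 95th-percentile latency (ms) of an
    inference callable `run_batch(batch_size)` that processes one batch per call."""
    latencies = []
    for _ in range(n_iters):
        start = time.perf_counter()
        run_batch(batch_size)                 # in practice: model(batch) / sess.run(...)
        latencies.append(time.perf_counter() - start)
    p95 = statistics.quantiles(latencies, n=20)[18] * 1000.0   # 95th percentile, in ms
    throughput = batch_size * n_iters / sum(latencies)
    return throughput, p95

if __name__ == "__main__":
    def fake_inference(bs):                   # stand-in for a real DNN forward pass
        time.sleep(0.002 + 0.0001 * bs)       # latency grows mildly with batch size
    for bs in (1, 8, 32, 128):
        tput, p95 = measure(fake_inference, bs, n_iters=50)
        print(f"BS={bs:4d}  throughput={tput:8.1f} inputs/s  p95={p95:6.2f} ms")
```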
Tail latency is defined as the 95\({}^{th}\) percentile of the inference latency distribution in this work. Both a bigger batch size and a larger number of co-located instances lead to higher latency. Since latency is an essential requirement of real-time applications, it should be factored in when designing and implementing any approach. Combining the results presented in Fig. 1 and the specifications of DNNs shown in Table 1, we conclude that: 1) Batching can significantly enhance the throughput of DNNs with a large number of parameters and high computational complexity (e.g., Inception-V4). In these networks, Batching helps to reuse the parameters for several inputs, and hence, reduce data movement in the GPU, which leads to an increase in throughput. For DNNs with a small number of parameters, such as Inception-V1, however, Batching is not very effective. In these DNNs, the time needed for preparing and copying the input data to the GPU dominates the time needed for copying the parameters, and hence, parameter reuse cannot improve the throughput noticeably. To observe this, we profile the share of kernels launched during the execution of Inception-V1 and Inception-V4. The results show that the share of kernels related to data preparation and movement (e.g., redzone-checker, CUDA memcopy HtoD) in the total execution time is very significant in Inception-V1 (20.1% for the aforementioned kernels alone at BS = 16), and it becomes even larger as the batch size increases. For Inception-V4, however, those kernels do not consume a large portion of the total execution time (e.g., 4.2% for BS = 16). For the profiling, we used the NVProf tool [50]. Due to the large number of kernels profiled (68) and the lack of space, we only present the information for two kernels. 2) Small and simple DNNs with low computation requirements benefit most from Multi-Tenancy. Since GPU accelerators have a large amount of computing resources, one instance of a small DNN cannot fully utilize them. Hence, the idle resources can be used by additional instances to process several inputs simultaneously. Therefore, these DNNs can experience significant throughput improvement with Multi-Tenancy. Complex networks such as Inception-V4, however, utilize all or most of the computing resources of the GPU with only one instance. Hence, co-locating several instances of them cannot yield throughput improvement, since the instances must utilize the computing resources in a time-sharing manner. The resource utilization of the GPU under the Multi-Tenancy approach (from one to four co-located DNN instances) for Mobilenet-V1-1 and Inception-V4 is depicted in Fig. 2. Considering the aforementioned conclusions derived from the preliminary experiments, we design and implement our approach, which is discussed in detail in the next section. ## 3 Methodology Our approach, _DNNScaler_, aims to maximize the throughput while meeting the latency constraint of real-time ML applications, by leveraging either Batching or Multi-Tenancy based on the target DNN. First, we present the problem formulation and then describe the design and implementation of _DNNScaler_ in detail. The acronyms used in the paper and their meanings are listed in Table 2. ### Problem Statement and Formulation A DNN inference application is deployed on a GPU accelerator with a latency constraint stated in the form of a Service Level Objective _(SLO)_. 
Both the throughput and latency of a DNN are functions of the Batch Size (BS) or the Multi-Tenancy Level (MTL), depending on which approach is chosen (Throughput \(\propto f(BS,MTL)\), Latency \(\propto g(BS,MTL)\)). By MTL, we mean the number of co-located instances of the DNN on the GPU. Figure 1: Impact of Batching and Multi-Tenancy approaches on throughput and latency of different DNNs. Figure 2: Impact of co-location on streaming multiprocessor (SM) utilization for two DNNs. The objective function is to maximize the throughput of the DNN during its execution time \(T\), while maintaining its latency below the SLO: \[\begin{split}\text{\emph{Maximize}}&\quad\frac{1}{T}\sum_{t=1}^{T}Throughput^{t}\\ \text{s.t.}&\quad Latency^{t}\leq SLO\end{split} \tag{1}\] This problem provides two control knobs for managing the throughput and latency: 1) we can use either Batching or Multi-Tenancy for increasing the throughput; 2) depending on which one is selected, we fine-tune the batch size (for Batching) or the number of co-located instances (for Multi-Tenancy) to maintain the latency. ### DNNScaler Our proposed approach, _DNNScaler_, leverages the observations discussed in Section 2 when using the two control knobs (Batching and the batch size, or Multi-Tenancy and the number of co-located instances). Since Batching and Multi-Tenancy in this context evoke a sense of scaling up and scaling out DNN inference applications, respectively, we have chosen the name _DNNScaler_ for our approach. _DNNScaler_ consists of two modules: Profiler and Scaler. The overall flow of _DNNScaler_ is shown in Fig. 3(a), and its pseudo-code is presented in Algorithm 1. In the following, we explain these two modules. #### 3.2.1 Profiler The Profiler module probes the DNN to determine which of Batching or Multi-Tenancy can better improve the throughput. To determine which approach is more suitable for a DNN, the Profiler conducts lightweight profiling at runtime. During this profiling phase, the throughput of the DNN for batch sizes of one (BS = 1) and \(m\) (where \(m=32\) in our experiments) is measured by the Profiler. Only a few batches need to be executed to measure the throughput for each BS and calculate the throughput improvement obtained by \(BS=m\) over \(BS=1\), as in (2): \[TI_{B}=\frac{Throughput_{BS=m}-Throughput_{BS=1}}{Throughput_{BS=1}}\times 100 \tag{2}\] After that, the throughput for the case of having \(n\) co-located instances (MTL = \(n\), where \(n=8\) in our experiments) is measured. The throughput for a single instance is not needed because it is the same as Batching with BS = 1. Then, the throughput improvement of Multi-Tenancy can be calculated as in (3). Comparing the throughput improvements of Batching and Multi-Tenancy (see (4)), the Profiler decides which approach is more suitable for the DNN and sends the gathered information to the next module, the Scaler. The profiling is on the order of seconds; therefore, its overhead on the system is negligible. \[TI_{MT}=\frac{Throughput_{MTL=n}-Throughput_{MTL=1}}{Throughput_{MTL=1}}\times 100 \tag{3}\] \[\text{if}\left\{\begin{aligned} & TI_{B}>TI_{MT},&&\text{Batching}\\ & TI_{B}<TI_{MT},&&\text{Multi-Tenancy}\\ & TI_{B}=TI_{MT},&&\text{the one with lower latency}\end{aligned}\right. \tag{4}\] 
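A minimal sketch of the Profiler's selection rule in Eqs. (2)-(4); the four callables are assumed helpers that briefly run the DNN at a given batch size or Multi-Tenancy level and report throughput or tail latency.

```python
def throughput_improvement(base, new):
    """Percentage throughput improvement of `new` over `base` (Eqs. 2 and 3)."""
    return (new - base) / base * 100.0

def choose_approach(tput_at_bs, tput_at_mtl, latency_at_bs, latency_at_mtl, m=32, n=8):
    """Profiler decision rule (Eq. 4): compare TI_B against TI_MT."""
    base = tput_at_bs(1)                       # BS = 1 is the same configuration as MTL = 1
    ti_b = throughput_improvement(base, tput_at_bs(m))
    ti_mt = throughput_improvement(base, tput_at_mtl(n))
    if ti_b > ti_mt:
        return "Batching"
    if ti_b < ti_mt:
        return "Multi-Tenancy"
    # Tie: pick the approach with the lower tail latency.
    return "Batching" if latency_at_bs(m) <= latency_at_mtl(n) else "Multi-Tenancy"
```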
#### 3.2.2 Scaler The Scaler module receives the information from the Profiler that indicates which approach (Batching or Multi-Tenancy) is appropriate. Having this information, the Scaler aims to maintain the SLO of the DNN while trying to maximize its throughput. Looking again at Fig. 1, we see that for both Batching and Multi-Tenancy, increasing the batch size and the number of co-located instances can lead to higher throughput, but simultaneously leads to elongated latency. Hence, the Scaler module tries to find the largest batch size or number of co-located instances (based on the approach proposed by the Profiler) that yields a latency below or equal to the SLO. In the following, we describe how the Scaler module works with respect to the selected approach. ### Dynamic Behavior of Scaler In this section, we describe the dynamic behavior of the Scaler module of _DNNScaler_ with respect to the approach determined by the Profiler module for a job. First, we discuss how the Scaler adjusts the batch size when the Batching approach is selected. Then, we describe the Scaler mechanism for the case when Multi-Tenancy is selected for a job and explain how the number of co-located instances is determined dynamically with respect to latency and throughput. #### 3.3.1 Dynamic Batch Size Adjustment When deploying a DNN on the GPU for inference, the common practice is to use a constant batch size. This constant batch size cannot be changed dynamically during execution. In order to change it, the current instance has to be terminated and a new one with another batch size launched, which imposes overhead on the system in the form of service interruption, increased latency, and reduced throughput. To address this issue, we implement dynamic batch sizing for DNN inference. \begin{table} \begin{tabular}{l l} \hline \hline Acronym & Definition \\ \hline B & Batching \\ MT & Multi-Tenancy \\ MTL & Multi-Tenancy Level \\ DNN & Deep Neural Network \\ TI & Throughput Improvement \\ SLO & Service Level Objective \\ SM & Streaming Multiprocessor \\ FLOP & Floating Point Operation \\ \hline \hline \end{tabular} \end{table} Table 2: List of Acronyms and Their Meanings Used Throughout the Paper. The implementation imposes almost no overhead on latency or throughput compared with a conventional constant batch size approach. Implementing the dynamic batch sizing helps us design and implement the Scaler module more efficiently. The changes are on the application side and in how it interacts with the TensorFlow framework, without any need for changing TensorFlow. In the design of the Scaler for the Batching approach, we consider the observation presented in Section 2. We saw that both latency and throughput have a direct relationship with batch size. The Scaler leverages this observation and employs a pseudo binary search mechanism to efficiently search for the most suitable batch size. The time complexity of the binary search is \(O(\log n)\), and hence, the time overhead of the Scaler is negligible. As shown in Fig. 3(b), the Scaler module for the Batching approach works as follows: it starts with a default batch size of one (BS = 1), processes a certain number of batches, and measures their tail latency. If the tail latency is less than the SLO of the DNN multiplied by an \(\alpha\) coefficient (\(\mathit{SLO}\times\alpha\)), then the Scaler sets the batch size to the midpoint of the current batch size and the largest possible batch size (BS = 128 in this work). 
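A minimal sketch of this pseudo binary search, including the shrink and stop conditions spelled out in the next paragraph; `measure_tail_latency` is an assumed helper that runs a window of batches and reports the 95th-percentile latency.

```python
MIN_BS, MAX_BS = 1, 128

def adjust_batch_size(bs, tail_latency, slo, alpha=0.85):
    """One adjustment step of the pseudo binary search over the batch size."""
    if tail_latency < slo * alpha:
        if bs == MAX_BS:
            return bs                        # no further throughput improvement possible
        return (bs + MAX_BS) // 2            # grow toward the upper bound
    if tail_latency > slo:
        if bs == MIN_BS:
            return bs                        # SLO cannot be met even at BS = 1
        return (MIN_BS + bs) // 2            # shrink toward the lower bound
    return bs                                # latency within [SLO*alpha, SLO]: keep the BS

def scaler_batching_loop(measure_tail_latency, slo, n_steps=20):
    """Keep monitoring and readjusting; `measure_tail_latency(bs)` is an assumed
    helper that processes a window of batches at `bs` and returns the p95 latency (ms)."""
    bs = MIN_BS
    for _ in range(n_steps):
        bs = adjust_batch_size(bs, measure_tail_latency(bs), slo)
    return bs
```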
If the current batch size is the largest possible one (due to the limitation of GPU memory, the batch size cannot be larger than a certain value), then no further throughput improvement is possible. Otherwise, if the tail latency is greater than the SLO, the Scaler sets the batch size to the midpoint of the lowest possible batch size and the current batch size. In this case, the current batch size being the smallest one means that the SLO of the DNN cannot be met. Finally, if the latency is between \(\mathit{SLO}\times\alpha\) and the SLO, then the Scaler does not change anything and continues with the current batch size. We use the \(\alpha\) coefficient to avoid excessive batch size changes. A suitable value of \(\alpha\) can be found empirically by observing the behavior of a few DNNs under different values; we use \(\alpha=0.85\) in this work. The Scaler does not stop after finding a suitable batch size, but continues to monitor the latency. Upon detecting a tail latency above the SLO or below \(\mathit{SLO}\times\alpha\), it starts adjusting the batch size again. Readjustment is needed when the tail latency is affected by factors such as variation in the input dataset, GPU temperature, and GPU frequency. The user can even decide to change the SLO during runtime. Figure 3: The overall flow of DNNScaler and its Profiler and Scaler modules is depicted in (a). The detailed flow of the Scaler for the Batching and Multi-Tenancy approaches is depicted in (b) and (c), respectively. #### 3.3.2 Dynamic MT Level Adjustment Multi-Tenancy shows a similar behavior to Batching (for the DNNs that can benefit from it). Increasing the number of co-located instances can improve the throughput (and also increases the latency). Similar to Batching, where we use BS = 128 as the upper bound of the batch size, for Multi-Tenancy we have chosen MTL = 10 as the maximum number of co-located instances based on the memory capacity of our GPU. This number can also be determined for various settings (such as different GPUs) by lightweight profiling. Therefore, a similar approach to Batching (binary search) could be employed to find the best value of MTL (i.e., the number of co-located instances). However, unlike Batching, where we implemented dynamic batch sizing with negligible overhead, for Multi-Tenancy there is no such lightweight mechanism to change the number of co-located instances on the fly. Frequently launching and terminating instances imposes significant overhead. Therefore, we need an alternative approach with low overhead. One solution is to profile the latency of the DNN for all possible numbers of instances (MTL = 1 to MTL = N). In the next step, to adjust the value of MTL for a specific SLO, we can simply select the largest one (to maximize throughput) whose latency is lower than the SLO. However, profiling all the possible values of MTL itself imposes significant overhead, which is in contrast with our initial goal of avoiding the overhead of frequently launching and terminating instances. To tackle this challenge, we employ a machine-learning-based approach called matrix completion [7] to estimate the latency for all the possible values of MTL. Using matrix completion, we only need to profile the latency of the DNN for a few values of MTL (two in our work). Since we already have this information from the profiling phase (for MTL = 1 and MTL = 8), we do not impose any further overhead on the system. 
Having the latency of MTL = 1 and MTL = 8, matrix completion can estimate the latency for other numbers of co-located instances (i.e., other values of MTL). Then, we use these estimated values to select the MTL considering the SLO. Fig. 4 shows how matrix completion is employed to estimate the latency of different MTLs. With matrix completion, we can jump to a solution immediately without frequently changing the value of MTL, in contrast to a brute-force approach. Since the values estimated by matrix completion are not 100% accurate, we have devised an additive-increase-multiplicative-decrease (AIMD) scheme [15, 18] to complement it. We start the co-location with the MTL suggested by matrix completion. If the latency is lower than the SLO, then we start adding instances one by one until the tail latency is greater than the SLO. At this point, we only need to terminate the last instance to keep the tail latency below the constraint. That is the point where we can have the highest possible throughput while maintaining the SLO. If we reach the maximum MTL (e.g., MTL = 10, where no further instances can be deployed) before violating the SLO, we can stop adding new instances, and there is no need to terminate any instance. On the other hand, if the latency of the MTL suggested by matrix completion is higher than the SLO (which means that the latency is underestimated), we decrease the number of instances in steps of one, terminating them until the latency is lower than the SLO, and then stop. The flow of the Scaler for the Multi-Tenancy approach is presented in Fig. 3(c). Note that the main difference between our work and previous Multi-Tenancy approaches, which forms one of the contributions of the paper, is that they consider co-located DNNs from different jobs and try to mitigate the impact of interference on their performance. In contrast, we consider the case where the co-located instances are from the same DNN, belong to the same job, and work with each other to improve the total throughput. For the BS, we used a few short experiments with binary search to find its upper bound (128) that does not lead to an out-of-memory (OOM) error. To find the upper bound of MTL (10), the minimum amount of memory needed for a DNN is first determined considering its size and computational complexity, and then the upper value of MTL is calculated based on this value (for the largest DNN), the memory capacity of the GPU, and the overhead of several instances working together. **Matrix Completion**, an ML approach, is used to recover missing entries of a partially observed matrix. It employs Singular Value Decomposition (SVD) to reduce the dimensions of the matrix. It also needs to know the rank of the matrix of interest. The rows or columns of a matrix with rank \(r\) span an \(r\)-dimensional space. Applying SVD on the matrix \(M\) yields a factorization of the form \(M=U\times\Sigma\times V^{T}\), where \(U\), \(V\), and \(\Sigma\) capture different latent features of \(M\): \[U_{n_{1}\times r}=\begin{bmatrix}u_{11}&\ldots&u_{1r}\\ u_{21}&\ldots&u_{2r}\\ \vdots&\ddots&\vdots\\ u_{n_{1}1}&\ldots&u_{n_{1}r}\end{bmatrix},\quad V_{n_{2}\times r}=\begin{bmatrix}v_{11}&\ldots&v_{1r}\\ v_{21}&\ldots&v_{2r}\\ \vdots&\ddots&\vdots\\ v_{n_{2}1}&\ldots&v_{n_{2}r}\end{bmatrix},\] and \[\Sigma_{r\times r}=\begin{bmatrix}\sigma_{1}&\ldots&0\\ \vdots&\ddots&\vdots\\ 0&\ldots&\sigma_{r}\end{bmatrix}.\] Having matrices \(U\), \(\Sigma\), and \(V\) and applying PQ-reconstruction leads to a matrix \(X\) that estimates the missing values of \(M\). 
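A minimal sketch of the matrix-completion estimate followed by the AIMD refinement described above; it substitutes a simple iterative truncated-SVD imputation for the convex solver used in the paper, and the latency numbers and the `measure_latency` helper are illustrative assumptions.

```python
import numpy as np

def complete_latency_matrix(observed, rank=2, n_iters=200):
    """Fill missing entries (np.nan) of a DNNs-by-MTL latency matrix with a simple
    iterative truncated-SVD imputation (a stand-in for a convex matrix-completion solver)."""
    mask = ~np.isnan(observed)
    filled = np.where(mask, observed, np.nanmean(observed))
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(mask, observed, low_rank)   # keep the observed entries fixed
    return filled

def aimd_select_mtl(estimated_latency, measure_latency, slo, max_mtl=10):
    """Start from the largest MTL whose *estimated* latency meets the SLO, then refine
    with measured tail latency: add instances while under the SLO, back off on violation."""
    candidates = [m for m in range(1, max_mtl + 1) if estimated_latency[m - 1] <= slo]
    mtl = max(candidates) if candidates else 1
    if measure_latency(mtl) <= slo:
        while mtl < max_mtl and measure_latency(mtl + 1) <= slo:
            mtl += 1                       # additive increase, one instance at a time
    else:
        while mtl > 1 and measure_latency(mtl) > slo:
            mtl -= 1                       # latency was underestimated: terminate instances
    return mtl

# Usage sketch with hypothetical numbers: rows are DNNs, columns are MTL = 1..10;
# only MTL = 1 and MTL = 8 are profiled for the new DNN (last row).
observed = np.array([
    [10, 12, 15, 18, 22, 27, 33, 40, 48, 57],     # fully profiled reference DNNs
    [20, 23, 27, 32, 38, 45, 53, 62, 72, 83],
    [15, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, 60, np.nan, np.nan],
], dtype=float)
estimates = complete_latency_matrix(observed)[-1]
chosen = aimd_select_mtl(estimates, measure_latency=lambda m: estimates[m - 1], slo=45)
print(chosen)
```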
We use convex optimization via the TFOCS (Templates for First-Order Conic Solvers) [6] tool to estimate matrix \(X\) in this work. Since _DNNScaler_ is proposed for real-time applications, it is very important that they experience little to no interruption in their work. Both the Batching and Multi-Tenancy approaches of _DNNScaler_ ensure that the applications can continue their work with no interruption. Neither adjusting the batch size nor adjusting the number of co-located instances prevents the DNNs from serving new requests. Moreover, both mechanisms can quickly respond to bursty workloads and avoid violating latency constraints, as some inference workloads arrive in bursts rather than uniformly [2, 5]. ## 4 Evaluation ### Experimental Setup **Platform.** We run our experiments on a dual-socket Xeon server. It has two E5-2680 v4 Xeon chips, each with 28 cores running at 2.4 GHz. The server has 128 GB of DDR4 memory. Ubuntu 16.04 with kernel 4.4 is installed on the server with Python 2.7, CUDA 11.0, and TensorFlow 1.15. The server is equipped with a PCI Express Gen3 Nvidia Tesla P40 GPU accelerator. The Tesla P40 leverages the Nvidia Pascal architecture and has 3840 CUDA cores. The total memory capacity of the GPU is 24 GB of GDDR5, its idle power is around 50W, and its maximum power limit is 250W. **Networks and Datasets.** To show the adaptive nature of our approach, we use DNNs from different domains. Since computer vision, and in particular image classification, is a popular field and numerous DNNs are designed for this application, we employ 16 image classification networks with two datasets in our experiments. One dataset is ImageNet [52], which is a popular dataset that is widely used in previous works [41, 30, 19, 48], and the other one is Caltech-256 [25], which was collected by researchers from the California Institute of Technology. For these DNNs, throughput is defined as the number of images processed per second (images/second). From the natural language processing (NLP) domain, we employ a DNN for text classification [40], which we call TextClassif in this paper. For the input data of this DNN, we use the Sentiment140 [1] and IMDB Reviews [46] datasets. For this DNN, the throughput is defined as the number of sentences processed per second. DeepVS [37], another DNN we use in our experiments, targets video saliency prediction; its throughput is defined as the number of frames processed per second. Finally, we employ DeepSpeech2 [3], which is an end-to-end DNN for speech recognition, and define the throughput as the number of speech files processed per second. The selected DNNs cover a wide range of applications, as well as DNN types: from CNNs to RNNs to LSTMs. These DNNs have varying sizes and architectures, and consequently, different computational complexity. The specifications of the networks and datasets are presented in Table 3. **System Comparison.** Clipper [18] is an approach proposed for online serving of inference requests considering a pre-defined latency SLO. Clipper employs an additive-increase-multiplicative-decrease (AIMD) scheme to find the optimal batch size that maximizes the throughput while meeting the latency SLO. It starts from the minimum batch size and additively increases it by a fixed step (four in this work) until the tail latency surpasses the SLO. At this point, Clipper performs a small multiplicative back-off and reduces the BS by 10%. **Workload.** In our experiments, we have a workload consisting of 30 DNN inference jobs. 
The SLO of each job is stated as a 95\({}^{th}\)-percentile tail latency target in milliseconds. We measured the average latency of one input for BS = 1 and MTL = 1. Then, we set the SLO of each job to a coefficient (\(>\) 1) of this value in order to have both tight and relaxed SLOs. The list of jobs is presented in Table 4.

### Profiling Results

We have profiled the DNNs using the Profiler module to identify the proper approach for each of them. For Batching, we use BS = 1 and BS = 32, and for Multi-Tenancy we use MTL = 1 and MTL = 8. These values (BS = 32 and MTL = 8) are chosen based on our early observations: we have seen that BS = 32 and MTL = 8 are big enough to show which approach can give a higher throughput improvement. These values can be chosen differently for other GPUs or DNNs, if needed. The percentage of improvement (over the base throughput of MTL = 1 and BS = 1) yielded by each approach for several jobs is shown in Table 5. The DNNScaler Method column in Table 4 is filled with the results obtained from profiling.

Figure 4: Illustrative example to show how we employ matrix completion to estimate the latency of a DNN for different MTLs.

The results further emphasize our observations from preliminary experiments that one of Batching or Multi-Tenancy works better for a DNN in terms of throughput improvement. For networks with a low computational complexity and a low number of parameters, such as MobileNet, we see remarkable throughput improvement (e.g., 335% in Job 19) by Multi-Tenancy, while the same DNNs cannot benefit from Batching significantly. On the other hand, large and complex networks with a high number of parameters, such as Inception-V4, can experience high throughput improvement by Batching, but not by Multi-Tenancy (see Job 3). As can be seen, the dataset also affects the performance of Batching and Multi-Tenancy, and hence, the approach selected for the DNNs. For example, in image classification DNNs, the image size is important as it should be readjusted before being fed to the network. This adjustment depends on the dataset and affects the overall performance of the DNN. Therefore, for some DNNs such as Inception-V2, the Multi-Tenancy approach yields better throughput for the ImageNet dataset, but Batching has better performance for the Caltech dataset. The length of sentences also affects the performance of TextClassif, and hence, it shows different latency behavior for the Sentiment140 and IMDB Reviews datasets: the longer sentences of IMDB Reviews take more time to be processed.

### Throughput and Power Efficiency

Throughput is a crucial parameter when designing and deploying real-time ML services [17, 27]. Therefore, we study the throughput of _DNNScaler_ to understand how much it can improve the performance of applications compared with Clipper. Fig. 5 shows the throughput of _DNNScaler_ and Clipper for all the jobs. Note that a base-10 log scale is used for the Y axis. On average, _DNNScaler_ improves the throughput by 218% compared with Clipper. For the jobs where _DNNScaler_ uses the Batching approach, the improvement is not very significant (e.g., 1% improvement in Job 7). However, for jobs where _DNNScaler_ employs the Multi-Tenancy approach, such as Jobs 1 and 2, the throughput improvement can be as large as 14x (Job 5).
We clearly see that our proposed Multi-Tenancy approach, which determines the number of co-located instances dynamically with respect to SLO, can successfully leverage the GPU resources and significantly increase the throughput, compared with Batching strategy of Clipper. These results confirm our earlier observation that wise usage of Multi-Tenancy for some DNNs can better utilize the GPU resources, and hence, yield better throughput than Batching. Another essential feature of real-time ML systems is power efficiency. Power is of high importance in warehouse-scale infrastructures and datacenters since it has a substantial share in operational costs [24, 47]. We compare _DNNScaler_ and Clipper regarding power efficiency as well. We define power efficiency as the throughput per watt achieved by each approach. Since there is not much difference between performance of _DNNScaler_ and Clipper, when _DNNScaler_ uses Batching, only the results for jobs performed using Multi-tenancy by _DNNScaler_ are shown in Table 6. Clipper employing large batches leads to high power consumption, but without expected throughput improvement, \begin{table} \begin{tabular}{l l l l l l|l l l l l l} \hline \hline \multicolumn{1}{c}{Job \#} & \multicolumn{3}{c}{\begin{tabular}{c} DNNScaler \\ Method \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} Steady MTL/BS \\ \end{tabular} } & \multicolumn{1}{c}{Job \#} & \multicolumn{1}{c}{DNN} & \multicolumn{1}{c}{Dataset} & \multicolumn{1}{c}{\begin{tabular}{c} SLO (ms) \\ Method \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} DNNScaler \\ Method \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} Steady MTL/BS \\ \end{tabular} } & \multicolumn{1}{c}{Job \#} & \multicolumn{1}{c}{DNN} & \multicolumn{1}{c}{Dataset} & \multicolumn{1}{c}{\begin{tabular}{c} SLO (ms) \\ Method \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} DNNScaler \\ Method \\ \end{tabular} } & \multicolumn{1}{c}{ \begin{tabular}{c} Steady MTL/BS \\ \end{tabular} } \\ \hline 1 & Inc-V1 & ImageNet & 35 & MT & MTL = 8 & 16 & Inc-V3 & CalTech & 322 & B & BS = 37 \\ 2 & Inc-V2 & ImageNet & 53 & MT & MTL = 9 & 17 & Inc-V4 & CalTech & 139 & B & BS = 10 \\ 3 & Inc-V4 & ImageNet & 419 & B & BS = 28 & 18 & MoV1-1 & CalTech & 89 & MT & MTL = 10 \\ 4 & Mobi-V1-05 & ImageNet & 819 & MT & MTL = 10 & 19 & MobV1-05 & CalTech & 60 & MT & MTL = 10 \\ 5 & MobV1-025 & ImageNet & 186 & MT & MTL = 10 & 20 & MobV1-025 & CalTech & 104 & MT & MTL = 10 \\ 6 & MobV2-1 & ImageNet & 81 & MT & MTL = 10 & 21 & MoV2-1 & CalTech & 129 & MT & MTL = 10 \\ 7 & NAS-Large & ImageNet & 817 & H & BS = 13 & 22 & PNAS-Large & CalTech & 524 & B & BS = 19 \\ 8 & NAS-Mob & ImageNet & 85 & MT & MTL = 10 & 23 & PNAS-Mob & CalTech & 321 & B & BS = 50 \\ 9 & PNAS-Mob & ImageNet & 82 & MT & MTL = 10 & 24 & ResV2-50 & CalTech & 31 & B & BS = 1 \\ 10 & ResV2-50 & ImageNet & 45 & MT & MTL = 6 & 25 & ResV2-101 & CalTech & 107 & B & BS = 10 \\ 11 & ResV2-101 & ImageNet & 72 & B & BS = 4 & 26 & TextClassif & Sentiment140 & 3.5 & B & BS = 102 \\ 12 & ResV2-152 & ImageNet & 206 & B & BS = 14 & 27 & TextClassif & IMDB & 3 & B & BS = 76 \\ 13 & ResV2-101 & ImageNet & 107 & B & BS = 7 & 28 & DeepSpeech & LihSpeech & 1250 & B & BS = 28 \\ 14 & Inc-V1 & CalTech & 48 & MT & MTL = 10 & 29 & DeePVS & LEDOV & 3000 & MT & MTL = 6 \\ 15 & Inc-V2 & CalTech & 116 & B & BS = 16 & 30 & DeePVS & DHF1K & 5000 & MT & MTL = 8 \\ \hline \hline \end{tabular} \end{table} Table 4: Specification of jobs used in the experiments. 
The “DNNScaler Method” column is filled after applying our method. The last column (“Steady MTL/BS”) shows the steady state batch size (BS) or number of multi-tenant instances (MTL) that DNNScaler has chosen for each job in the experiments. \begin{table} \begin{tabular}{l l l l l l} \hline \hline \multicolumn{1}{c}{Job \#} & \multicolumn{1}{c}{\begin{tabular}{c} Base Throughput \\ (BS=1 \& MTL=1) \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} Multi-Tenancy Throughput \\ \end{tabular} } & \multicolumn{1}{c}{\begin{tabular}{c} Matching Throughput \\ (\%) \\ (T\_LT,M) \\ \end{tabular} } & \multicolumn{1}{c}{ \begin{tabular}{c} Backing Throughput \\ (\%) \\ \end{tabular} } \\ \hline 1 & 118.66 & 237.28 & **95.96** & 125.67 & 5.91 \\ 2 & 104.46 & 169.58 & **62.29** & 125.33 & 19.97 \\ 3 & 36.81 & 39.61 & 7.63 & 116.41 & **216.28** \\ 9 & 48.49 & 148.28 & **205.81** & 125.44 & 158.70 \\ 10 & 103.62 & 137.43 & **32.63** & 126.55 & 22.13 \\ 11 & 62.75 & 78.63 & 25.32 & 125.99 & **100.79** \\ 15 & 102.82 & 169.31 & 64.67 & 235.05 & **128.61** \\ 19 & 241.14 & 1050.58 & 335.67 & 267.84 & 11.07 \\ 26 & 492.00 & 2163.80 & 339.80 & 7145.89 & **1352.43** \\ 29 & 15.46 & 41.27 & **166.89** & 19.82 & 28.16 \\ \hline \hline \end{tabular} \end{table} Table 5: Detailed Profiling Results of DNNs Using the Profiler Module of DNNScaler for Some Representative Jobs (The higher improvement is shown by bold blue) which leads to poor power efficiency. _DNNScaler_, on the other hand, can achieve high throughput using Multi-Tenancy, and hence, better power efficiency. While _DNNScaler_ consumes more power than Clipper (44% on average for jobs shown in Table 6), its throughput improvement (435% on average) can definitely compensate for it. Hence, _DNNScaler_ can deliver significant power efficiency improvement compared with Clipper (up to 11x in Job 5, and 288% on average). We can conclude that _DNNScaler_ can remarkably enhance the power efficiency of real-time ML infrastructures. ### DNNScaler Results in Detail To better understand the behavior of _DNNScaler_, we go deeper into the details and discuss the results for a few jobs. First, we depict the latency trace of a few jobs to show how both _DNNScaler_ and Clipper can meet the SLO. The cumulative distribution of latency of requests for four jobs is depicted in Fig. 6. As can be seen, for both _DNNScaler_ and Clipper, 95% or more of the requests have a latency smaller or equal to SLO. This emphasizes the success of both approaches in meeting the SLO of jobs. Next, we discuss the Batching results for two jobs in Fig. 7(a)(c) and Fig. 7(b)(d), respectively. Since Clipper also uses the Batching mechanism, we show its results as well and compare them with _DNNScaler_. Both _DNNScaler_ and Clipper start with BS = 1. Since the initial latency is lower than SLO, both of them increase batch size to achieve higher throughput. The pseudo-binary mechanism of _DNNScaler_ jumps to a relatively big batch size, but when it detects the significant SLO violation, it immediately reduces the batch size and finds the suitable one after trying a few ones by applying its search routine. The Clipper's AIMD mechanism tries to adjust the batch size as well, but with a slower rate than _DNNScaler_, and consequently, it reaches the stable state later than _DNNScaler_. The ability of _DNNScaler_ to quickly find the suitable batch size helps it to immediately adapt to possible changes in SLO or slowdown of GPU due to an applied power cap. 
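The contrast between the two batch-size controllers discussed above can be summarized in code. The sketch below is illustrative rather than a faithful re-implementation of either system: `run_batch_and_measure_p95` is a hypothetical hook that serves requests at a given batch size and returns the observed 95th-percentile latency, the pseudo-binary search jumps aggressively and halves its step after an SLO violation, and the AIMD variant follows the additive step of four and the 10% back-off described for Clipper.

```python
def pseudo_binary_batch_search(slo, run_batch_and_measure_p95, bs_max=128):
    """DNNScaler-style pseudo-binary search (illustrative): jump to a large
    batch size, then halve the step and back off whenever the SLO is violated."""
    bs, step = 1, bs_max // 2
    while step >= 1:
        candidate = min(bs + step, bs_max)
        if run_batch_and_measure_p95(candidate) <= slo:
            bs = candidate            # latency still fine: keep the larger batch
            if bs == bs_max:
                break                 # cannot grow any further
        step //= 2                    # narrow the search around the current batch size
    return bs

def clipper_aimd_batch_search(slo, run_batch_and_measure_p95, step=4, bs_max=128):
    """Clipper-style AIMD (illustrative): additive increase by a fixed step until
    the tail latency surpasses the SLO, then a small 10% multiplicative back-off."""
    bs = 1
    while bs < bs_max and run_batch_and_measure_p95(bs) <= slo:
        bs += step
    return max(1, int(bs * 0.9))
```

In the running system the controller is invoked repeatedly, so a changed SLO or a slowdown of the GPU simply triggers a new search from the current operating point.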
Later, in Section 4.5, we study the behavior of _DNNScaler_ under a varying SLO. Next, we explore the behavior of _DNNScaler_ under the Multi-Tenancy approach. The detailed results for two jobs are depicted in Fig. 8. For Job 2 (Fig. 8(a)), _DNNScaler_ initially employs the latency estimations of different MTLs from matrix completion and compares them with the SLO to decide the maximum number of DNN instances it should launch to maximize the throughput while meeting the latency constraint. But it detects an SLO violation after launching the instances, meaning that the estimation was not 100% accurate (as expected). Hence, it terminates one instance and, since the new latency meets the SLO, it continues with the remaining number of instances. For Job 14, _DNNScaler_ deploys the maximum number of allowed instances (MTL = 10) based on the matrix completion estimation. After deployment of the instances, the latency is still below the SLO, but since there is no room for adding extra instances on the GPU, it continues with the current ones.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Job \#} & \multicolumn{2}{c}{Power (W)} & \multicolumn{2}{c}{Throughput} & \multicolumn{2}{c}{Power Efficiency} \\ & DNNScaler & Clipper & DNNScaler & Clipper & DNNScaler & Clipper \\ \hline 1 & 87.70 & 55.04 & 241.62 & 32.88 & 2.73 & 0.60 \\ 2 & 89.82 & 57.98 & 172.26 & 54.81 & 1.92 & 0.95 \\ 4 & 74.96 & 54.61 & 1254.10 & 116.08 & 16.73 & 2.13 \\ 5 & 63.04 & 51.78 & 188.80 & 121.57 & 29.96 & 2.35 \\ 6 & 90.58 & 59.96 & 415.70 & 84.59 & 4.59 & 1.41 \\ 8 & 71.57 & 55.74 & 127.60 & 44.02 & 1.78 & 0.79 \\ 9 & 73.33 & 57.88 & 150.60 & 60.54 & 2.05 & 1.05 \\ 10 & 118.06 & 64.17 & 138.84 & 50.63 & 1.18 & 0.79 \\ 14 & 87.74 & 57.32 & 239.30 & 71.89 & 2.73 & 1.25 \\ 18 & 109.84 & 65.80 & 634.90 & 144.58 & 5.78 & 2.20 \\ 19 & 75.94 & 54.34 & 1118.60 & 151.41 & 14.73 & 2.79 \\ 20 & 63.30 & 52.41 & 1839.80 & 200.78 & 29.07 & 3.83 \\ 21 & 90.63 & 65.25 & 414.50 & 155.09 & 4.57 & 2.38 \\ 29 & 122.44 & 86.39 & 40.93 & 22.51 & 0.33 & 0.26 \\ 30 & 132.19 & 88.98 & 40.72 & 24.72 & 0.31 & 0.28 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparing the Power Efficiency and Throughput of DNNScaler and Clipper

In both Batching and Multi-Tenancy, some short-lived spikes are observed in latency that violate the SLO. They happen for reasons unrelated to the DNNScaler settings (e.g., OS processes). Therefore, they are skipped to avoid excessive changes, which would lead to performance degradation. For long spikes that affect the tail latency significantly, DNNScaler readjusts the control knobs to mitigate them according to Algorithm 1.

### Sensitivity Analysis

During the execution of an application, any change in the external and/or internal parameters of the system can affect the latency and/or the SLO of the application. For example, the user that has submitted the job might decide to change the SLO, or a power cap might be applied to the GPU, which affects its frequency and, consequently, the latency of the job. As mentioned earlier in Section 3.2.2, _DNNScaler_ is designed to adapt to such changes during runtime. To evaluate the ability of _DNNScaler_ to adapt to such changes, we conduct another set of experiments. We consider two scenarios for each approach of _DNNScaler_ (Multi-Tenancy and Batching).
In one scenario, the SLO decreases during the runtime of the job, and in the other one, it increases. For Multi-Tenancy we employ Inception-V1, and for Batching we employ Inception-V4. The results are shown in Fig. 9 for Batching and in Fig. 10 for Multi-Tenancy. For the Batching approach, we see how _DNNScaler_ adaptively changes the batch size to meet the new SLO. In Fig. 9(a), as the SLO drops, _DNNScaler_ significantly reduces the batch size to avoid an SLO violation. On the other hand, in the presence of an increasing SLO (see Fig. 9(b)), _DNNScaler_ tries to employ a larger batch size to gain higher throughput. Note that a base-10 log scale is used for the Y axis. The left Y axis shows the latency and the right one shows the batch size.

As can be seen in Fig. 10(a), when the SLO is relaxed, _DNNScaler_ creates ten instances of the DNN model and deploys them on the GPU to increase the throughput. As the SLO is reduced by almost half, _DNNScaler_ immediately detects it and starts terminating the extra instances to meet the SLO. It eliminates five instances to meet the new tight SLO. In Fig. 10(b), we see the results for _DNNScaler_ (Multi-Tenancy approach) in an increasing SLO scenario. At first, _DNNScaler_ creates only four instances for deployment on the GPU. In the middle of the execution, when it detects the new, increased SLO, it deploys more instances to exploit the gap between the SLO and the latency in favor of throughput.

Figure 5: Comparing the throughput of DNNScaler and Clipper for all the jobs. The Y axis is shown in base-10 log scale. The B (Batching) and MT (Multi-Tenancy) on top of the bars indicate the approach selected by DNNScaler for that job.

Figure 6: Cumulative distribution of the latency of requests for four jobs. The vertical dotted red line shows the SLO of each job. The X axis is shown in base-10 log scale.

Figure 7: Detailed behavior of DNNScaler (Batching) and Clipper for two representative jobs.

Figure 8: Detailed behavior of DNNScaler under the Multi-Tenancy approach for two representative jobs.

### Discussion

**Sole Employment of Multi-Tenancy.** No previous work has used the Multi-Tenancy approach to increase the throughput of a single DNN in the way our work does. Therefore, in the experimental results section, we have no comparison with an approach that only uses Multi-Tenancy for all the jobs (i.e., we have Clipper, which only uses Batching for all the jobs, but there is no approach that only uses Multi-Tenancy). Not having such a comparison makes it difficult to understand the impact of Multi-Tenancy on jobs such as 3 and 7 (see Table 4) and how the Batching approach selected by _DNNScaler_ improves their throughput compared with Multi-Tenancy. To address these questions, in this section we compare the throughput of the Batching and Multi-Tenancy approaches for 6 jobs from Table 4 that were executed with the Batching approach (according to the _DNNScaler_ decision) in the earlier experiments. The Multi-Tenancy approach we use for these jobs is exactly the Multi-Tenancy approach of _DNNScaler_ that was used for the other jobs. The result is shown in Fig. 11. _Our goal is to verify that DNNScaler's decision to employ the Batching approach for these jobs was correct, and that Multi-Tenancy cannot improve their throughput more than Batching._ We see that in all the jobs, Batching yields higher throughput than Multi-Tenancy, so we conclude that _DNNScaler_ has selected the correct approach for them.
**Combining Batching and Multi-Tenancy.** A question that may arise is why we do not combine the two approaches to leverage the benefits of both. For example, in the Multi-Tenancy approach we mentioned that we use BS = 1 for all the instances; what if we used a larger batch size for them? To answer this question, we conduct another set of experiments. We consider two DNNs that were executed with Batching (ResV2-152 and PNAS-Large) and two that were executed with Multi-Tenancy (MobV1-1 and MobV1-025). For ResV2-152 and PNAS-Large, we select a fixed batch size of 8 (BS = 8, constant), increase the number of co-located instances from 1 to 4 (MTL = 1 to MTL = 4), and measure the throughput and latency of the DNNs for each MTL. ResV2-152 experiences a notable throughput improvement when going from MTL = 1 to MTL = 2; however, the improvement for MTL = 3 and MTL = 4 is negligible. PNAS-Large, on the other hand, not only experiences no throughput improvement, but even suffers from a reduction in throughput. As expected, the latency of both increases as the MTL grows. For MobV1-1 and MobV1-025, we consider a fixed number of co-located instances of 5 (MTL = 5, constant) and change the batch size from 1 to 8 (BS = 1, BS = 2, BS = 4, BS = 8). Again, we see that one of them (MobV1-1) can benefit from the combination of Batching and Multi-Tenancy in terms of throughput, while the other (MobV1-025) experiences no throughput improvement and only suffers from higher latency. We observe that the largest network (PNAS-Large) and the smallest one (MobV1-025) cannot benefit from the combination, while the other two can benefit up to a certain level. We conclude that combining Batching and Multi-Tenancy can lead to throughput improvement for some DNNs, but for others it only elongates the inference latency with no benefit in throughput. Hence, identifying the proper cases for combining Batching and Multi-Tenancy based on different aspects of the system, such as the size and computational complexity of the DNNs as well as the computing and memory capacity of the GPU, can be a future research direction.

Figure 9: Sensitivity analysis results for DNNScaler under the Batching approach for the Inception-V4 network.

Figure 10: Sensitivity analysis results for DNNScaler under the Multi-Tenancy approach for the Inception-V1 network.

Figure 11: Comparing the throughput under Batching and Multi-Tenancy. DNNScaler selects Batching for these jobs.

## 5 Related Work

While DNNs continue to deliver state-of-the-art results in various machine learning domains such as computer vision, their rapidly growing computational requirements have surpassed the growth in computing capacity of conventional CPUs [17]. Therefore, it is essential to investigate new hardware platforms, beyond traditional CPUs, to address the ever-growing computational demand of DNNs. To this end, a wide variety of DNN accelerators have been designed and implemented that aim to achieve various goals such as low latency, high throughput, or energy efficiency [10, 17, 23, 38, 54, 73]. Increasing throughput while achieving low latency is especially explored to address the requirements of real-time ML services deployed on warehouse-scale infrastructures [21, 26, 27]. These accelerators employ different computing cores such as ASICs [12, 13, 39], FPGAs [43, 69, 72], and GPUs [44, 55, 67], or different computing paradigms such as processing in/near-memory [14, 22, 36, 63, 4], for accelerating DNN inference.
GPU accelerators are a favorable choice for DNN inference due to their programmability and scalability [49]. To further improve the performance of DNN inference on GPU accelerators, various techniques such as Batching and Multi-Tenancy have been proposed, among others.

**Batching.** Using Batching to increase DNN inference throughput has been studied and employed in a large body of previous works [11, 56, 57, 61, 70]. Studies show that Batching can improve the throughput and energy efficiency of DNN inference on GPU accelerators [20, 32]. However, it elongates the latency of DNN inference as well, so it should be employed carefully. Pervasive CNN (P-CNN) [57] leverages Batching to improve the throughput of CNNs on GPUs. It uses big batch sizes for background tasks to maximize throughput and achieve energy efficiency; when selecting the batch size for such tasks, P-CNN considers the GPU memory. For interactive and real-time tasks, however, P-CNN selects small batch sizes to avoid unacceptable response times. Clipper [18] forms batches of inputs from a concurrent stream of prediction queries to leverage the benefits of Batching. It dynamically changes the batch size using an additive-increase-multiplicative-decrease (AIMD) scheme to find the optimal one that maximizes the throughput while meeting the latency requirement.

**Multi-Tenancy.** A large body of research has focused on the challenges and opportunities of Multi-Tenancy and the co-location of DNN inference [66, 68, 16, 35]. PERSEUS [42] and Jain et al. [34] studied the impact of Multi-Tenancy on the performance, cost, and latency of co-located DNN models. They showed that while co-location can help to improve the throughput, resource utilization, and energy efficiency of GPUs, it has a negative impact on latency. Approaches such as Baymax [9] and Laius [71] try to mitigate the impact of co-location on the latency of interactive jobs that share the GPU with throughput-oriented jobs. They aim to maximize the throughput of the throughput-oriented job while meeting the latency of the interactive job by reallocating time slots [9] or computing resources [71] of the GPU. These approaches usually consider the co-location of two or more different applications on a GPU and try to manage their latency or throughput with respect to some priority criteria. In our approach, however, we consider the case where a varying number of instances of the same application are co-located on a GPU, and we try to improve the overall throughput of that application while meeting its latency SLO. Moreover, we first evaluate the application to see if this type of Multi-Tenancy can help to improve its throughput, and only then proceed with the next steps, whereas the other approaches usually consider the throughput of the mixture of applications, not that of a single application.

## 6 Conclusion

In this paper, we performed an extensive set of analyses, revealing that DNNs can be categorized into two groups: those that achieve high throughput from Batching and those that achieve high throughput from Multi-Tenancy. Based on this observation, we proposed the _DNNScaler_ approach to improve the throughput of real-time ML services with latency constraints. The _DNNScaler_ Profiler module can successfully determine the approach that is more suitable for a specific DNN with a lightweight profiling mechanism.
Based on the output of the Profiler module, the Scaler module employs either adaptive batching (for the Batching approach) or instance co-location management (for the Multi-Tenancy approach) to maintain the latency SLO while maximizing the throughput. The experimental results show that DNNScaler can improve the throughput by up to 14x (218% on average) compared with Clipper, an approach that only leverages Batching and not Multi-Tenancy. Furthermore, we analyzed the sensitivity of both the Batching and Multi-Tenancy approaches of _DNNScaler_ to runtime changes in the SLO.
2306.05515
PeFLL: Personalized Federated Learning by Learning to Learn
We present PeFLL, a new personalized federated learning algorithm that improves over the state-of-the-art in three aspects: 1) it produces more accurate models, especially in the low-data regime, and not only for clients present during its training phase, but also for any that may emerge in the future; 2) it reduces the amount of on-client computation and client-server communication by providing future clients with ready-to-use personalized models that require no additional finetuning or optimization; 3) it comes with theoretical guarantees that establish generalization from the observed clients to future ones. At the core of PeFLL lies a learning-to-learn approach that jointly trains an embedding network and a hypernetwork. The embedding network is used to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork takes as input such descriptors and outputs the parameters of fully personalized client models. In combination, both networks constitute a learning algorithm that achieves state-of-the-art performance in several personalized federated learning benchmarks.
Jonathan Scott, Hossein Zakerinia, Christoph H. Lampert
2023-06-08T19:12:42Z
http://arxiv.org/abs/2306.05515v3
# PeFLL: A Lifelong Learning Approach ###### Abstract Personalized federated learning (pFL) has emerged as a popular approach to dealing with the challenge of statistical heterogeneity between the data distributions of the participating clients. Instead of learning a single global model, pFL aims to learn an individual model for each client while still making use of the data available at other clients. In this work, we present PeFLL, a new pFL approach rooted in lifelong learning that performs well not only on clients present during its training phase, but also on any that may emerge in the future. PeFLL learns to output client-specific models by jointly training an embedding network and a hypernetwork. The embedding network learns to represent clients in a latent descriptor space in a way that reflects their similarity to each other. The hypernetwork learns a mapping from this latent space to the space of possible client models. We demonstrate experimentally that PeFLL produces models of superior accuracy compared to previous methods, especially for clients not seen during training, and that it scales well to large numbers of clients. Moreover, generating a personalized model for a new client is efficient as no additional fine-tuning or optimization is required by either the client or the server. We also present theoretical results supporting PeFLL in the form of a new PAC-Bayesian generalization bound for lifelong learning and we prove the convergence of our proposed optimization procedure. ## 1 Introduction Federated Learning (FL) (McMahan et al., 2017) has emerged as a standard protocol for privacy-preserving machine learning in a distributed setting, where a multitude of clients, e.g., user devices, collaboratively learn prediction models without sharing their data directly with any other party. In practice, client data distributions may differ significantly from each other due to a variety of reasons including differing user behavior, location, preferences, devices etc, which can make learning a single global model sub-optimal (Li et al., 2020; Kairouz et al., 2021). Personalized Federated Learning (pFL) (Smith et al., 2017) is a means of dealing with such statistical heterogeneity, by allowing clients to learn individual models while still benefiting from each other. The key challenge of pFL lies in balancing the benefit of increased data that is available for joint training with the need to remain well adapted to each client's own distribution. Most pFL methods achieve this through a combination of global model training and some form of finetuning or training of a client-specific model. This approach, however, leads to shortcomings when not all clients we wish to predict on in a federated network are present during training. Consider, for instance, a federated learning system over millions of mobile devices (Hard et al., 2018). Only a fraction of these are likely to be seen during training. Moreover, new devices will also be entering the system constantly, for instance, whenever a person purchases a new device. Having to train or finetune on these new clients in order to obtain personalized model for all of them incurs computational costs and potentially also additional communication costs. It will typically also cause the model quality to vary depending on the amount of data available on the clients, which in pFL is typically small. 
Moreover, methods that just optimize parameters optimally for the clients available at training time might overfit to these, thereby producing worse models than desirable on new clients. In this work, we address these challenges and introduce a new pFL framework: PeFLL (for Personalized Federated Lifelong Learning). After its training stage, PeFLL requires only one forward pass through a deep network in order to generate a personalized model for any client (current or future) which can then be used immediately, without the need for further training or fine-tuning. Moreover, due to its root in lifelong learning (Pentina and Lampert, 2014), PeFLL generalizes well. In our experiments, it produced personalized models for new clients with accuracy comparable to the ones for the clients it has been trained on. PeFLL consists of two main components: an embedding network, which learns to generate a descriptor vector for a client when fed in that client's data, and a hypernetwork (Ha et al., 2017), that takes in a client descriptor and outputs a model for the client that produced that descriptor. PeFLL learns the embedding network and hypernetwork simultaneously by training them to output personalized models that perform well on the clients available at training time. The goal of PeFLL is to train these components in such a way that they generalize well beyond the training clients. This is achieved by constructing PeFLL's objective function to regularize the parameters and outputs of the embedding and hypernetwork. We formalize PeFLL's ability to generalize to unseen clients by providing a generalization bound in a PAC-Bayesian framework. Furthermore, we analyze the convergence behavior of PeFLL's training procedure. We conclude with an experimental evaluation of PeFLL's performance using standard pFL-benchmarks. We demonstrate that PeFLL obtains superior performance to prior approaches in an extensive range of settings. ## 2 Related Work Personalized federated learning Soon after federated learning was first proposed (McMahan et al., 2017), it was observed that it can be beneficial to personalize the learned models to individual client data distributions, thereby overcoming the statistical heterogeneity between clients. Existing approaches typically follow one of multiple blueprints: _multi-task methods_(Smith et al., 2017; Marfoq et al., 2021; Dinh et al., 2021; Li et al., 2021; Dinh et al., 2020; Hanzely and Richtarik, 2020) learn individual per-client models, while sharing information between clients, e.g. through regularization towards a central model. _Meta-learning methods_ learn a shared model, which the clients can quickly adapt or finetune to their individual data distributions, e.g. by a small number of gradient updates (Fallah et al., 2020; Jiang et al., 2019). _Decomposition-based methods_ split the learnable parameters into two groups: those that are meant to be shared between clients and those that are learned on a per-client basis. This allows, e.g. that clients learn a shared feature representation but individual classification heads (Arivazhagan et al., 2019; Collins et al., 2021), or per-client data embeddings which are processed further by a global model (Bui et al., 2019; Liang et al., 2020). _Clustering-based methods_(Ghosh et al., 2020; Mansour et al., 2020) divide the clients into a fixed number of subgroups and learn individual models for each cluster. Hypernetworks (Ha et al., 2017) have also previously been employed in the context of pFL. Ma et al. 
(2022) learns personalized models as weighted linear combinations of all other personalized models, where the weights for each network layer are predicted by per-client hypernetworks. Closest to our work, Shamsian et al. (2021) also uses a hypernetwork to generate each clients' personalized model. However, in contrast to our method, there the server learns client descriptors individually for each client. Such a non-parametric approach has a number of downsides. First, it leads to an undesirable stateful optimization problem, in which at any time the server has to know the client participating in training in order to retrieve their individual parameters. Second, the number of parameters stored at the server grows with the number of clients, which can cause scalability issues in large-scale applications. Third, the hypernetwork cannot immediately be evaluated for clients that are not part of the training set, as first new descriptors for those have to be inferred. This requires an optimization procedure with multiple client-server communication rounds. **Lifelong learning** The idea of learning from a number of tasks something that makes learning easier for future tasks has appeared in the machine literature under different names, such as _continual learning_(Ring, 1994), _lifelong learning_(Thrun and Mitchell, 1995), _learning to learn_(Thrun and Pratt, 1998), _inductive bias learning_(Baxter, 2000). Besides a plethora of practical algorithms (see, (Hospedales et al., 2021; Chen and Liu, 2018), for surveys), recent years have also seen a growing interest in the theoretical properties of these methods (Pentina and Lampert, 2014; Amit and Meir, 2018; Rothfuss et al., 2021; Guan and Lu, 2022; Rezazadeh, 2022). The results are typically generalization guarantees from observed to futures tasks, often the form of PAC-Bayesian bound, as we also provide in Section 4 and Appendix A. Most existing theoretical results are not applicable to the situation we study in this work, though, because their formalization of the learning process does not allow for the use of a hypernetwork to predict model parameters. There are two exceptions: one is (Pentina and Lampert, 2015), which was the first to prove bounds for learning an algorithm in a PAC-Bayesian setting. However, it does not fit our setting well, because it aims at learning a procedure for adapting to temporally changing data distributions. The second exception is (Rezzadeh, 2022). This is closest to our work, but the bound it proves is not well adapted to the federated learning setting, in which the number of clients is large but the amount of data per client might be small. We provide a more specific comparison in Section 4 and Appendix A. ## 3 Method We work in a standard supervised federated learning setting. There is a (possibly very large) number, \(n\), of clients, each of which has a data set, \(S_{i}=\{(x_{1}^{i},y_{1}^{i}),\ldots,(x_{m^{i}}^{i},y_{m^{i}}^{i})\}\subset \mathcal{X}\times\mathcal{Y}\), for \(i\in\{1,\ldots,n\}\), sampled from a client-dependent data distribution \(D_{i}\). The data distributions may differ between clients, \(D_{i}\neq D_{j}\) for \(i\neq j\). For any model, \(\theta\in\mathbb{R}^{d}\), the client can compute its training loss, \(\mathcal{L}(\theta;S_{i})=\frac{1}{m^{i}}\sum_{j=1}^{m^{i}}\ell(x_{j}^{i},y_{ j}^{i},\theta)\), where \(\ell:\mathcal{X}\times\mathcal{Y}\times\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}\) is a loss function. For simplicity of exposition, we assume the loss to be identical across clients. 
The goal is to learn client-specific models, \(\theta_{i}\), in a way that exploits the benefit of sharing information between clients, while adhering to the principles of federated learning. In this work, we adopt a _hypernetwork_ approach: for any client, a personalized model, \(\theta\in\mathbb{R}^{d}\), is predicted by a shared deep network, \(h:\mathbb{R}^{l}\rightarrow\mathbb{R}^{d}\), that takes as input a _client descriptor_\(v\in\mathbb{R}^{l}\). To compute client descriptors, we use an embedding network, \(\phi:\mathcal{X}\times\mathcal{Y}\rightarrow\mathbb{R}^{l}\), which takes individual data points as input, and average the embeddings: \(v(S)=\frac{1}{|S|}\sum_{(x,y)\in S}\phi(x,y)\). We denote the hypernetwork's parameters as \(\eta_{h}\) and the embedding network's parameters as \(\eta_{v}\). As shorthand, we write \(\eta=(\eta_{h},\eta_{v})\). The core challenge is to train the hypernetwork and the embedding network such that they produce good models not only for clients seen during the training step, but that they also _generalize_ to future clients. To achieve this, we take inspiration from _lifelong learning_, which studies systems that learn multiple tasks in a way that facilitates the learning of future tasks. Specifically, in Section 4 we prove a generalization bound that establishes that the test loss of models for future clients can be estimated purely from observable quantities: the training loss on the observed clients and two regularization terms, one on the hypernetwork parameters (across-client regularization), and one on the hypernetwork outputs (within-client regularization). Based on the generalization bound, we propose the _Personalized Federated Lifelong Learning (PeFLL)_ algorithm, which consists of solving the following optimization problem \[\min_{\eta_{h},\eta_{v}}\quad\lambda_{h}\|\eta_{h}\|^{2}+\lambda_{v}\|\eta_{v} \|^{2}+\sum_{i=1}^{n}\mathcal{L}\big{(}\,h(v(S_{i};\eta_{v});\eta_{h});S_{i}\, \big{)}+\lambda_{\theta}\big{\|}h(v(S_{i};\eta_{v});\eta_{h})\big{\|}^{2}, \tag{1}\] where \(\|\cdot\|\) is the \(L^{2}\)-norm. PeFLL performs this optimization in a way that is compatible with the constraints imposed by the federated learning setup. Structurally, it splits the objective (1) into several parts: a _server objective_, which consist of the first two (regularization) terms, \(f(\eta_{h},\eta_{v})=\lambda_{h}\|\eta_{h}\|^{2}+\lambda_{v}\|\eta_{v}\|^{2}\), and multiple _per-client objectives_, each of which consists of the terms inside the summation \(f_{i}(\theta_{i};S_{i})=\mathcal{L}(\theta_{i};S_{i})+\lambda_{\theta}\big{\|} \theta_{i}\|^{2}\). The terms are coupled through the identity \(\theta_{i}=h(v(S_{i};\eta_{v});\eta_{h})\). This split allows PeFLL to distribute the necessary computation efficiently, preserving the privacy of the client data, and minimizing the necessary communication overhead. Pseudocode of the specific steps is provided in Algorithms 1 and 2. ``` 0: target client with private dataset \(S\) 1:Server sends embedding network \(\eta_{v}\) to client 2:Client selects a data batch \(B\subseteq S\) 3:Client computes \(v=v(B;\eta_{v})\) 4:Client sends descriptor \(v\) to server 5:Server computes \(\theta=h(v;\eta_{h})\) 6:Server sends personalized model \(\theta\) to client ``` **Algorithm 1**PeFLL-predict We start by describing the PeFLL-predict routine (Algorithm 1), which can predict a personalized model for any target client. 
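To make the model-generation path of Algorithm 1 concrete, the following is a minimal PyTorch-style sketch (the step-by-step walkthrough of Algorithm 1 continues below). It is not the implementation used in the paper: the MLP architectures, layer sizes, and the use of a flat parameter vector for the client model are illustrative assumptions, and `d` must match the number of parameters of the client model.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """phi(x, y): embeds one labelled example into the descriptor space R^l."""
    def __init__(self, in_dim, num_classes, l=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim + num_classes, 128),
                                 nn.ReLU(),
                                 nn.Linear(128, l))

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=-1))

class HyperNet(nn.Module):
    """h(v): maps an l-dimensional client descriptor to the d parameters
    of the personalized client model."""
    def __init__(self, l, d, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(l, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))

    def forward(self, v):
        return self.net(v)

def client_descriptor(embed, xs, ys_onehot):
    """v(S): average of the per-example embeddings of a data batch."""
    return embed(xs, ys_onehot).mean(dim=0)

@torch.no_grad()
def generate_personalized_model(hyper, embed, xs, ys_onehot, client_model):
    """PeFLL-predict-style forward pass: descriptor -> hypernetwork -> model."""
    theta = hyper(client_descriptor(embed, xs, ys_onehot))
    # copy the flat parameter vector into the client model (inference only)
    torch.nn.utils.vector_to_parameters(theta, client_model.parameters())
    return client_model
```

During training, the generated parameters would instead be kept differentiable (e.g., via a functional forward pass) so that gradients can flow back into the hypernetwork and embedding network.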
First, the server sends the current embedding network, \(\eta_{v}\), to the client (line 1), who evaluates it on all or a subset of its data to compute the client descriptor (line 3). Next, the client sends its descriptor to the server (line 4), who evaluates the hypernetwork on it (line 5). The resulting personalized model is sent back to the client (line 6), where it is ready for use. Overall, only two server-to-client and one client-to-server communication steps are required before the client has obtained a functioning personalized model (see Figure 1). The training routine for PeFLL (Algorithm 2) mostly adopts a standard stochastic optimization pattern in a federated setting. In each iteration the server selects a batch of available clients (line 2) and broadcasts the embedding model, \(\eta_{v}\), to all of them (line 3). Then, each client in parallel evaluates its descriptor, \(v_{i}\) (line 6), sends it to the server (line 7), and receives a personalized model from the server in return (lines 8, 9). At this point the forward pass is over and backpropagation starts. To this end, each client performs local SGD for \(k\) steps on its personalized model and personal data (line 10). It sends the resulting update vector, \(\Delta\theta_{i}\), to the server (line 11), where it acts as a proxy for \(\frac{\partial f_{i}}{\partial\theta_{i}}\). According to the chain rule, \(\frac{\partial f_{i}}{\partial\eta_{h}}=\frac{\partial f_{i}}{\partial\theta_{i}}\frac{\partial\theta_{i}}{\partial\eta_{h}}\) and \(\frac{\partial f_{i}}{\partial v_{i}}=\frac{\partial f_{i}}{\partial\theta_{i}}\frac{\partial\theta_{i}}{\partial v_{i}}\). The server can evaluate both expressions using backpropagation (line 12), because all required quantities are available to it now. Thereby, it obtains update vectors \(\Delta\eta_{h}^{(i)}\) and \(\Delta v_{i}\), the latter of which it sends to the client (line 13) as a proxy for \(\frac{\partial f_{i}}{\partial v_{i}}\). Again based on the chain rule (\(\frac{\partial f_{i}}{\partial\eta_{v}}=\frac{\partial f_{i}}{\partial v_{i}}\frac{\partial v_{i}}{\partial\eta_{v}}\)), the client computes an update vector for the embedding network, \(\Delta\eta_{v}^{(i)}\) (line 14), and sends it back to the server (line 15). Finally, the server updates all network parameters from the average of the per-client contributions as well as the contributions from the server objective (lines 17, 18).

Figure 1: Communication protocol of PeFLL-predict for generating personalized models.

Figure 2: Data flow for PeFLL model generation (forward pass, left) and training (backward pass, right). The client descriptor, \(v_{i}\), and the client model \(\theta_{i}\) are small. Transmitting them and their update vectors is efficient. The hypernetwork, \(\eta_{h}\), can be large, but it remains on the server.

**Discussion.** PeFLL has a number of desirable properties: 1) it makes efficient use of the available resources. The hypernetwork is evaluated only on the server, and its parameters are only held there. This is important, because hypernetworks can be large and computationally costly to evaluate, so one would want to avoid sending them to clients. 2) clients do not have to share their datasets. These are only required to compute the client descriptors and--during training--the gradient of the loss with respect to the model parameters. Both of these steps take place on the client devices. 3) it has low latency and communication cost.
Generating a model for a client requires communicating only new and small quantities: a) the parameters of the embedding network, which is typically small, b) the client descriptors, which are low dimensional vectors, and c) the personalized models, which typically also can be kept small, because they only have to solve client-specific rather than general-purpose tasks. For the backward pass in the training phase, three additional quantities need to be communicated: a) the clients' suggested parameter updates, which are of equal size as the model parameters, b) gradients with respect to the client descriptors, which are of equal size as the descriptors, and c) the embedding network's updates, which are of the same size as the embedding network. All of these quantities are rather small compared to, e.g., the size of the hypernetwork, which PeFLL avoids sending. ### Convergence In this section, we establish the convergence of PeFLL's training procedure. Specifically, we give guarantees in the form of bounding the expected average gradient norm, as is common for deep stochastic optimization algorithms. The proof and full formulation can be found in Appendix B. **Theorem 3.1**.: _Under standard smoothness and boundedness assumptions (see appendix), PeFLL's optimization after \(T\) steps fulfills_ \[\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left\|\nabla F(\eta_{t})\right\|^{2}\leq \frac{(F(\eta_{0})-F_{*})}{\sqrt{cT}}+\frac{L(6\sigma_{1}^{2}+4k\gamma_{G}^{2} )}{k\sqrt{cT}}+\frac{224cL_{1}^{2}b_{1}^{2}b_{2}^{2}}{T}+\frac{8b_{1}^{2} \sigma_{2}^{2}}{b}+\frac{14L_{1}^{2}b_{2}^{2}\sigma_{3}^{2}}{b}, \tag{2}\] _where \(F\) is the PeFLL objective (1), which is lower bounded by \(F_{*}\). \(\eta_{0}\) are the parameter values at initialization, \(\eta_{1},\ldots,\eta_{T}\) are the intermediate parameter values. \(L,L_{1}\) are smoothness parameters of \(F\) and the local models. \(b_{1},b_{2}\) are bounds on the norms of the gradients of the local model and the hypernetwork, respectively. \(\sigma_{1}\) is a bound on the variance of stochastic gradients of local models, and \(\sigma_{2},\sigma_{3}\) are bounds on the variance due to the clients generating models with data batches of size \(b\) instead of their whole training set. \(\gamma_{G}\) is a bound on the dissimilarity of clients, \(c\) is the number of clients participating at each round, and \(k\) is the number of local SGD steps performed by the clients._ The proof resembles convergence proofs for FedAvg with non-i.i.d. clients, but differs in three aspects: 1) due to the non-linearity of the hypernetwork, we cannot assume that gradients computed from batches are unbiased. 2) the updates to the network parameters are not simple averages, but consist of multiple gradient steps on the clients, which the server processes further using the chain rule. 3) the objective includes regularization terms that only the server can compute. DiscussionTheorem 3.1 characterizes the convergence rate of PeFLL's optimization step in terms of the number of iterations, \(T\), and some problem-specific constants. For illustration, we first discuss the case where the client descriptors are computed from the complete client dataset (\(B=S_{i}\) in Algorithm 1, line 5). In that case, \(\sigma_{2}\) and \(\sigma_{3}\) vanish, such that only three terms remain in (2). The first two are of order \(\frac{1}{\sqrt{T}}\), while the third one is of order \(\frac{1}{T}\). 
For sufficiently large \(T\), the first two terms dominate, resulting in the same order of convergence as FedAvg (Karimireddy et al., 2020). If clients compute their descriptors from batches, two additional variance terms emerge in the bound, which depend on the size of the batches used by the clients to compute their descriptors. It is always possible to control these terms, though: for large \(S_{i}\), one can choose \(B\) sufficiently large to make the additional terms as small as desired, and for small \(S_{i}\), setting \(B=S_{i}\) is practical, which will make the additional terms disappear completely, see above. ## 4 Generalization In this section we prove a new generalization bound for lifelong learning. Before formulating and proving our main results, we remind the reader of the PAC-Bayesian learning framework (McAllester, 1998) and its use for obtaining guarantees for lifelong learning. PAC-Bayesian learning and lifelong learningIn standard PAC-Bayesian learning, we are given a set of possible models, \(\mathcal{H}\), and a prior distribution over these \(P\in\mathcal{M}(\mathcal{H})\), where \(\mathcal{M}(\cdot)\) denotes the set of probability measures over a base set. Given a dataset, \(S\), _learning a model_ means constructing a new (posterior) distribution, \(Q\in\mathcal{M}(\mathcal{H})\), which is meant to give high probability to models with small loss. The posterior distribution, \(Q\), induces a stochastic predictor: for any input, one samples a specific model, \(f\sim Q\), and outputs the result of this model applied to the input. Note that \(Q\) can in principle be a Dirac delta distribution at a single model, resulting in a deterministic predictor. However, for large (uncountably infinite) model such a choice typically does not lead to strong generalization guarantees. For conciseness of notation, in the following we do not distinguish between the distributions over models and their stochastic predictors. The quality of a stochastic predictor, \(Q\), on a data point \((x,y)\) is quantified by its expected loss, \(\ell_{(x,y)}(Q)=\mathbb{E}_{f\sim Q}\ell(x,y,f)\). From this, we define the empirical error on a dataset, \(S\), as \(\frac{1}{[S]}\sum_{(x,y)\in S}\ell_{(x,y)}(Q)\), and its expected loss with respect to a data distribution, \(D\), as \(\mathbb{E}_{(x,y)\sim D}\ell_{(x,y)}(Q)\). Ordinary PAC-Bayesian generalization bounds provide high-probability upper bounds to the expected loss of a stochastic predictors by the corresponding empirical loss as well as some complexity terms, which typically include the Kullback-Leibler divergence between the chosen posterior distribution and the original (training-data independent) prior, \(\mathrm{KL}(Q||P)\)(McAllester, 1998). Typically, the posterior distribution is not chosen arbitrarily, but it is the result of a _learning algorithm_, \(A:(\mathcal{X}\times\mathcal{Y})^{m}\to\mathcal{M}(\mathcal{H})\), which takes as input the training data and, potentially, the prior distribution. The idea of _lifelong learning_ (sometimes also called _meta-learning_ or _learning to learn_) is to _learn the learning algorithm_(Baxter, 2000; Pentina and Lampert, 2015). To study this theoretically, we adopt the setting where \(n\) learning tasks is available, which we write as tuples, \((S_{i},D_{i})\), for \(i\in\{1,\dots,n\}\), each with a data set \(S_{i}\subset\mathcal{X}\times\mathcal{Y}\) that is sampled from a corresponding data distribution \(D_{i}\in\mathcal{M}(\mathcal{X}\times\mathcal{Y})\). 
For simplicity, we assume that all datasets are of the same size, \(m\). We assume tasks are sampled i.i.d. from a _task environment_, \(\mathcal{T}\), which is simply a data distribution over such tuples. Again adopting the PAC-Bayesian framework, we assume that a data-independent (meta-)prior distribution over learning algorithms is available, \(\mathcal{P}\in\mathcal{M}(\mathcal{A})\), where \(\mathcal{A}\) is the set of possible algorithms, and the goal is use the observed task data to construct a (meta-)posterior distribution, \(\mathcal{Q}\in\mathcal{M}(\mathcal{A})\). As before, the resulting procedure is stochastic: at every invocation, the system samples an algorithm \(A\sim\mathcal{Q}\). It applies this to the training data, obtaining a posterior distribution \(A(S)\), and it makes predictions by sampling models accordingly, \(f\sim A(S)\). Analogously to above situation, we define two measures of quality for such a stochastic algorithms. Its _empirical loss on the data of the observed clients_: \[\widehat{\text{er}}(\mathcal{Q}):=\frac{1}{n}\sum_{i=1}^{n}\mathop{\mathbb{E} }_{A\sim\mathcal{Q}}\frac{1}{m}\sum_{j=1}^{m}\ell(x_{j}^{i},y_{j}^{i},A(S_{i} )), \tag{3}\] and its _expected loss on future clients_, \[\text{er}(\mathcal{Q}):=\mathop{\mathbb{E}}_{(D,S)\sim\mathcal{T}}\mathop{ \mathbb{E}}_{A\sim\mathcal{Q}}\mathop{\mathbb{E}}_{(x,y)\sim D}\ell(x,y,A(S)). \tag{4}\] The following theorem provides a connection between both quantities. **Theorem 4.1**.: _Let \(\mathcal{P}\in\mathcal{M}(\mathcal{A})\) and \(P_{1},\ldots,P_{n}\in\mathcal{M}(\mathcal{H})\) be a meta-prior and prior distributions, respectively, which are chosen independently of the observed training data, \(S_{1},\ldots,S_{n}\). Assume that the loss function is bounded in \([0,M]\). Then, for all \(\delta\geq 0\) it holds with probability at least \(1-\delta\) over the sampling of the datasets, that for all distributions \(\mathcal{Q}\in\mathcal{M}(\mathcal{A})\) over algorithms,_ \[\begin{split}\text{er}(\mathcal{Q})\leq\widehat{\text{er}}( \mathcal{Q})+M\sqrt{\frac{\operatorname{KL}(\mathcal{Q}||\mathcal{P})+\log( \frac{2\sqrt{n}}{\delta})}{2n}}+M\underset{A\sim\mathcal{Q}}{\mathbb{E}}\sqrt {\frac{\sum_{i=1}^{n}\operatorname{KL}(A(S_{i})||P_{i})+\log(\frac{8mn}{\delta })+1}{2mn}}\end{split} \tag{5}\] where \(\operatorname{KL}\) denotes the Kullback-Leibler divergence. We provide the proof in Appendix A **Relation to previous work** A similar generalization bound as the one underlying Theorem 4.1 appeared in (Rezzadeh, 2022, Theorem 5.2), where it is formulated for the problem of learning hyperparameters. The bound there, however, is not well suited to the federated setting. First, it contains a term of order \(\frac{(n+m)\log m\sqrt{n}}{nm}\), which is not necessarily small for large \(n\) (clients) but small \(m\) (samples per client). In contrast, the corresponding terms in our bound, \(\frac{\log\sqrt{n}}{n}\) and \(\frac{\log nm}{nm}\), are both small in this regime. Second, when applying the bound from (Rezzadeh, 2022) an additional term of order \(\frac{KL(\mathcal{Q}||\mathcal{P})}{m}\) would appear, which can be large in the case where the dimensionality of the network parameters is large but \(m\) is small. **PeFLL's objective** Now by choosing specific prior and posterior distributions we provide a version of the bound that motivates PeFLL's learning objective. Let the learning algorithm be parameterized by the hypernetwork weights, \(\eta_{h}\), and the embedding networks weights, \(\eta_{v}\). 
As meta-posterior we use a Gaussian distribution, \(\mathcal{Q}=\mathcal{Q}_{h}\times\mathcal{Q}_{v}\) for \(\mathcal{Q}_{h}=\mathcal{N}(\eta_{h};\alpha_{h}\text{Id})\), and \(\mathcal{Q}_{v}=\mathcal{N}(\eta_{v};\alpha_{v}\text{Id})\), where \(\eta_{h}\) and \(\eta_{v}\) are learnable and \(\alpha_{v}\) and \(\alpha_{h}\) are fixed. For any \((\bar{\eta}_{h},\bar{\eta}_{v})\sim\mathcal{Q}\) and training set \(S\), the learning algorithm produces a posterior distribution \(Q=\mathcal{N}(\theta;\alpha_{\theta}\text{Id})\), where \(\theta=h(v;\eta_{h})\) with \(v=\frac{1}{|S|}\sum_{(x,y)}\phi(x,y;\eta_{v})\). As prior, we use \(\mathcal{N}(0;\alpha_{\theta}\text{Id})\). With these choices, we have \(\operatorname{KL}(\mathcal{Q},\mathcal{P})=\alpha_{h}\|\eta_{h}\|^{2}+\alpha _{v}|\eta_{v}\|^{2}\) and \(\operatorname{KL}(Q_{i},P)=\alpha_{\theta}\|\theta\|^{2}\). Inserting these into Theorem 4.1, we get the following. **Theorem 4.2**.: _For all \(\delta>0\) the following statement holds with probability at least \(1-\delta\) over the clients. For all parameter vectors, \(\eta=(\eta_{h},\eta_{v})\):_ \[\begin{split}&\underset{(D,S)\sim\mathcal{T}}{\mathbb{E}}\sum _{(x,y)\sim D}\underset{\begin{subarray}{c}\bar{\eta}_{h}\sim\mathcal{Q}_{h} \\ \bar{\eta}_{v}\sim\mathcal{Q}_{v}\end{subarray}}{\mathbb{E}}\ell\big{(}x,y,h(v(S; \bar{\eta}_{v});\bar{\eta}_{h})\big{)}\leq\frac{1}{n}\sum_{i=1}^{n}\ \frac{1}{m}\underset{(x,y)\in S_{i}}{\sum_{\begin{subarray}{c}\bar{\eta}_{h} \sim\mathcal{Q}_{h}\\ \bar{\eta}_{v}\sim\mathcal{Q}_{v}\end{subarray}}}\underset{\begin{subarray}{c} \bar{\eta}_{h}\sim\mathcal{Q}_{h}\\ \bar{\eta}_{v}\sim\mathcal{Q}_{v}\end{subarray}}{\mathbb{E}}\ell\big{(}x,y,h(v (S_{i};\bar{\eta}_{v});\bar{\eta}_{h})\big{)}\\ &+\sqrt{\frac{\frac{1}{2\alpha_{h}}\|\eta_{h}\|^{2}+\frac{1}{2 \alpha_{v}}\|\eta_{v}\|^{2}+\log(\frac{2\sqrt{n}}{\delta})}{2n}}+\underset{ \begin{subarray}{c}\bar{\eta}_{h}\sim\mathcal{Q}_{h}\\ \bar{\eta}_{v}\sim\mathcal{Q}_{v}\end{subarray}}{\mathbb{E}}\sqrt{\frac{\frac{1} {2\alpha_{\theta}}\sum_{i=1}^{n}\|h(v(S_{i};\bar{\eta}_{v});\bar{\eta}_{h})\|^ {2}+\log(\frac{8mn}{\delta})+1}{2mn}}.\end{split} \tag{6}\] **Discussion** Theorem 4.2 states that the expected loss of the learned models on future clients (which is the real value of interest) can be controlled by the empirical loss on the observed clients' data (which we can compute) plus two terms that act as regularizers. The first term penalizes extreme values in the parameters of the hypernetwork and the embedding network. Thereby, it prevents overfitting for the part of the learning process that accumulates information across clients. The second term penalizes extreme values in the output of the hypernetwork, which are the parameters of the per-client models. By this, it prevents overfitting on each client. Because the guarantee holds uniformly over all choices of parameters, we can optimize the right hand side with respect to \(\eta\) and the guarantee will still be fulfilled for the minimizer. The PeFLL-train step mirrors this optimization in simplified form: we drop constant terms and use just the mean vectors of the network parameters instead of sampling them stochastically. Also, we drop the square roots from the regularization terms to make them numerically better behaved. ## 5 Experiments In this section we report on our experimental evaluation. The values reported in every table and plot are given as the mean together with the standard deviation across three random seeds. 
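For concreteness, the regularized objective of Eq. (1), whose two regularization terms Theorem 4.2 motivates, can be sketched as follows before turning to the empirical evaluation. This is a simplified, single-machine sketch and not the federated implementation of Algorithm 2; the weight-decay coefficients and the functional `forward_fn` interface for evaluating a client model from a flat parameter vector are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def pefll_objective(hyper, embed, client_batches,
                    lam_h=1e-3, lam_v=1e-3, lam_theta=1e-4):
    """Simplified single-machine version of the objective in Eq. (1):
    per-client loss of the hypernetwork-generated model, plus L2 penalties on
    the hyper/embedding parameters and on the generated model parameters.

    `client_batches` is an iterable of (xs, ys_onehot, ys, forward_fn) tuples,
    where forward_fn(theta, xs) evaluates the client model functionally with
    the flat parameter vector theta (so theta stays differentiable)."""
    obj = lam_h * sum((p ** 2).sum() for p in hyper.parameters()) \
        + lam_v * sum((p ** 2).sum() for p in embed.parameters())
    for xs, ys_onehot, ys, forward_fn in client_batches:
        v = embed(xs, ys_onehot).mean(dim=0)      # client descriptor v(S_i)
        theta = hyper(v)                          # personalized parameters
        logits = forward_fn(theta, xs)            # client model prediction
        obj = obj + F.cross_entropy(logits, ys) + lam_theta * (theta ** 2).sum()
    return obj
```

Keeping the generated parameters as a differentiable function of the descriptor, rather than copying them into a module, is what lets the per-client losses backpropagate into the hypernetwork and embedding network.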
**Datasets** For our experiments, we use three datasets that are standard benchmarks for federated learning: CIFAR10 (Krizhevsky, 2009), CIFAR100 (Krizhevsky, 2009) and FEMNIST (Caldas et al., 2018). Following prior pFL works, for CIFAR10 and CIFAR100 we simulate statistically heterogeneous clients by randomly assigning a fixed fraction of the total number of \(C\) classes to each of \(n\) clients, for \(n\in\{100,500,1000\}\). The clients then receive test and train samples from only these classes. For CIFAR10 each client has \(2\) of the \(C=10\) classes and for CIFAR100 each client has \(10\) of the \(C=100\) classes. FEMNIST is a federated dataset for handwritten character recognition, with \(C=62\) classes (digits and lower/upper case letters) and 817,851 samples. We keep its natural partition into 3597 clients based on writer identity. We randomly partition the clients into 90% seen and 10% unseen. The seen clients are used for training, while the unseen clients do not contribute to the initial training and are only used to assess each method's performance on new clients as described below. **Baselines** We evaluate and report results for the following pFL methods, for which we build on the _FL-Bench_ repository 1: Per-FedAvg (Fallah et al., 2020), which optimizes the MAML (Finn et al., 2017) objective in a federated setting; FedRep (Collins et al., 2021), which trains a global feature extractor and per-client classifier heads; pFedMe (Dinh et al., 2020), which trains a personal model per client using a regularization term to penalize differences from a global model; kNN-Per (Marfoq et al., 2022), which trains a single global model that each client uses individually to extract features of its data for use in a \(k\)-nearest-neighbor-based classifier; and pFedHN (Shamsian et al., 2021), which jointly trains a hypernetwork and per-client embedding vectors to output a personalized model for each client. For reference, we also include results of (non-personalized) FedAvg (McMahan et al., 2017) and of training a local model separately on each client. Footnote 1: [https://github.com/KarhouTam/FL-bench](https://github.com/KarhouTam/FL-bench) **Constructing models for unseen clients** We are interested in the performance of PeFLL not just on the training clients but also on clients not seen at training time. As described in Algorithm 1, inference on unseen clients is simple and efficient, as it does not require any model training by either the client or the server. With the exception of kNN-Per, all other methods require some form of finetuning in order to obtain personalized models for new clients. Per-FedAvg and pFedMe obtain personal models by finetuning the global model locally at each client for some small number of gradient steps. FedRep freezes the global feature extractor and optimizes a randomly initialized head locally at each new client. pFedHN freezes the trained hypernetwork and optimizes a new embedding vector for each new client, which requires not just local training but also several communication rounds with the server. The most efficient baseline for inference on a new client is kNN-Per, which requires only a single forward pass through the trained global model and the evaluation of a \(k\)-nearest-neighbor-based predictor. **Models** Following prior works in pFL, the personalized model used by each client is a LeNet-style model (Lecun et al., 1998) with two convolutional layers and three fully connected layers.
For a fair comparison we use this model for PeFLL as well as for all reported baselines. PeFLL makes use of an embedding network and a hypernetwork to generate this personalized client model. For our experiments the hypernetwork is a three-layer fully connected network which takes as input a client descriptor vector, \(v\in\mathbb{R}^{l}\), and outputs the parameters of the client model, \(\theta\in\mathbb{R}^{d}\). Note that the final layer of the client model predicts across all classes in the dataset. In case a client knows which classes it wishes to predict (e.g., from its training data), it can select only the outputs for those classes. For the embedding network we tested two options: a single linear projection which takes as input a one-hot encoded label vector, and a LeNet-type ConvNet with the same architecture as the client models except that its input is extended by \(C\) extra channels that encode the label in one-hot form. The choice of such small models for our embedding network is consistent with the fact that the embedding network must be transmitted to the client. We find that for CIFAR10 and FEMNIST the ConvNet embedding network produces the best results, while for CIFAR100 the linear embedding is best, and these are the results we report. **Training Details** We train all methods, except Local, for 5000 rounds with partial client participation. For CIFAR10 and CIFAR100 client participation is set to \(5\%\) per round. For FEMNIST we fix the number of clients participating per round to 5. The Local baseline trains on each client independently for 200 epochs. The hyperparameters for all methods are tuned using validation data that was held out from the training set (10,000 samples for CIFAR10 and CIFAR100, spread across the clients, and 10% of each client's data for FEMNIST). The optimizer used for training at the client is SGD with a batch size of 32, a learning rate chosen via grid search, and momentum set to 0.9. More details of the hyperparameter selection for each method are provided in Appendix C. ### Results Table 1 shows the results for PeFLL and the baseline methods. In all cases, we report the test set accuracy on the clients that were used for training (Table 1, top) and on new clients that were not part of the training process (Table 1, bottom). The results show that PeFLL achieves the best results in all cases, often by a large margin. The improvements over previous methods are most prominent for the models produced for previously unseen clients, where PeFLL produces results of almost identical accuracy as for the clients used for training. This is especially remarkable in light of the fact that several of the other methods have computationally more expensive procedures for generating models in this setting than PeFLL, in particular requiring on-client or even federated training to produce the personalized models. We see this result as strong evidence that PeFLL successfully generalizes, as predicted by Theorem 4.2. Comparing PeFLL's results to the most similar baseline, pFedHN, one observes that the latter's performance decreases noticeably when the number of clients increases and the number of samples per client decreases accordingly. We attribute this to the fact that pFedHN learns independent client descriptors for each client, which can become unreliable if only a few training examples are available per client.
Similarly, for Per-FedAvg, pFedMe and FedRep, which construct personalized models by local finetuning, the model accuracy drops when the amount of data per client decreases, especially in the more challenging CIFAR100 setup. kNN-Per maintains good generalization from train to unseen clients; however, its performance also drops when the number of samples per client decreases, because the kNN-based predictor then has less client data available. In contrast, PeFLL's performance remains stable, which we attribute to the fact that it learns a shared embedding and hypernetwork from all available data, and does not need to use the new client data for finetuning or prediction, but rather just to generate a client descriptor.

Table 1: Experimental results on standard pFL benchmarks. In all settings, PeFLL achieves clearly higher accuracy than previous methods, with no or almost no drop in accuracy between clients used for training (top table) and previously unseen clients (bottom table).

**Client Descriptors** Our work relies on the hypothesis that clients with similar data distributions should obtain similar descriptors, such that the hypernetwork then produces similar models for them. To study this hypothesis empirically, we create clients of different similarity to each other in the following way. Let \(C\) denote the number of classes and \(n\) the number of clients. Then, for each client \(i\) we sample a vector of class proportions, \(\pi_{i}\in\Delta^{C}\), from a Dirichlet distribution \(\text{Dir}(\mathbf{\alpha})\) with parameter vector \(\mathbf{\alpha}=(0.1,\ldots,0.1)\), where \(\Delta^{C}\) is the unit simplex of dimension \(C\). Each client then receives a dataset, \(S_{i}\), of samples from each of the \(C\) classes according to its class proportion vector \(\pi_{i}\). We randomly split the clients into train and unseen clients, and we run PeFLL on the train clients. Throughout training we periodically use the embedding network to compute all client descriptors in order to track how they evolve. For any pair of such clients, we compute two similarity measures: \(d_{ij}^{\pi}=\|\pi_{i}-\pi_{j}\|\), which provides a ground truth notion of similarity between client distributions, and \(d_{ij}^{v}=\|v(S_{i})-v(S_{j})\|\), which measures the similarity between client descriptors. To measure the extent to which \(d^{v}\) reflects \(d^{\pi}\), we use their average rank correlation in the following way. For any unseen client \(i\) we form a vector of similarities \(d_{i}^{v}=(d_{i1}^{v},\ldots,d_{in}^{v})\in\mathbb{R}^{n}\), and compute its Spearman rank correlation coefficient (Spearman, 1904) with the corresponding ground truth vector \(d_{i}^{\pi}=(d_{i1}^{\pi},\ldots,d_{in}^{\pi})\in\mathbb{R}^{n}\). Figure 3 shows the average of this value across the unseen clients after different numbers of training steps (CIFAR10 dataset; 100, 500 or 1000 clients in total). One can see that the rank correlation increases over the course of training, reaching very high correlation values of 0.85 to 0.93. This indicates that the embedding network indeed learns to organize the descriptor space according to client distributional similarity, with similar clients being mapped to similar descriptors. Moreover, because these results are obtained from unseen clients, it is clear that the effect does not reflect potential overfitting of the embedding network to the training clients, but that the learned similarity indeed generalizes well.

Figure 3: Correlation between _client descriptor similarity_ obtained from the embedding network and _ground truth similarity_ over the course of training (CIFAR10 dataset).
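The rank-correlation evaluation just described is straightforward to reproduce. The sketch below is an illustration only (it assumes numpy/scipy and that the class-proportion vectors \(\pi_{i}\) and the descriptors \(v(S_{i})\) are already available as arrays); it is not the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr

def descriptor_rank_correlation(pi_unseen, v_unseen, pi_train, v_train):
    """Average Spearman rank correlation between descriptor distances
    d^v_ij = ||v_i - v_j|| and ground-truth distances d^pi_ij = ||pi_i - pi_j||,
    computed from each unseen client to all train clients."""
    corrs = []
    for pi_i, v_i in zip(pi_unseen, v_unseen):
        d_pi = np.linalg.norm(pi_train - pi_i, axis=1)  # ground-truth similarities
        d_v = np.linalg.norm(v_train - v_i, axis=1)     # descriptor similarities
        corrs.append(spearmanr(d_v, d_pi).correlation)
    return float(np.mean(corrs))
```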
**Client Extrapolation** For all experiments so far, clients at training time and new clients were related in the sense that they followed the same (meta-)distribution over clients. In this section, we examine how well PeFLL is able to generalize beyond this, by studying its _extrapolation_ performance, where the new clients come from a different client distribution than the train clients. We follow the same procedure as in the previous section to simulate heterogeneous clients and use sampled Dirichlet class proportions to assign classes. For the train clients we again use \(\mathbf{\alpha}_{\text{train}}=(0.1,\ldots,0.1)\) to generate each client's class proportion vector. However, for the new clients we use a different Dirichlet parameter \(\mathbf{\alpha}_{\text{new}}=(\alpha,\ldots,\alpha)\). For each \(\alpha\in\{0.1,0.2,0.3,\ldots,1.0\}\) we generate a group of new clients using this parameter. We run PeFLL on the train clients and then use the trained embedding and hypernetworks to generate a model for each of the new clients. Figure 4 shows the resulting accuracy values for PeFLL and the best performing baselines of Table 1. As a reference, we also include the result of purely local training on the new clients. Note that as \(\alpha\) increases, so does the difficulty of the client problem, as illustrated by the fact that the accuracy of purely local training decreases. Despite this increased task difficulty and distributional difference, PeFLL still obtains strong results. Even at \(\alpha=1\), PeFLL produces models that are more accurate than those learned by the other methods for smaller values of \(\alpha\), and far superior to purely local training on the new clients. ## 6 Conclusion In this work, we presented PeFLL, a new lifelong learning approach to personalized federated learning. By means of an embedding network that creates inputs to a hypernetwork, it efficiently generates personalized models, both for clients present during training and for new clients that appear later. PeFLL has several desirable properties for real-world usability: it is stateless and does not require additional training to obtain models for new clients, which makes it practical to use in large-scale applications with many clients, each possessing little data. It is efficient in terms of computation, with the most computationally costly operations being performed by the server, as well as in terms of communication, as it avoids transmitting the large hypernetwork model. It avoids delays, as clients can immediately utilize their models without further training or fine-tuning, and it stands on a solid theoretical foundation in terms of convergence and generalization. **Limitations** Despite the promising theoretical and experimental results, PeFLL also has some remaining limitations. In particular, even though PeFLL's training procedure avoids the need to transmit large-scale models between clients and the server, it does require multiple messages to be exchanged per round. Also, our analysis did not focus on formal privacy guarantees, as they could be achieved, e.g., by the integration of differential privacy.
We believe that PeFLL is a natural candidate for this, as a common technique for achieving differential privacy is to add suitably scaled randomness to intermediate results, which is also the mechanism underlying the generalization guarantees of Theorem 4.2.
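For completeness, the inference path for a new client (a single forward pass through the embedding network and the hypernetwork, with no training) can be sketched as follows. The layer sizes, module names and the linear embedding below are illustrative placeholders only, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearEmbedding(nn.Module):
    """Toy stand-in for the embedding network: maps (x, one-hot y) pairs to
    an l-dimensional descriptor, averaged over the client's labelled data."""
    def __init__(self, in_dim, n_classes, l):
        super().__init__()
        self.n_classes = n_classes
        self.proj = nn.Linear(in_dim + n_classes, l)

    def forward(self, x, y):
        y_onehot = F.one_hot(y, self.n_classes).float()
        return self.proj(torch.cat([x, y_onehot], dim=1)).mean(dim=0)

class HyperNet(nn.Module):
    """Three-layer MLP mapping a client descriptor v to flattened model weights."""
    def __init__(self, l, hidden, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(l, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d))

    def forward(self, v):
        return self.net(v)

# New-client inference: no gradient steps and no extra communication rounds.
embed, hyper = LinearEmbedding(in_dim=32, n_classes=10, l=16), HyperNet(16, 64, 128)
x_new, y_new = torch.randn(20, 32), torch.randint(0, 10, (20,))
theta_new = hyper(embed(x_new, y_new))  # parameters of the new client's model
```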
2307.12340
Visco-elastic damped wave models with time-dependent coefficient
In this paper, we study the following Cauchy problem for linear visco-elastic damped wave models with a general time-dependent coefficient $g=g(t)$: \begin{equation} \label{EqAbstract} \tag{$\star$} \begin{cases} u_{tt}- \Delta u + g(t)(-\Delta)u_t=0, &(t,x) \in (0,\infty) \times \mathbb{R}^n, \\ u(0,x)= u_0(x),\quad u_t(0,x)= u_1(x), &x \in \mathbb{R}^n. \end{cases} \end{equation} We are interested in studying the influence of the damping term $g(t)(-\Delta)u_t$ on qualitative properties of solutions to \eqref{EqAbstract}, such as decay estimates for energies of higher order and the parabolic effect. The main tools are related to WKB-analysis. We apply elliptic as well as hyperbolic WKB-analysis in different parts of the extended phase space.
Halit Sevki Aslan, Michael Reissig
2023-07-23T14:25:12Z
http://arxiv.org/abs/2307.12340v1
# Visco-elastic damped wave models with time-dependent coefficient ###### Abstract In this paper, we study the following Cauchy problem for linear visco-elastic damped wave models with a general time-dependent coefficient \(g=g(t)\): \[\begin{cases}u_{tt}-\Delta u+g(t)(-\Delta)u_{t}=0,&(t,x)\in(0,\infty)\times \mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n}.\end{cases}\] (\(\star\)) We are interested in studying the influence of the damping term \(g(t)(-\Delta)u_{t}\) on qualitative properties of solutions to (\(\star\)), such as decay estimates for energies of higher order and the parabolic effect. The main tools are related to WKB-analysis. We apply elliptic as well as hyperbolic WKB-analysis in different parts of the extended phase space. keywords: wave equation; visco-elastic damping; higher order energies; WKB-analysis; parabolic effect. Msc: [2020] 35L30, 35B40, 35L15, 35L05. ## 1 Introduction ### Historical remarks It is well-known from the theory of visco-elasticity that visco-elastic materials undergoing deformation exhibit dual properties of viscosity and elasticity, which can keep the memory of their entire history and show natural damping. Therefore, the study of visco-elastic mechanical equations has wide application in the natural sciences and has become an important area of research. Let us first recall some historical background for the following Cauchy problem with a time-dependent structural damping: \[\begin{cases}u_{tt}-\Delta u+g(t)(-\Delta)^{\delta}u_{t}=0,&(t,x)\in(0,\infty) \times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n},\end{cases} \tag{1}\] with \(\delta\in(0,1)\). This model was recently studied by several authors; in particular, see [13; 14] for \(\delta=0\). In [9], the authors studied the Cauchy problem (1) with a decreasing time-dependent coefficient \(g(t)=\mu(1+t)^{-\alpha}\), where \(\mu>0\) and \(\alpha\in[0,1]\). They studied the decay behavior of the energies of higher order of solutions and determined decay rates depending on the order of the energy. If higher order energies decay faster with increasing order, then the authors called this effect the "parabolic effect". Later, in [11] the author studied the Cauchy problem (1) with a strictly increasing in time coefficient \(g(t)=\mu(1+t)^{\alpha}\), where \(\mu>0\) and \(\alpha\in(0,1]\). In [5], the authors considered wave models of (1) with a scale-invariant coefficient \(g=g(t)\), that is, \(g(t)=\mu(1+t)^{2\delta-1}\) with \(\mu>0\) and \(\delta\in(0,1]\). They proved the optimality of their decay estimates and studied smoothing effects for solutions to structurally damped waves. Let us also briefly mention some recent contributions related to the following Cauchy problem, the so-called _structurally damped \(\sigma\)-evolution equation with time-dependent dissipation_: \[\begin{cases}u_{tt}+(-\Delta)^{\sigma}u+g(t)(-\Delta)^{\delta}u_{t}=0,&(t,x) \in(0,\infty)\times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n},\end{cases} \tag{2}\] with \(\sigma>1\) and \(\delta\in(0,\sigma)\). The authors in [6; 7; 8] developed WKB-analysis to derive an explicit representation formula based on Fourier multipliers for solutions when the dissipation coefficient \(g=g(t)\) is supposed to be a monotonous function.
Using the obtained representation formula, they derived some \(L^{2}-L^{2}\) decay estimates and \(L^{p}-L^{q}\) estimates on the conjugate line for the energies of higher order. Moreover, some qualitative properties of energy solutions such as parabolic and smoothing effects were explained in detail. Coming back to the particular case of \(g(t)=\mu(1+t)^{-\alpha}\) with a constant \(\mu>0\) and \(\alpha\in(-1,1)\), a classification between effective damping and non-effective damping, which strongly depends on the parameters \(\sigma\), \(\delta\) and \(\alpha\), was introduced in [1]. Their main goal is to study the asymptotic profile of solutions to (2) and simultaneously to clarify that in the effective damping case a diffusion phenomenon occurs. Physical motivations for the role played by a time-dependent structural damping term can be found in [4]. For more details about the qualitative properties of solutions to wave models with visco-elastic damping and constant coefficients, we refer to Section 14.3 of [3] and the pioneering works [10; 12]. ### Main purpose of the paper This paper is concerned with studying the following Cauchy problem for the linear visco-elastic damped wave equation with a time-dependent coefficient \(g=g(t)\): \[\begin{cases}u_{tt}-\Delta u+g(t)(-\Delta)u_{t}=0,&(t,x)\in(0,\infty)\times \mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n}.\end{cases} \tag{3}\] Here \(g=g(t)\) is a time-dependent coefficient satisfying some appropriate conditions such as positivity, continuity, monotonicity, and some behavior of derivatives. The role played by the visco-elastic damping in (3) varies with the choice of the coefficient \(g=g(t)\), and different approaches are required to study its influence on the asymptotic profile of the solution as \(t\to\infty\). To the best of the authors' knowledge, it seems that wave models with a general time-dependent coefficient in the visco-elastic damping term have not been studied before. Our main goal is to study decay rates of energies of higher order for solutions to the Cauchy problem (3) with a general time-dependent coefficient in the visco-elastic damping. These estimates rely on the structural properties of representations of solutions. Our approach is based on asymptotic representations combined with an extended phase space analysis under some assumptions, mostly adapted from the treatment of problems in WKB-analysis. We divide our considerations into the following cases of the time-dependent coefficients: * Models with increasing time-dependent coefficient \(g=g(t)\) in Section 2. * Models with integrable and decreasing time-dependent coefficient \(g=g(t)\) in Section 3. * Models with non-integrable and decreasing time-dependent coefficient \(g=g(t)\) in Section 4. * Models with non-integrable and slowly increasing time-dependent coefficient \(g=g(t)\) in Section 5. We will treat all four cases separately. After explaining how WKB-analysis should be applied to each of the cases, we sometimes shorten calculations and refer to older papers where the necessary calculations are given in detail to treat other models. In this way we try to shorten the paper without losing readability. **Notations** * We write \(f\lesssim g\) when there exists a constant \(C>0\) such that \(f\leq Cg\), and \(f\approx g\) when \(g\lesssim f\lesssim g\). * By \(f\sim g\) we denote \(\lim_{t\to\infty}\dfrac{f(t)}{g(t)}=1\), that is, \(f\) and \(g\) have the same asymptotic behavior.
* As usual, the spaces \(H^{a}\) and \(H^{a}\) with \(a\geq 0\) stand for Bessel and Riesz potential spaces based on the \(L^{2}\) space. Here \(\langle D\rangle^{a}\) and \(|D|^{a}\) denote the pseudo-differential operator with symbol \(\langle\xi\rangle^{a}\) and the fractional Laplace operator with symbol \(|\xi|^{a}\), respectively. * (\(|\,\cdot\,|\)) denotes the matrix of absolute values of its entries for a given matrix. ### Our approach We apply the partial Fourier transformation with respect to spatial variables to the Cauchy problem (3). So we get that \(\hat{u}(t,\xi)=\mathcal{F}_{x\to\xi}\left(u(t,x)\right)\) solves \[\begin{cases}\hat{u}_{tt}+|\xi|^{2}\hat{u}+g(t)|\xi|^{2}\hat{u}_{t}=0,&(t,\xi) \in(0,\infty)\times\mathbb{R}^{n},\\ \hat{u}(0,\xi)=\hat{u}_{0}(\xi),\quad\hat{u}_{t}(0,\xi)=\hat{u}_{1}(\xi),&\xi \in\mathbb{R}^{n}.\end{cases} \tag{4}\] We apply the change of variables \[\hat{u}(t,\xi)=\exp\bigg{(}-\frac{1}{2}\int_{0}^{t}g(\tau)|\xi|^{2}d\tau\bigg{)} \nu(t,\xi)\] to arrive at \[\begin{cases}v_{tt}+|\xi|^{2}\bigg{(}1-\frac{g(t)^{2}|\xi|^{2}}{4}-\frac{g^{ \prime}(t)}{2}\bigg{)}\nu=0,\quad(t,\xi)\in(0,\infty)\times\mathbb{R}^{n},\\ v(0,\xi)=v_{0}(\xi),\quad v_{t}(0,\xi)=v_{1}(\xi),\qquad\xi\in\mathbb{R}^{n}, \end{cases} \tag{5}\] where \[v_{0}(\xi)=\hat{u}_{0}(\xi)\quad\text{and}\quad v_{1}(\xi)=\frac{g(0)}{2}|\xi |^{2}\hat{u}_{0}(\xi)+\hat{u}_{1}(\xi).\] Examples for \(g=g(t)\) are \[\begin{array}{llll}\bullet&g(t)=e^{t},&\bullet&g(t)=e^{-t},&\bullet&g(t)=(C_ {d}+t)^{d}\ \ \text{with}\ \ d\in\mathbb{R},\\ \bullet&g(t)=(1+t)\log(e+t),&\bullet&g(t)=\mu(1+t)\ \ \text{with}\ \ \mu>0,\\ \bullet&g(t)=e^{e^{t}},&\bullet&g(t)=e^{-e^{t}}.\end{array}\] To study (5) we analyze \[1-\frac{g(t)^{2}|\xi|^{2}}{4}-\frac{g^{\prime}(t)}{2}=0.\] This equation divides the extended phase space \((0,\infty)\times\mathbb{R}^{n}\) into two regions, the hyperbolic region \(\Pi_{hyp}\) and the elliptic region \(\Pi_{ell}\), as follows: \[\Pi_{hyp}=\bigg{\{}(t,\xi):1-\frac{g(t)^{2}|\xi|^{2}}{4}-\frac{g^{\prime}(t)}{ 2}>0\bigg{\}},\qquad\Pi_{ell}=\bigg{\{}(t,\xi):1-\frac{g(t)^{2}|\xi|^{2}}{4}- \frac{g^{\prime}(t)}{2}<0\bigg{\}}.\] The assumptions for \(g=g(t)\) are organized in such a way that we can define a separating line \[t_{\xi}=t(|\xi|)=\bigg{\{}(t,\xi)\in(0,\infty)\times\mathbb{R}^{n}:1-\frac{g( t)^{2}|\xi|^{2}}{4}-\frac{g^{\prime}(t)}{2}=0\bigg{\}}.\] We are going to consider the following two cases: _Application of elliptic WKB-analysis_: We assume that \[1-\frac{g^{\prime}(t)}{2}=-h(t)^{2}g(t)^{2}\ \ \text{for all}\ \ t>0,\] where \(h=h(t)\) is a positive function for all \(t>0\). This function has to satisfy usual properties to carry out steps from WKB-analysis. In this case the equation (5) becomes \[v_{tt}-g(t)^{2}|\xi|^{2}\bigg{(}h(t)^{2}+\frac{|\xi|^{2}}{4}\bigg{)}\nu=0. \tag{6}\] So, we have only to apply tools from elliptic WKB-analysis to get WKB-representations of solutions. **Example 1.1**.: _Let us choose \(g(t)=3e^{t}\). We study (6) with the function \(h=h(t)=\frac{\sqrt{3e^{t}-2}}{3\sqrt{2e^{t}}}\)._ **Example 1.2**.: _Let us choose \(g(t)=3e^{e^{t}}\). We study (6) with the function \(h=h(t)=\frac{\sqrt{3\log(e+t)+3\frac{1+t}{\mu\mu}-2}}{3\sqrt{2}(1+t)\log(e+t)}\)._ **Example 1.3**.: _Let us choose \(g(t)=(C_{d}+t)^{d}\) with \(d>1\) and \(C_{d}=4^{\frac{1}{d}}\) for example. We study (6) with the function \(h=h(t)=\frac{\sqrt{(C_{d}+t)^{d}-2}}{\sqrt{2}(C_{d}+t)^{d}}\)._ **Example 1.4**.: _Let us choose \(g(t)=3(1+t)\log(e+t)\). 
We study (6) with the function \(h=h(t)=\frac{\sqrt{3\log(e+t)+3\frac{1+t}{\mu\mu}-2}}{3\sqrt{2}(1+t)\log(e+t)}\)._ **Example 1.5**.: _Let us choose \(g(t)=\mu(1+t)\) with \(\mu>2\). We study (6) with the function \(h=h(t)=\sqrt{\frac{\mu-2}{2\mu^{2}}}\frac{1}{1+t}\)._ _Application of elliptic and hyperbolic WKB-analysis_: We assume that \[1-\frac{g^{\prime}(t)}{2}=h(t)^{2}g(t)^{2}\ \ \mbox{for all}\ \ t>0,\] where \(h=h(t)\) is a positive function for all \(t>0\). This function has to satisfy usual properties to carry out steps from WKB-analysis. In this case the equation (5) becomes \[v_{tt}+g(t)^{2}|\xi|^{2}\bigg{(}h(t)^{2}-\frac{|\xi|^{2}}{4}\bigg{)}v=0. \tag{7}\] So, we have to apply tools from elliptic WKB-analysis and from hyperbolic WKB-analysis as well to get WKB-representations of solutions. The hyperbolic and elliptic regions can be easily described by \[\Pi_{hyp}=\bigg{\{}(t,\xi):h(t)>\frac{|\xi|}{2}\bigg{\}},\qquad\Pi_{ell}= \bigg{\{}(t,\xi):h(t)<\frac{|\xi|}{2}\bigg{\}}.\] **Example 1.6**.: _Let us choose \(g(t)=e^{-t}\). We study (7) with the function \(h=h(t)=e^{t^{\prime}\frac{\sqrt{e^{t}-2}}{\sqrt{2}}}\). The separating line is defined by \(|\xi|=2h(t)\) for all \(t>0\). So, the hyperbolic region is very large. The elliptic region is large._ **Example 1.7**.: _Let us choose \(g(t)=e^{-t^{\prime}}\). We study (7) with the function \(h=h(t)=e^{t^{\prime}\frac{\sqrt{e^{t}-e^{\prime}+2}}{\sqrt{2}}}\). The separating line is defined by \(|\xi|=2h(t)\) for all \(t>0\). So, the hyperbolic region is very large. The elliptic region is large._ **Example 1.8**.: _Let us choose \(g(t)=\mu(1+t)\) with \(\mu\in(0,2)\). We study (7) with the function \(h=h(t)=\sqrt{\frac{2-\mu}{2\mu^{2}}}\frac{1}{1+t}\)._ **Example 1.9**.: _Let us choose \(g(t)=(1+t)^{d}\) with \(d<1\). We study (7) with the function \(h=h(t)=\frac{\sqrt{2-d(1+t)^{d-1}}}{\sqrt{2}(1+t)^{d}}\). The separating line is defined by \(|\xi|=2h(t)\) for all \(t>0\). If \(d\in(0,1]\), then the hyperbolic region is small, the elliptic region is very large. If \(d<0\), then the hyperbolic region is very large. The elliptic region is large._ **Example 1.10**.: _We choose \(g(t)=((1+t)\log(e+t))^{-1}\) and study (7) with \(h=h(t)=\frac{\sqrt{\frac{1+t(e+e+2\frac{1+t}{\mu\mu}-2)}{\sqrt{2}(1+t)\log(e+t) ^{d}}+2}}{\sqrt{2}(1+t)\log(e+t)^{d}}\). The separating line is defined by \(|\xi|=2h(t)\) for all \(t>0\). So, the hyperbolic region is very large. The elliptic region is large._ ## 2 Models with increasing time-dependent coefficient \(g=g(t)\) We assume the following properties of the function \(g=g(t)\): **(A1)**: \(g(t)>0\) and \(g^{\prime}(t)>0\) for all \(t\in[0,\infty)\), **(A2)**: \(\frac{1}{g}\in L^{1}(0,\infty)\), **(A3)**: \(|d_{t}^{k}g(t)|\leq C_{k}g(t)\Big{(}\frac{g(t)}{G(t)}\Big{)}^{k}\) for all \(t\in[0,\infty)\), \(k=1,2\), where \(G(t):=\frac{1}{2}\int_{0}^{t}g(\tau)d\tau\) and \(C_{1}\), \(C_{2}\) are positive constants. **Theorem 2.1**.: _Let us consider the Cauchy problem_ \[\begin{cases}u_{n}-\Delta u+g(t)(-\Delta)u_{t}=0,&(t,x)\in(0,\infty) \times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n}.\end{cases}\] _We assume that the coefficient \(g=g(t)\) satisfies the conditions **(A1)** to **(A3)** and \((u_{0},u_{1})\in\dot{H}^{|\beta|}\times\dot{H}^{|\beta|-2}\) with \(|\beta|\geq 2\). 
Then, we have the following estimates for Sobolev solutions:_ \[\left\|\left|D\right|^{|\beta|}u(t,\cdot)\right\|_{L^{2}} \lesssim\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot{H}^{|\beta| -2}},\] \[\left\|\left|D\right|^{|\beta|-2}u_{t}(t,\cdot)\right\|_{L^{2}} \lesssim g(t)\Big{(}\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot {H}^{|\beta|-2}}\Big{)}.\] **Remark 2.2**.: _The statements of Theorem 2.1 imply that we do not have any parabolic effect, that is, higher order energies do not decay faster with increasing order. Theorem 2.1 can be applied to Examples 1.1, 1.2 and 1.3._ Proof of Theorem 2.1.: We write equation (6) in the form \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v+\Big{(}\frac{g^{\prime}( t)}{2}-1\Big{)}|\xi|^{2}v=0. \tag{8}\] The influence of the term \(\frac{g^{\prime}(t)}{2}|\xi|^{2}v\) is dominant to the influence of the term \(-|\xi|^{2}v\). We divide the extended phase space \([0,\infty)\times\mathbb{R}^{n}\) into zones as follows: * pseudodifferential zone: \[Z_{\rm pd}(N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}: G(t)|\xi|^{2}\leq N\Big{\}},\] * elliptic zone: \[Z_{\rm ell}(N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}: G(t)|\xi|^{2}\geq N\Big{\}},\] where \(N>0\) is sufficiently large. The separating line \(t_{\xi}=n(|\xi|)\) is defined by \[t_{\xi}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:G(t)| \xi|^{2}=N\Big{\}}.\] Figure 1: Sketch of the zones for the case \(g=g(t)\) is increasing ### Considerations in the elliptic zone \(Z_{\text{ell}}(N)\) Let us introduce the following family of symbol classes in the elliptic zone \(Z_{\text{ell}}(N)\). **Definition 2.3**.: _A function \(f=f(t,\xi)\) belongs to the elliptic symbol class \(S^{\ell}_{\text{ell}}[m_{1},m_{2}]\) if it holds_ \[|D^{k}_{t}f(t,\xi)|\leq C_{k}(|\xi|^{2}g(t))^{m_{1}}\Big{(}\frac{g(t)}{G(t)} \Big{)}^{m_{2}+k}\] _for all \((t,\xi)\in Z_{\text{ell}}(N)\) and all \(k\leq\ell\)._ Some useful rules of the symbolic calculus are collected in the following proposition. **Proposition 2.4**.: _The following statements are true:_ * \(S^{\ell}_{\text{ell}}[m_{1},m_{2}]\) _is a vector space for all nonnegative integers_ \(\ell\)_;_ * \(S^{\ell}_{\text{ell}}[m_{1},m_{2}]\cdot S^{\ell}_{\text{ell}}[m^{\prime}_{1}, m^{\prime}_{2}]\hookrightarrow S^{\ell}_{\text{ell}}[m_{1}+m^{\prime}_{1},m_{2}+m^{ \prime}_{2}]\)_;_ * \(D^{k}_{t}S^{\ell}_{\text{ell}}[m_{1},m_{2}]\hookrightarrow S^{\ell-k}_{\text{ell }}\{m_{1},m_{2}+k\}\) _for all nonnegative integers_ \(\ell\) _with_ \(k\leq\ell\)_;_ * \(S^{0}_{\text{ell}}\{-1,2\}\hookrightarrow L^{\infty}_{\xi}L^{1}_{t}(Z_{\text{ell }}(N))\)_._ Proof.: We only verify the integrability statement. Indeed, if \(f=f(t,\xi)\in S^{0}_{\text{ell}}\{-1,2\}\), then it holds \[\int_{t_{\xi}}^{\infty}|f(\tau,\xi)|d\tau\lesssim\int_{t_{\xi}}^{\infty}\frac {1}{|\xi|^{2}g(\tau)}\Big{(}\frac{g(\tau)}{G(\tau)}\Big{)}^{2}d\tau\leq\frac{ C}{G(t_{\xi})|\xi|^{2}}=\frac{C}{N},\] where we used the definition of the separating line \(t_{\xi}\). With \(\gamma=\gamma(t,\xi):=\frac{g(t)}{2}|\xi|^{2}\), we define the micro-energy \(V(t,\xi):=\big{(}\gamma(t,\xi)v(t,\xi),D_{t}v(t,\xi)\big{)}^{\text{T}}\). 
Then, by (8) we obtain that \(V=V(t,\xi)\) satisfies the following system of first order: \[D_{t}V=\underbrace{\left[\left(\begin{array}{cc}0&\frac{g(t)}{2}|\xi|^{2} \\ -\frac{g(t)}{2}|\xi|^{2}&0\end{array}\right)+\left(\begin{array}{cc}\frac{D_ {t}g(t)}{g(t)}&0\\ -\frac{g^{\prime}(t)-2}{g(t)}&0\end{array}\right)\right]}_{A_{V}}V \tag{9}\] with the initial condition \(V(0,\xi)=\big{(}\gamma(0,\xi)v(0,\xi),D_{t}v(0,\xi)\big{)}^{\text{T}}\). We want to estimate the fundamental solution \(E_{V}=E_{V}(t,s,\xi)\) to the system (9), namely, the solution to \[D_{t}E_{V}(t,s,\xi)=A_{V}(t,\xi)E_{V}(t,s,\xi),\quad E_{V}(s,s,\xi)=I\quad \text{for any}\quad t\geq s\geq t_{\xi}.\] **Step 1.**_Diagonalization procedure_ We denote by \(M\) the matrix consisting of eigenvectors of the first matrix on the right-hand side and its inverse matrix \[M=\left(\begin{array}{cc}i&-i\\ 1&1\end{array}\right),\qquad M^{-1}=\frac{1}{2}\left(\begin{array}{cc}-i&1 \\ i&1\end{array}\right).\] Then, defining \(V^{(0)}:=M^{-1}V\) we get the system \[D_{t}V^{(0)}=\big{(}\mathcal{D}(t,\xi)+\mathcal{R}(t)\big{)}V^{(0)},\] where \[\mathcal{D}(t,\xi)=\left(\begin{array}{cc}-i\frac{g(t)}{2}|\xi|^{2}&0\\ 0&i\frac{g(t)}{2}|\xi|^{2}\end{array}\right)\qquad\text{and}\qquad\mathcal{R}( t)=\frac{1}{2}\left(\begin{array}{cc}\frac{D_{t}g(t)}{2g(t)}-i\frac{g^{ \prime}(t)-2}{2g(t)}&-\frac{D_{t}g(t)}{2g(t)}+i\frac{g^{\prime}(t)-2}{2g(t)}\\ -\frac{D_{t}g(t)}{2g(t)}-i\frac{g^{\prime}(t)-2}{2g(t)}&\frac{D_{t}g(t)}{2g(t)}+ i\frac{g^{\prime}(t)-2}{2g(t)}\end{array}\right),\] where \(\mathcal{D}(t,\xi)\in S^{2}_{\rm ell}[1,0]\) and \(\mathcal{R}(t)\in S^{1}_{\rm ell}[0,1]\). Let us introduce \(F_{0}(t)={\rm diag}\,\mathcal{R}(t)\). Now we carry out the next step of diagonalization procedure. The difference of the diagonal entries of the matrix \(\mathcal{D}(t,\xi)+F_{0}(t)\) is \[i\delta(t,\xi):=g(t)|\xi|^{2}+\frac{g^{\prime}(t)-2}{g(t)}\sim g(t)|\xi|^{2}\] for \(t\geq t_{\xi}\) if we choose the zone constant \(N\) sufficiently large and apply condition **(A3)**. Now we choose a matrix \(N^{(1)}=N^{(1)}(t,\xi)\) such that \[N^{(1)}(t,\xi)=\left(\begin{array}{cc}0&-\dfrac{\mathcal{R}_{12}}{\delta(t, \xi)}\\ \dfrac{\mathcal{R}_{21}}{\delta(t,\xi)}&0\end{array}\right)\sim\left(\begin{array} []{cc}0&i\dfrac{D_{t}g(t)}{4g^{2}(t)|\xi|^{2}}-\dfrac{g^{\prime}(t)-2}{4g^{2 }(t)|\xi|^{2}}\\ i\dfrac{D_{t}g(t)}{4g^{2}(t)|\xi|^{2}}+\dfrac{g^{\prime}(t)-2}{4g^{2}(t)|\xi|^{ 2}}&0\end{array}\right).\] Taking into consideration the rules of the symbolic calculus we have \[N^{(1)}(t,\xi)\in S^{1}_{\rm ell}[-1,1]\qquad\text{and}\qquad N_{1}(t,\xi)=I+N ^{(1)}(t,\xi)\in S^{1}_{\rm ell}[0,0].\] For a sufficiently large zone constant \(N\) and all \(t\geq t_{\xi}\) the matrix \(N_{1}=N_{1}(t,\xi)\) is invertible with uniformly bounded inverse \(N_{1}^{-1}=N_{1}^{-1}(t,\xi)\). Indeed, in the elliptic zone \(Z_{\rm ell}(N)\) it holds \[|N_{1}(t,\xi)-I|\leq\frac{C}{g(t)|\xi|^{2}}\frac{g(t)}{G(t)}=\frac{C}{G(t)|\xi |^{2}}\leq\frac{C}{N}.\] Let \[B^{(1)}(t,\xi) =D_{t}N^{(1)}(t,\xi)-(\mathcal{R}(t)-F_{0}(t,\xi))N^{(1)}(t,\xi),\] \[\mathcal{R}_{1}(t,\xi) =-N_{1}^{-1}(t,\xi)B^{(1)}(t,\xi)\in S^{0}_{\rm ell}[-1,2],\] where \(N_{1}(t,\xi)=I+N^{(1)}(t,\xi)\). 
Then, we have the following operator identity: \[\big{(}D_{t}-\mathcal{D}(t,\xi)-\mathcal{R}(t)\big{)}N_{1}(t,\xi)=N_{1}(t,\xi )\big{(}D_{t}-\mathcal{D}(t,\xi)-F_{0}(t)-\mathcal{R}_{1}(t,\xi)\big{)}.\] **Step 2.**_Construction of the fundamental solution_ **Proposition 2.5**.: _The fundamental solution \(E^{V}_{ell}=E^{V}_{ell}(t,s,\xi)\) to the transformed operator_ \[D_{t}-\mathcal{D}(t,\xi)-F_{0}(t)-\mathcal{R}_{1}(t,\xi)\] _can be estimated by_ \[\big{(}|E^{V}_{ell}(t,s,\xi)|\big{)}\lesssim\frac{g(t)}{g(s)}\exp\left(\frac{ |\xi|^{2}}{2}\int_{s}^{t}g(\tau)d\tau\right)\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right),\] _with \((t,\xi),(s,\xi)\in Z_{ell}(N)\), \(t_{\xi}\leq s\leq t\)._ Proof.: To prove this proposition we can follow the proof to Theorem 15 of [14]. Now let us come back to \[V(t,\xi)=E_{V}(t,s,\xi)V(s,\xi),\quad\text{that is,}\quad\left(\begin{array}[] {c}\gamma(t,\xi)v(t,\xi)\\ D_{t}v(t,\xi)\end{array}\right)=E_{V}(t,s,\xi)\left(\begin{array}{c}\gamma(s,\xi)v(s,\xi)\\ D_{t}v(s,\xi)\end{array}\right)\quad\text{for}\quad t_{\xi}\leq s\leq t. \tag{10}\] Therefore, from Proposition 2.5, the backward transformations to the above used transformations and (10) we may conclude the following estimates for \(t_{\xi}\leq s\leq t\): \[\gamma(t,\xi)|v(t,\xi)| \lesssim\frac{g(t)}{g(s)}\exp\left(\frac{|\xi|^{2}}{2}\int_{s}^{t} g(\tau)d\tau\right)\big{(}\gamma(s,\xi)|v(s,\xi)|+|v_{r}(s,\xi)|\big{)},\] \[|v_{r}(t,\xi)| \lesssim\frac{g(t)}{g(s)}\exp\left(\frac{|\xi|^{2}}{2}\int_{s}^{t} g(\tau)d\tau\right)\big{(}\gamma(s,\xi)|v(s,\xi)|+|v_{r}(s,\xi)|\big{)}.\] Using the backward transformation \(v(t,\xi)=\exp\left(\frac{|\xi|^{2}}{2}\int_{0}^{t}g(\tau)d\tau\right)\!\hat{u}(t,\xi)\), we arrive immediately at the following result. **Corollary 2.6**.: _We have the following estimates in the elliptic zone \(Z_{\rm{e}l}(N)\) for \(t_{\xi}\leq s\leq t\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim|\xi|^{|\beta|}|\hat{u}(s, \xi)|+\frac{1}{g(s)}|\xi|^{|\beta|-2}|\hat{u}_{t}(s,\xi)|\quad\text{for}\quad| \beta|\geq 2,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim g(t)|\xi|^{|\beta|+2}| \hat{u}(s,\xi)|+\frac{g(t)}{g(s)}|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\quad\text{ for}\quad|\beta|\geq 0.\] ### Considerations in the pseudo-differential zone \(Z_{\rm{e}l}(N)\) We define the micro-energy \(U=\left(\gamma(t,\xi)\hat{u},D_{t}\hat{u}\right)^{\rm{T}}\) with \(\gamma(t,\xi):=\frac{g(t)}{2}|\xi|^{2}\). Then, the Cauchy problem (4) leads to the system of first order \[D_{t}U=\underbrace{\left(\begin{array}{cc}\frac{D_{t}\gamma(t, \xi)}{\gamma(t,\xi)}&\gamma(t,\xi)\\ \frac{|\xi|^{2}}{\gamma(t,\xi)}&ig(t)|\xi|^{2}\end{array}\right)}_{A(t,\xi)}U. \tag{11}\] We are interested in the fundamental solution \(E_{\rm{pd}}=E_{\rm{pd}}(t,s,\xi)\) to the system (11), that is, the solution of \[D_{t}E_{\rm{pd}}(t,s,\xi)=A(t,\xi)E_{\rm{pd}}(t,s,\xi),\quad E_{\rm{pd}}(s,s, \xi)=I,\] for all \(0\leq s\leq t\) and \((t,\xi),(s,\xi)\in Z_{\rm{pd}}(N)\). 
Thus, the solution \(U=U(t,\xi)\) is represented as \[U(t,\xi)=E_{\rm{pd}}(t,s,\xi)U(s,\xi).\] We will use the auxiliary function \[\delta=\delta(t,\xi)=\exp\left(\frac{|\xi|^{2}}{2}\int_{0}^{t}g(\tau)d\tau \right)=\exp\left(|\xi|^{2}G(t)\right)\lesssim 1.\] The entries \(E_{\rm{pd}}^{(k\ell)}(t,s,\xi)\), \(k,\ell=1,2\), of the fundamental solution \(E_{\rm{pd}}(t,s,\xi)\) satisfy the following system for \(\ell=1,2\): \[D_{t}E_{\rm{pd}}^{(1\ell)}(t,s,\xi) = \frac{D_{t}\gamma(t,\xi)}{\gamma(t,\xi)}E_{\rm{pd}}^{(1\ell)}(t,s,\xi)+\gamma(t,\xi)E_{\rm{pd}}^{(2\ell)}(t,s,\xi),\] \[D_{t}E_{\rm{pd}}^{(2\ell)}(t,s,\xi) = \frac{|\xi|^{2}}{\gamma(t,\xi)}E_{\rm{pd}}^{(1\ell)}(t,s,\xi)+ig( t)|\xi|^{2}E_{\rm{pd}}^{(2\ell)}(t,s,\xi).\] Then, by straight-forward calculations (with \(\delta_{k\ell}=1\) if \(k=\ell\) and \(\delta_{k\ell}=0\) otherwise), we get \[E_{\rm{pd}}^{(1\ell)}(t,s,\xi) = \frac{\gamma(t,\xi)}{\gamma(s,\xi)}\delta_{1\ell}+i\gamma(t,\xi) \int_{s}^{t}E_{\rm{pd}}^{(21)}(\tau,s,\xi)d\tau,\] \[E_{\rm{pd}}^{(2\ell)}(t,s,\xi) = \frac{\delta^{2}(s,\xi)}{\delta^{2}(t,\xi)}\delta_{2\ell}+\frac{ i|\xi|^{2}}{\delta^{2}(t,\xi)}\int_{s}^{t}\frac{1}{\gamma(\tau,\xi)}\delta^{2}( \tau,\xi)E_{\rm{pd}}^{(12)}(\tau,s,\xi)d\tau.\] To complete the proof of Proposition 2.8 the following lemma is useful. **Lemma 2.7** (Gronwall's inequality).: _Let \(f\) and \(h\) be continuous and nonnegative functions defined on \(J=[a,b]\) and let \(d\) be a continuous, positive and nondecreasing function defined on \(J\). Then, the inequality_ \[f(t)\leq d(t)+\int_{a}^{t}h(r)f(r)dr,\quad t\in J,\] _implies that_ \[f(t)\leq d(t)\exp\left(\int_{a}^{t}h(r)dr\right),\quad t\in J.\] **Proposition 2.8**.: _We have the following estimates in the pseudo-differential zone:_ \[(|E_{pd}(t,s,\xi)|)\lesssim\frac{g(t)}{g(s)}\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right)\] _with \((s,\xi),(t,\xi)\in Z_{pd}(N)\) and \(0\leq s\leq t\leq t_{\xi}\)._ Proof.: To prove this proposition we can follow the proof to Lemma 3.10 of [9]. Now let us come back to \[U(t,\xi)=E(t,0,\xi)U(0,\xi)\quad\text{for all}\quad 0\leq t\leq t_{\xi}. \tag{12}\] Because of (12) and Proposition 2.8, the following statement can be concluded. **Corollary 2.9**.: _In the pseudo-differential zone \(Z_{pd}(N)\) the following estimates hold for all \(0\leq t\leq t_{\xi}\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim|\xi|^{|\beta|}|\hat{u}_{0 }(\xi)|+|\xi|^{|\beta|-2}|\hat{u}_{1}(\xi)|\quad\text{for}\quad|\beta|\geq 2,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim g(t)|\xi|^{|\beta|+2}| \hat{u}_{0}(\xi)|+g(t)|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\quad\text{for}\quad| \beta|\geq 0.\] ### Conclusion From the statements of Corollaries 2.6 and 2.9 we derive the following estimates for \(t>0\): \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim|\xi|^{|\beta|}|\hat{u}_{0 }(\xi)|+|\xi|^{|\beta|-2}|\hat{u}_{1}(\xi)|\quad\text{for}\quad|\beta|\geq 2,\] \[|\xi|^{|\beta|}|u_{t}(t,\xi)|\lesssim g(t)|\xi|^{|\beta|+2}|\hat{ u}_{0}(\xi)|+g(t)|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\quad\text{for}\quad|\beta| \geq 0.\] This completes the proof of Theorem 2.1. ## 3 Models with integrable and decaying time-dependent coefficient \(g=g(t)\) We write (7) in the form \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v-\bigg{(}1-\frac{g^{\prime}(t)}{2} \bigg{)}|\xi|^{2}v=0.\] The influence of the term \(-|\xi|^{2}v\) is dominant to the influence of the term \(\frac{g^{\prime}(t)}{2}|\xi|^{2}v\). Examples for this case are given in Examples 1.6, 1.7 and 1.9 with \(d<-1\). 
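For a quick numerical illustration of the single-mode analysis used throughout, the sketch below (not part of the paper; it assumes scipy and a particular choice of frequency and coefficients) integrates the Fourier-mode equation (4) for one fixed frequency, once for an increasing coefficient from Section 2 and once for an integrable decreasing coefficient as treated in this section, and records the mode amplitudes \(|\hat{u}|\) and \(|\hat{u}_{t}|\) that enter the energy estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

def single_mode(g, xi, T, u0=1.0, u1=0.0):
    """Integrate u'' + |xi|^2 u + g(t) |xi|^2 u' = 0 (the Fourier-mode
    equation (4)) for one frequency xi and return t, |u|, |u_t|."""
    rhs = lambda t, z: [z[1], -xi**2 * z[0] - g(t) * xi**2 * z[1]]
    sol = solve_ivp(rhs, (0.0, T), [u0, u1], method="Radau",
                    dense_output=True, rtol=1e-8, atol=1e-10)
    t = np.linspace(0.0, T, 400)
    u, ut = sol.sol(t)
    return t, np.abs(u), np.abs(ut)

# Increasing coefficient (Theorem 2.1): |u| stays bounded, while |u_t|
# is controlled only after dividing by g(t).
t, au, aut = single_mode(lambda t: 3.0 * np.exp(t), xi=2.0, T=8.0)
# Integrable decreasing coefficient (Theorem 3.1 below): both stay bounded.
t, au, aut = single_mode(lambda t: (1.0 + t) ** (-2.0), xi=2.0, T=40.0)
```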
Here let us recall that damped wave models with integrable and decaying in time speed of propagation have been studied in [2]. For this reason, in the language of the paper [2], Example 1.7 is non-effectively damped and Example 1.9 with \(d<-1\) is effectively damped. In this case the extended phase space \([0,\infty)\times\mathbb{R}^{n}\) is divided into zones as follows: * hyperbolic zone: \[Z_{\text{hyp}}(\varepsilon)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n }:g(t)|\xi|\leq\varepsilon\Big{\}},\] * reduced zone: \[Z_{\text{red}}(\varepsilon,N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n }:\varepsilon\leq g(t)|\xi|\leq N\Big{\}},\] * elliptic zone: \[Z_{\text{ell}}(N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:g(t)|\xi| \geq N\Big{\}},\] where \(\varepsilon>0\) is sufficiently small and \(N>0\) is sufficiently large. We denote the separating line between elliptic and reduced zone as \(t_{\xi_{1}}\) and that between hyperbolic zone and reduced zone as \(t_{\xi_{2}}\). The blue dashed line denotes the separating line between the hyperbolic and the elliptic region. ### Sub-decaying integrable time-dependent coefficient \(g=g(t)\) We assume that the coefficient \(g=g(t)\) satisfies the following conditions: **(B1)**: \(g(t)>0\) and \(g^{\prime}(t)<0\) for all \(t\in[0,\infty)\), **(B2)**: \(g\in L^{1}(0,\infty)\), **(B3)**: \(|g^{\prime}(t)|\leq C_{1}g(t)\) and \(|g^{\prime\prime}(t)|\leq C_{2}g(t)\) for all \(t\in[0,\infty)\) with positive constants \(C_{1}\) and \(C_{2}\). Examples for this case are \(g(t)=(1+t)^{-d}\) with \(d>1\), and \(g(t)=e^{-Ct}\) with \(C>0\). **Theorem 3.1**.: _Let us consider the Cauchy problem_ \[\begin{cases}u_{tt}-\Delta u+g(t)(-\Delta)u_{t}=0,&(t,x)\in(0,\infty)\times \mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n}.\end{cases}\] _We assume that the coefficient \(g=g(t)\) satisfies the conditions **(B1)** to **(B3)** and \((u_{0},u_{1})\in\dot{H}^{|\beta|}\times\dot{H}^{|\beta|-2}\) with \(|\beta|\geq 2\). Then, we have the following estimates for Sobolev solutions:_ \[\left\|D|^{|\beta|}u(t,\cdot)\right\|_{L^{2}} \lesssim\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot{H}^{|\beta| -2}},\] \[\left\|D|^{|\beta|-2}u_{t}(t,\cdot)\right\|_{L^{2}} \lesssim\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot{H}^{|\beta| -2}}.\] **Remark 3.2**.: _Theorem 3.1 implies that we do not have any parabolic effect._ Proof of Theorem 3.1.: #### 3.1.1 Considerations in the hyperbolic zone \(Z_{hyp}(\varepsilon)\) Let us turn to the equation (4) in the following form: \[D_{t}^{2}\hat{u}-|\xi|^{2}\hat{u}-ig(t)|\xi|^{2}D_{t}\hat{u}=0. \tag{13}\] We define the micro-energy \(U=\left(|\xi|\hat{u},D_{t}\hat{u}\right)^{\mathrm{T}}\). Then, the equation (13) leads to the system of first order \[D_{t}U=\underbrace{\left(\begin{array}{cc}0&|\xi|\\ |\xi|&ig(t)|\xi|^{2}\\ \end{array}\right)}_{A(t,\xi)}U.\] Figure 2: Sketch of the zones for the case \(g\in L^{1}(0,\infty)\) **Proposition 3.3**.: _The fundamental solution \(E_{U}=E_{U}(t,s,\xi)\) corresponding to this system of first order satisfies the following estimate in \(Z_{\text{hyp}}(\varepsilon)\):_ \[(|E_{U}(t,s,\xi)|)\lesssim\exp\bigg{(}-\frac{3}{8}|\xi|^{2}\int_{s}^{t}g(\tau)d \tau\bigg{)}\bigg{(}\begin{array}{cc}1&1\\ 1&1\end{array}\bigg{)}\] _with \((t,\xi),(s,\xi)\in Z_{\text{hyp}}(\varepsilon)\) and \(s\leq t\)._ Proof.: **Step 1.**_Diagonalization procedure_ Let us carry out the first step of diagonalization. 
For this reason we set \[M=\left(\begin{array}{cc}1&-1\\ 1&1\end{array}\right)\qquad\text{and}\qquad M^{-1}=\frac{1}{2}\left( \begin{array}{cc}1&1\\ -1&1\end{array}\right).\] We define \(U^{(0)}:=M^{-1}U\). Then, we arrive at the system \[D_{t}U^{(0)}=(\mathcal{D}(\xi)+\mathcal{R}(t,\xi))U^{(0)},\] where \[\mathcal{D}(\xi)=\left(\begin{array}{cc}\tau_{1}&0\\ 0&\tau_{2}\end{array}\right)=\left(\begin{array}{cc}-|\xi|&0\\ 0&|\xi|\end{array}\right)\qquad\text{and}\qquad\mathcal{R}(t,\xi)=\frac{1}{2} \left(\begin{array}{cc}ig(t)|\xi|^{2}&-ig(t)|\xi|^{2}\\ -ig(t)|\xi|^{2}&ig(t)|\xi|^{2}\end{array}\right).\] Let \(F_{0}=F_{0}(t,\xi)\) be the diagonal part of \(\mathcal{R}=\mathcal{R}(t,\xi)\). To carry out the second step of diagonalization procedure, we introduce \[N^{(1)}(t,\xi)=\left(\begin{array}{cc}0&\dfrac{\mathcal{R}_{12}}{\tau_{1}- \tau_{2}}\\ \dfrac{\mathcal{R}_{21}}{\tau_{2}-\tau_{1}}&0\end{array}\right)=\frac{1}{4} \left(\begin{array}{cc}0&ig(t)|\xi|\\ -ig(t)|\xi|&0\end{array}\right),\] and \(N_{1}(t,\xi)=I+N^{(1)}(t,\xi)\). For all \((t,\xi)\in Z_{\text{hyp}}(\varepsilon)\) the matrix \(N_{1}=N_{1}(t,\xi)\) is invertible with uniformly bounded inverse \(N_{1}^{-1}=N_{1}^{-1}(t,\xi)\). Indeed, in \(Z_{\text{hyp}}(\varepsilon)\) it holds \[|N^{(1)}(t,\xi)|\leq\frac{g(t)|\xi|}{4}\leq\frac{\varepsilon}{4}.\] We set \[B^{(1)}(t,\xi)=D_{t}N^{(1)}(t,\xi)-(\mathcal{R}(t,\xi)-F_{0}(t,\xi))N^{(1)}(t,\xi)=\frac{1}{8}\left(\begin{array}{cc}g^{2}(t)|\xi|^{3}&2g^{\prime}(t)|\xi |\\ -2g^{\prime}(t)|\xi|&-g^{2}(t)|\xi|^{3}\end{array}\right),\] and, then \[\mathcal{R}_{1}(t,\xi)=-N_{1}^{-1}(t,\xi)B^{(1)}(t,\xi).\] Thus, we may conclude \[(D_{t}-\mathcal{D}(\xi)-\mathcal{R}(t,\xi))N_{1}(t,\xi)=N_{1}(t,\xi)(D_{t}- \mathcal{D}(\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi)).\] **Step 2.**_Construction of the fundamental solution_ We turn to \(U^{(1)}=U^{(1)}(t,\xi)\) as the solution to the system \[(D_{t}-\mathcal{D}(\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi))U^{(1)}(t,\xi)=0.\] We can write \(U^{(1)}(t,\xi)=E_{U,1}(t,s,\xi)U^{(1)}(s,\xi)\). Here \(E_{U,1}=E_{U,1}(t,s,\xi)\) is the fundamental solution to the system \[(D_{t}-\mathcal{D}(\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi))E_{U,1}(t,s,\xi)=0,\quad E_{U,1}(s,s,\xi)=I.\] The solution \(E_{0}=E_{0}(t,s,\xi)\) of the "principal diagonal part" satisfies \[D_{t}E_{0}(t,s,\xi)=(\mathcal{D}(\xi)+F_{0}(t,\xi))E_{0}(t,s,\xi),\quad E_{0}(s,s,\xi)=I,\] with \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\mathrm{hyp}}(\varepsilon)\). Consequently, we have \[E_{0}(t,s,\xi)=\exp\Big{(}i\int_{s}^{t}\big{(}\mathcal{D}(\xi)+F_{0}(\tau,\xi) \big{)}d\tau\Big{)},\] and we can estimate \[|E_{0}(t,s,\xi)|\lesssim\exp\bigg{(}-\frac{1}{2}|\xi|^{2}\int_{s}^{t}g(\tau)d \tau\bigg{)}.\] Let us set \[\mathcal{R}_{2}(t,s,\xi) =E_{0}^{-1}(t,s,\xi)\mathcal{R}_{1}(t,\xi)E_{0}(t,s,\xi),\] \[Q(t,s,\xi) =I+\sum_{k=1}^{\infty}i^{k}\int_{s}^{t}\mathcal{R}_{2}(t_{1},s, \xi)\int_{s}^{t_{1}}\mathcal{R}_{2}(t_{2},s,\xi)\cdots\int_{s}^{t_{k-1}} \mathcal{R}_{2}(t_{k},s,\xi)dt_{k}\cdots dt_{2}dt_{1}.\] Then, \(Q=Q(t,s,\xi)\) solves the Cauchy problem \[D_{t}Q(t,s,\xi)=\mathcal{R}_{2}(t,s,\xi)Q(t,s,\xi),\quad Q(s,s,\xi)=I.\] The fundamental solution \(E_{U,1}=E_{U,1}(t,s,\xi)\) is representable in the form \(E_{U,1}(t,s,\xi)=E_{0}(t,s,\xi)Q(t,s,\xi)\). 
Furthermore, we see that \[|Q(t,s,\xi)|\leq\exp\bigg{(}\int_{s}^{t}|\mathcal{R}_{1}(\tau,\xi)|d\tau\bigg{)} \lesssim\exp\bigg{(}\frac{1}{8}\int_{s}^{t}\big{(}g^{2}(\tau)|\xi|^{3}-2g^{ \prime}(\tau)|\xi|\big{)}d\tau\bigg{)}\lesssim\exp\bigg{(}\frac{1}{8}\int_{s} ^{t}g^{2}(\tau)|\xi|^{3}d\tau\bigg{)}\] by using the definition of \(Z_{\mathrm{hyp}}(\varepsilon)\). Therefore, we get \[|E_{U,1}(t,s,\xi)| \leq|E_{0}(t,s,\xi)||Q(t,s,\xi)|\] \[\leq\exp\bigg{(}-\frac{1}{2}|\xi|^{2}\int_{s}^{t}g(\tau)d\tau+ \frac{1}{8}\int_{s}^{t}g^{2}(\tau)|\xi|^{3}d\tau\bigg{)}\] \[\leq\exp\bigg{(}-\frac{1}{2}|\xi|^{2}\int_{s}^{t}g(\tau)\bigg{(}1 -\frac{1}{4}g(\tau)|\xi|\bigg{)}d\tau\bigg{)}\] \[\leq\exp\bigg{(}-\frac{4-\varepsilon}{8}|\xi|^{2}\int_{s}^{t}g( \tau)d\tau\bigg{)},\] where we used the definition of \(Z_{\mathrm{hyp}}(\varepsilon)\). The backward transformation leads to the same estimate for \(E_{U}=E_{U}(t,s,\xi)\) for all \((t,\xi),(s,\xi)\in Z_{\mathrm{hyp}}(\varepsilon)\) and \(t\geq s\). This completes the proof. **Corollary 3.4**.: _We have the following estimates for \(s\leq t\), \((s,\xi),(t,\xi)\in Z_{hyp}(\varepsilon)\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{3}{8}| \xi|^{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\big{(}|\xi|^{|\beta|}|\hat{u}(s,\xi)| +|\xi|^{|\beta|-1}|\hat{u}_{t}(s,\xi)|\big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{3}{8 }|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\big{(}|\xi|^{|\beta|+1}|\hat{u}(s, \xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\big{)}\quad\text{for}\quad|\beta|\geq 0.\] #### 3.1.2 Considerations in the reduced zone \(Z_{\text{red}}(\varepsilon,N)\) We define the micro-energy \(U=(|\xi|\hat{u},D_{t}\hat{u})^{\mathrm{T}}\). Then, the Cauchy problem (4) leads to the system of first order \[D_{t}U=\underbrace{\left(\begin{array}{cc}0&|\xi|\\ |\xi|&ig(t)|\xi|^{2}\\ \end{array}\right)}_{A(t,\xi)}U. \tag{14}\] We can define for all \((t,\xi)\in Z_{\rm red}\) the following energy of the solutions to (14): \[\mathcal{E}(t,\xi):=\frac{1}{2}(|\xi|^{2}|\hat{u}(t,\xi)|^{2}+|\hat{u}_{t}(t,\xi )|^{2}).\] If we differentiate the energy \(\mathcal{E}=\mathcal{E}(t,\xi)\) with respect to \(t\) and use our equation (4), it follows \[\frac{d}{dt}\mathcal{E}(t,\xi)=-g(t)|\xi|^{2}|\hat{u}_{t}(t,\xi)|^{2}\leq 0.\] Consequently, \(\mathcal{E}=\mathcal{E}(t,\xi)\) is monotonically decreasing in \(t\). Therefore, we have \(\mathcal{E}(t,\xi)\leq\mathcal{E}(s,\xi)\) for \(s\leq t\) and \((s,\xi),(t,\xi)\in Z_{\rm red}(\varepsilon,N)\). This implies \[|\xi||\hat{u}(t,\xi)| \leq\sqrt{\mathcal{E}(s,\xi)}\leq|\xi||\hat{u}(s,\xi)|+|\hat{u}_{ t}(s,\xi)|,\] \[|\hat{u}_{t}(t,\xi)| \leq\sqrt{\mathcal{E}(s,\xi)}\leq|\xi||\hat{u}(s,\xi)|+|\hat{u}_{ t}(s,\xi)|.\] Taking into consideration, that the solution \(U=U(t,\xi)\) is represented as \(U(t,\xi)=E_{\rm red}(t,s,\xi)U(s,\xi)\), we arrive at the following result. 
**Corollary 3.5**.: _We have the following estimates in \(Z_{red}(\varepsilon,N)\) with \((s,\xi),(t,\xi)\in Z_{red}(\varepsilon,N)\) and \(s\leq t\) :_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \lesssim|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{|\beta|-1}|\hat{u}_ {t}(s,\xi)|\;\;\text{for}\;\;|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim|\xi|^{|\beta|+1}|\hat{u}(s,\xi)|+|\xi|^{|\beta|}|\hat{u}_ {t}(s,\xi)|\;\;\text{for}\;\;|\beta|\geq 0.\] #### 3.1.3 Considerations in the elliptic zone \(Z_{\rm elt}(N)\) We write the equation (5) in the following form: \[D_{t}^{2}v+\underbrace{\frac{g(t)^{2}}{4}|\xi|^{4}-|\xi|^{2}}_{=:d^{2}(t, \xi)}\Big{)}v+\underbrace{\frac{g^{\prime}(t)}{2}|\xi|^{2}}_{=:m(t,\xi)}v=0. \tag{15}\] **Remark 3.6**.: _We have the following inequalities:_ \[d^{2}(t,\xi)\leq\frac{1}{4}g^{2}(t)|\xi|^{4}\qquad\text{and}\qquad d^{2}(t, \xi)\geq\Big{(}\frac{1}{4}-\frac{1}{N^{2}}\Big{)}g^{2}(t)|\xi|^{4} \tag{16}\] _with \(N\) sufficiently large. Therefore, we get \(d(t,\xi)\approx g(t)|\xi|^{2}\). Moreover, it holds_ \[|d_{t}(t,\xi)|=\left|\frac{1}{2}\frac{g^{\prime}(t)g(t)|\xi|^{4}}{\sqrt{\frac {g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}}\right|\lesssim-\frac{g^{\prime}(t)g(t)|\xi| ^{4}}{g(t)|\xi|^{2}}=-Cg^{\prime}(t)|\xi|^{2}.\] _On the other hand, we have_ \[d_{t}^{2}(t,\xi)=\frac{1}{2}\frac{\Big{(}g^{\prime\prime}(t)g(t)|\xi|^{4}+(g^ {\prime}(t))^{2}|\xi|^{4}\Big{)}\sqrt{\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}} }{\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}-\frac{1}{8}\frac{\big{(}g^{\prime}(t )g(t)|\xi|^{4}\big{)}^{2}}{\Big{(}\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}\Big{)} \sqrt{\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}}.\] _Employing condition **(B3)** and estimates (16), we arrive at_ \[|d_{t}^{2}(t,\xi)|\leq\frac{1}{2}\frac{|g^{\prime\prime}(t)|g(t)|\xi|^{4}+|g^ {\prime}(t)|^{2}|\xi|^{4}}{d(t,\xi)}+\frac{1}{8}\frac{\big{(}|g^{\prime}(t)|g( t)|\xi|^{4}\big{)}^{2}}{d^{3}(t,\xi)}\leq\left(\frac{C_{2}+C_{1}^{2}}{\Big{(}1- \frac{4}{N^{2}}\Big{)}^{\frac{1}{2}}}+\frac{C_{1}^{2}}{\Big{(}1-\frac{4}{N^{2 }}\Big{)}^{\frac{3}{2}}}\right)g(t)|\xi|^{2}\leq C_{N}g(t)|\xi|^{2}.\] It is reasonable to introduce the micro-energy \[V=V(t,\xi):=(d(t,\xi)v,D_{t}v)^{\mathrm{T}}\qquad\text{with}\qquad d(t,\xi):= \sqrt{\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}.\] Thus, we have to apply tools from elliptic WKB-analysis. Transformation to a system of first order from (15) leads to \[D_{t}V=\left(\begin{array}{cc}0&d(t,\xi)\\ -d(t,\xi)&0\end{array}\right)V+\left(\begin{array}{cc}\frac{D_{t}d(t,\xi)}{ d(t,\xi)}&0\\ -\frac{m(t,\xi)}{d(t,\xi)}&0\end{array}\right)V.\] Using \(V=MV^{(0)}\), \(M=\begin{pmatrix}i&1\\ -i&1\end{pmatrix}\), then after the first step of diagonalization we obtain \[D_{t}V^{(0)}=(\mathcal{D}(t,\xi)+\mathcal{R}(t,\xi))V^{(0)},\] where \[\mathcal{D}(t,\xi)=\left(\begin{array}{cc}-id(t,\xi)&0\\ 0&id(t,\xi)\end{array}\right)\qquad\text{and}\qquad\mathcal{R}(t,\xi)=\frac{1} {2}\left(\begin{array}{cc}\frac{D_{t}d(t,\xi)}{d(t,\xi)}-i\frac{m(t,\xi)}{d( t,\xi)}&-\frac{D_{t}d(t,\xi)}{d(t,\xi)}+i\frac{m(t,\xi)}{d(t,\xi)}\\ -\frac{D_{t}d(t,\xi)}{d(t,\xi)}-i\frac{m(t,\xi)}{d(t,\xi)}&\frac{D_{t}d(t,\xi) }{d(t,\xi)}+i\frac{m(t,\xi)}{d(t,\xi)}\end{array}\right).\] From Remark 3.6 we find the estimates \[\left|\frac{D_{t}d(t,\xi)}{d(t,\xi)}\right|\lesssim-\frac{g^{\prime}(t)}{g(t)} \qquad\text{and}\qquad\left|\frac{m(t,\xi)}{d(t,\xi)}\right|\lesssim-\frac{g^ {\prime}(t)}{g(t)}.\] Let us introduce \(F_{0}(t,\xi)=\operatorname{diag}\mathcal{R}(t,\xi)\). Now we carry out the next step of diagonalization procedure. 
The difference of the diagonal entries of the matrix \(\mathcal{D}(t,\xi)+F_{0}(t,\xi)\) is \[i\delta(t,\xi):=2d(t,\xi)+\frac{m(t,\xi)}{d(t,\xi)}\sim d(t,\xi)\] for \(t\leq t_{\xi_{1}}\) if we choose the zone constant \(N\) sufficiently large and apply condition **(B3)**. Now we choose a matrix \(N^{(1)}=N^{(1)}(t,\xi)\) such that \[N^{(1)}(t,\xi)=\left(\begin{array}{cc}0&-\frac{\mathcal{R}_{12}}{\delta(t, \xi)}\\ \frac{\mathcal{R}_{21}}{\delta(t,\xi)}&0\end{array}\right)\sim\left(\begin{array} []{cc}0&i\frac{D_{t}d(t,\xi)}{2d^{2}(t)}-\frac{m(t,\xi)}{2d^{2}(t,\xi)}\\ i\frac{D_{t}d(t,\xi)}{2d^{2}(t,\xi)}+\frac{m(t,\xi)}{2d^{2}(t,\xi)}&0\end{array} \right).\] We put \(N_{1}=N_{1}(t,\xi):=I+N^{(1)}(t,\xi)\). For a sufficiently large zone constant \(N\) and all \(t\leq t_{\xi_{1}}\) the matrix \(N_{1}=N_{1}(t,\xi)\) is invertible with uniformly bounded inverse \(N_{1}^{-1}=N_{1}^{-1}(t,\xi)\). Indeed, in the elliptic zone \(\operatorname{Z_{\mathrm{ell}}}(N)\), for large \(N\) it holds \[|N_{1}(t,\xi)-I|\leq\frac{1}{|\xi|^{2}g(t)}\frac{-g^{\prime}(t)}{g(t)}\leq- \frac{1}{N^{2}}g^{\prime}(t)\leq\frac{1}{2}.\] Let \[B^{(1)}(t,\xi) =D_{t}N^{(1)}(t,\xi)-(\mathcal{R}(t,\xi)-F_{0}(t,\xi))N^{(1)}(t, \xi),\] \[\mathcal{R}_{1}(t,\xi) =-N_{1}^{-1}(t,\xi)B^{(1)}(t,\xi).\] Consequently, we have the following operator identity: \[\big{(}D_{t}-\mathcal{D}(t,\xi)-\mathcal{R}(t,\xi)\big{)}N_{1}(t,\xi)=N_{1}(t, \xi)\big{(}D_{t}-\mathcal{D}(t,\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi)\big{)}.\] **Step 2.**_Construction of the fundamental solution_ **Proposition 3.7**.: _The fundamental solution \(E^{V}_{\text{ell}}=E^{V}_{\text{ell}}(t,s,\xi)\) to the transformed operator_ \[D_{t}-\mathcal{D}(t,\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi)\] _can be estimated by_ \[(|E^{V}_{\text{ell}}(t,s,\xi)|)\lesssim\frac{g(t)}{g(s)}\exp\left(\frac{|\xi|^{2 }}{2}\int_{s}^{t}g(\tau)d\tau\right)\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right),\] _with \((t,\xi),(s,\xi)\in Z_{\text{ell}}(N)\) and \(s\leq t\)._ Proof.: We transform the system for \(E^{V}_{\text{ell}}=E^{V}_{\text{ell}}(t,s,\xi)\) to an integral equation for a new matrix-valued function \(\mathcal{Q}_{\text{ell}}=Q_{\text{ell}}(t,s,\xi)\). 
Following the same idea as in the proof of Proposition 2.5, we obtain that \(E^{V}_{\text{ell}}=E^{V}_{\text{ell}}(t,s,\xi)\) satisfies the following integral equation: \[E^{V}_{\text{ell}}(t,s,\xi) =\exp\bigg{\{}i\int_{s}^{t}\big{(}\mathcal{D}(\tau,\xi)+F_{0}( \tau,\xi)\big{)}d\tau\bigg{\}}E^{V}_{\text{ell}}(s,s,\xi)\] \[\quad+i\int_{s}^{t}\exp\bigg{\{}i\int_{0}^{t}\big{(}\mathcal{D}( \tau,\xi)+F_{0}(\tau,\xi)\big{)}d\tau\bigg{\}}\mathcal{R}_{1}(\theta,\xi)E^{V} _{\text{ell}}(\theta,s,\xi)\,d\theta.\] We define \[\mathcal{Q}_{\text{ell}}(t,s,\xi):=\exp\bigg{\{}-\int_{s}^{t}\beta(\tau,\xi)d \tau\bigg{\}}E^{V}_{\text{ell}}(t,s,\xi),\] where \(\beta=\beta(t,\xi)\) is chosen from the main entries of the diagonal matrix \(i\mathcal{D}(t,\xi)+iF_{0}(t,\xi)\) as follows: \[\beta(t,\xi)=d(t,\xi)+\frac{d_{t}(t,\xi)}{2d(t,\xi)}+\frac{m(t,\xi)}{2d(t,\xi )}.\] It satisfies the new integral equation \[\mathcal{Q}_{\text{ell}}(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\big{(}i\mathcal{D}(\tau,\xi)+iF_{0}( \tau,\xi)-\beta(\tau,\xi)I\big{)}d\tau\bigg{\}}\] \[\quad+\int_{s}^{t}\exp\bigg{\{}\int_{\theta}^{t}\big{(}i\mathcal{ D}(\tau,\xi)+iF_{0}(\tau,\xi)-\beta(\tau,\xi)I\big{)}d\tau\bigg{\}}\mathcal{R}_{1} (\theta,\xi)\mathcal{Q}_{\text{ell}}(\theta,s,\xi)\,d\theta.\] Using our conditions **(B1)** to **(B3)** and Remark 3.6, one may see that \(\mathcal{R}_{1}=\mathcal{R}_{1}(\theta,\xi)\) is uniformly integrable over the elliptic zone. It follows \[H(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\big{(}i\mathcal{D}(\tau,\xi)+iF_{0}( \tau,\xi)-\beta(\tau,\xi)I\big{)}d\tau\bigg{\}}\] \[=\text{diag}\left(1,\exp\bigg{\{}\int_{s}^{t}\bigg{(}-2d(\tau,\xi )-\frac{m(\tau,\xi)}{d(\tau,\xi)}\bigg{)}d\tau\bigg{\}}\right)\to\left( \begin{array}{cc}1&0\\ 0&0\end{array}\right)\] as \(t\to\infty\). Hence, the matrix \(H=H(t,s,\xi)\) is uniformly bounded for \((s,\xi),(t,\xi)\in Z_{\text{ell}}(N)\). So, the representation of \(\mathcal{Q}_{\text{ell}}=\mathcal{Q}_{\text{ell}}(t,s,\xi)\) by a Neumann series gives \[\mathcal{Q}_{\text{ell}}(t,s,\xi)=H(t,s,\xi)+\sum_{k=1}^{\infty}t^{k}\int_{s}^ {t}H(t,t_{1},\xi)\mathcal{R}_{1}(t_{1},\xi)\int_{s}^{t_{1}}H(t_{1},t_{2},\xi) \mathcal{R}_{1}(t_{2},\xi)\cdots\int_{s}^{t_{k-1}}H(t_{k-1},t_{k},\xi) \mathcal{R}_{1}(t_{k},\xi)dt_{k}\cdots dt_{2}dt_{1}.\] Then, convergence of this series is obtained from the symbol estimates, since \(\mathcal{R}_{1}=\mathcal{R}_{1}(t,\xi)\) is uniformly integrable over \(Z_{\text{ell}}(N)\). Hence, from the last considerations we may conclude \[E^{V}_{\text{ell}}(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\beta(\tau,\xi)d\tau\bigg{\}}Q_{\text{ell }}(t,s,\xi)\] \[=\exp\bigg{\{}\int_{s}^{t}\bigg{(}d(\tau,\xi)+\frac{\partial_{ \tau}d(\tau,\xi)}{2d(\tau,\xi)}+\frac{m(\tau,\xi)}{2d(\tau,\xi)}\bigg{)}d\tau \bigg{\}}Q_{\text{ell}}(t,s,\xi)\] \[\leq\frac{d(t,\xi)}{d(s,\xi)}\exp\bigg{(}\int_{s}^{t}d(\tau,\xi)\,d \tau\bigg{)}Q_{\text{ell}}(t,s,\xi),\] where we used \(m(t,\xi)\leq\partial_{t}d(t,\xi)\) and \(Q_{\rm ell}=Q_{\rm ell}(t,s,\xi)\) is a uniformly bounded matrix. Then, it follows \[(|E_{\rm ell}^{V}(t,s,\xi)|)\lesssim\frac{g(t)}{g(s)}\exp\left(|\xi|^{2}\int_{s} ^{t}\frac{g(\tau)}{2}d\tau\right)\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right).\] This completes the proof. Using the backward transformation we arrive at the following result. 
**Corollary 3.8**.: _In \(Z_{ell}(N)\) we have the following estimates for \((s,\xi),(t,\xi)\in Z_{ell}(N)\) and \(0\leq s\leq t\):_ \[\frac{g(t)}{2}|\xi|^{\beta}|\hat{u}(t,\xi)| \lesssim\frac{g(t)}{g(s)}\Big{(}g(s)|\xi|^{\beta}|\hat{u}(s,\xi) |+|\xi|^{|\beta|-2}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 2,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim\frac{g(t)}{g(s)}\Big{(}g(s)|\xi|^{|\beta|+2}|\hat{u}(s, \xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\] #### 3.1.4 Conclusions From the statements of Corollaries 3.4, 3.5 and 3.8 we derive our desired statements. _Case 1:_\(t\leq t_{\xi_{1}}\). Due to Corollary 3.8 we have \[\frac{g(t)}{2}|\xi|^{\beta}|\hat{u}(t,\xi)| \lesssim g(t)\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|-2}|\hat{u}_{1}(\xi)|\Big{)},\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim g(t)\Big{(}|\xi|^{|\beta|+2}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|}|\hat{u}_{1}(\xi)|\Big{)}.\] _Case 2:_\(t_{\xi_{1}}\leq t\leq t_{\xi_{2}}\). In this case we apply Corollaries 3.5 and 3.8 to get \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \leq|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|-1}| \hat{u}_{t}(t_{\xi_{1}},\xi)|\lesssim|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^ {|\beta|-2}|\hat{u}_{1}(\xi)|,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \leq|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|}| \hat{u}_{t}(t_{\xi_{1}},\xi)|\lesssim|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi| ^{|\beta|-1}|\hat{u}_{1}(\xi)|.\] _Case 3:_\(t\geq t_{\xi_{2}}\). In this case we use Corollaries 3.4, 3.5 and 3.8. Then, it holds \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau )d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{2}},\xi)|+|\xi|^{|\beta|-1 }|\hat{u}_{t}(t_{\xi_{2}},\xi)|\Big{)}\] \[\leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau )d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|-1 }|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\] \[\lesssim\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g( \tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-2}| \hat{u}_{1}(\xi)|\Big{)},\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau )d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{2}},\xi)|+|\xi|^{|\beta|} |\hat{u}_{t}(t_{\xi_{2}},\xi)|\Big{)}\] \[\lesssim\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau )d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|} |\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\] \[\lesssim\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g( \tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-1}| \hat{u}_{1}(\xi)|\Big{)}.\] Consequently, the proof of Theorem 3.1 is completed. ### Super-decaying integrable time-dependent coefficient \(g=g(t)\) We assume that the coefficient \(g=g(t)\) satisfies the following conditions: **(C1)**: \(g(t)>0\) and \(g^{\prime}(t)<0\) for all \(t\in[0,\infty)\), **(C2)**: \(g\in L^{1}(0,\infty)\). Examples for this case are \(g(t)=e^{-t^{\prime}}\) and \(g(t)=e^{-t^{\prime}}\). 
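As a quick orientation (this computation is not needed for the statements below), take the prototype \(g(t)=e^{-t}\): then
\[g(t)=e^{-t}>0,\qquad g^{\prime}(t)=-e^{-t}<0,\qquad\int_{0}^{\infty}g(\tau)\,d\tau=1<\infty,\]
so the conditions **(C1)** and **(C2)** are satisfied. Since \(\int_{0}^{t}g(\tau)d\tau\) stays bounded, the factors of the form \(\exp\big{(}-\frac{3}{8}|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\big{)}\) appearing below remain bounded from below for every fixed frequency, so no decay in time can be expected from the hyperbolic zone.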
**Theorem 3.9**.: _Let us consider the Cauchy problem_ \[\begin{cases}u_{tt}-\Delta u+g(t)(-\Delta)u_{t}=0,&(t,x)\in(0,\infty)\times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n}.\end{cases}\] _We assume that the coefficient \(g=g(t)\) satisfies the conditions **(C1)** and **(C2)**, and \((u_{0},u_{1})\in\dot{H}^{|\beta|+\kappa+1}\times\dot{H}^{|\beta|+\kappa-1}\) with \(|\beta|\geq 1\) and arbitrarily small \(\kappa>0\). Then, we have the following estimates for Sobolev solutions:_
\[\left\|\left|D\right|^{|\beta|}u(t,\cdot)\right\|_{L^{2}} \lesssim\left\|u_{0}\right\|_{\dot{H}^{|\beta|+\kappa+1}}+\left\|u_{1}\right\|_{\dot{H}^{|\beta|+\kappa-1}},\]
\[\left\|\left|D\right|^{|\beta|-1}u_{t}(t,\cdot)\right\|_{L^{2}} \lesssim\left\|u_{0}\right\|_{\dot{H}^{|\beta|+\kappa+1}}+\left\|u_{1}\right\|_{\dot{H}^{|\beta|+\kappa-1}}.\]
**Remark 3.10**.: _Theorem 3.9 implies that we do not have any parabolic effect._ Proof of Theorem 3.9.:
#### 3.2.1 Considerations in the hyperbolic zone \(Z_{\text{hyp}}(\varepsilon)\)
We have the same statement from Proposition 3.3 in Subsection 3.1. **Corollary 3.11**.: _We have the following estimates for \(s\leq t\), \((s,\xi),(t,\xi)\in Z_{\text{hyp}}(\varepsilon)\):_
\[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{3}{8}|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\bigg{(}|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(s,\xi)|\bigg{)}\quad\text{for}\quad|\beta|\geq 1,\]
\[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{3}{8}|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\bigg{(}|\xi|^{|\beta|+1}|\hat{u}(s,\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\bigg{)}\quad\text{for}\quad|\beta|\geq 0.\]
#### 3.2.2 Considerations in the reduced zone \(Z_{\text{red}}(\varepsilon,N)\)
We have the same estimates from Corollary 3.5 in Subsection 3.1. **Corollary 3.12**.: _The following estimates hold in the reduced zone \(Z_{\text{red}}(\varepsilon,N)\) with \((s,\xi),(t,\xi)\in Z_{\text{red}}(\varepsilon,N)\) and \(s\leq t\):_
\[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(s,\xi)|\;\;\text{for}\;\;|\beta|\geq 1,\]
\[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim|\xi|^{|\beta|+1}|\hat{u}(s,\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\;\;\text{for}\;\;|\beta|\geq 0.\]
#### 3.2.3 Considerations in the elliptic zone \(Z_{\text{ell}}(N)\)
Let us write the equation (5) in the following form: \[D_{t}^{2}v+\bigg{(}\underbrace{\frac{g(t)^{2}}{4}|\xi|^{4}-|\xi|^{2}}_{=:d^{2}(t,\xi)}\bigg{)}v+\underbrace{\frac{g^{\prime}(t)}{2}|\xi|^{2}}_{=:m(t,\xi)}v=0.\]
**Remark 3.13**.: _We have the following inequalities with sufficiently large \(N\):_ \[d^{2}(t,\xi)\leq\frac{1}{4}g^{2}(t)|\xi|^{4}\qquad\text{and}\qquad d^{2}(t,\xi)\geq\Big{(}\frac{1}{4}-\frac{1}{N^{2}}\Big{)}g^{2}(t)|\xi|^{4}.\] _Therefore, we get \(d(t,\xi)\approx g(t)|\xi|^{2}\).
Furthermore, it holds_ \[|d_{t}(t,\xi)|=\left|\frac{1}{4}\cdot\frac{g^{\prime}(t)g(t)|\xi|^{4}}{\sqrt{ \frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}}\right|\leq-\frac{1}{2\sqrt{1-\frac{4} {N^{2}}}}g^{\prime}(t)|\xi|^{2}.\] _Finally, we have_ \[-\frac{g^{\prime}(t)}{g(t)}d(t,\xi)\leq-m(t,\xi)\leq-\frac{1}{\sqrt{1-\frac{4} {N^{2}}}}\frac{g^{\prime}(t)}{g(t)}d(t,\xi).\] We introduce the micro-energy \[V=V(t,\xi):=\left(d(t,\xi)v,D_{t}v\right)^{\mathrm{T}}\qquad\mathrm{with}\qquad d (t,\xi):=\sqrt{\frac{g^{2}(t)}{4}|\xi|^{4}-|\xi|^{2}}.\] Thus, we have to apply tools from elliptic WKB-analysis as we did in Subsection 3.1 with only one step of diagonalization procedure. Transforming (15) to a system of first order for \(V=V(t,\xi)\) gives \[D_{t}V=\left(\begin{array}{cc}0&d(t,\xi)\\ -d(t,\xi)&0\end{array}\right)V+\left(\begin{array}{cc}\frac{D_{t}d(t,\xi)}{ d(t,\xi)}&0\\ -\frac{m(t,\xi)}{d(t,\xi)}&0\end{array}\right)V.\] Using \(V=MV^{(0)}\), \(M=\begin{pmatrix}i&1\\ -i&1\end{pmatrix}\), then after the first step of diagonalization we obtain \[D_{t}V^{(0)}=\left(\mathcal{D}(t,\xi)+\mathcal{R}(t,\xi)\right)V^{(0)},\] where \[\mathcal{D}(t,\xi)=\left(\begin{array}{cc}-id(t,\xi)&0\\ 0&id(t,\xi)\end{array}\right)\qquad\mathrm{and}\qquad\mathcal{R}(t,\xi)=\frac{ 1}{2}\left(\begin{array}{cc}\frac{D_{t}d(t,\xi)}{d(t,\xi)}-i\frac{m(t,\xi)}{ d(t,\xi)}&-\frac{D_{t}d(t,\xi)}{d(t,\xi)}+i\frac{m(t,\xi)}{d(t,\xi)}\\ -\frac{D_{t}d(t,\xi)}{d(t,\xi)}-i\frac{m(t,\xi)}{d(t,\xi)}&\frac{D_{t}d(t,\xi) }{d(t,\xi)}+i\frac{m(t,\xi)}{d(t,\xi)}\end{array}\right).\] From Remark 3.13 we find the estimates \[\Big{|}\frac{D_{t}d(t,\xi)}{d(t,\xi)}\Big{|}\leq-\frac{1}{\sqrt{1-\frac{4}{N^{ 2}}}}\frac{g^{\prime}(t)}{g(t)}\qquad\mathrm{and}\qquad-\frac{g^{\prime}(t)}{ g(t)}\leq-\frac{m(t,\xi)}{d(t,\xi)}\leq-\frac{1}{\sqrt{1-\frac{4}{N^{2}}}} \frac{g^{\prime}(t)}{g(t)}. \tag{17}\] Let us introduce \(F_{0}(t,\xi):=\mathrm{diag}\,\mathcal{R}(t,\xi)\) and \(\mathcal{R}_{1}(t,\xi):=\mathrm{antidiag}\,\mathcal{R}(t,\xi)\). **Step 2.**_Construction of the fundamental solution_ **Proposition 3.14**.: _The fundamental solution \(E_{\mathrm{ell}}^{V}=E_{\mathrm{ell}}^{V}(t,s,\xi)\) to the transformed operator_ \[D_{t}-\mathcal{D}(t,\xi)-F_{0}(t,\xi)-\mathcal{R}_{1}(t,\xi)\] _can be estimated by_ \[(|E_{\mathrm{ell}}^{V}(t,s,\xi)|)\lesssim\Big{(}\frac{g(s)}{g(t)}\Big{)}^{ \frac{1}{\sqrt{1-\frac{4}{N^{2}}}}-1}\exp\Big{(}\frac{|\xi|^{2}}{2}\int_{s}^{t }g(\tau)d\tau\Big{)}\Bigg{(}\begin{array}{cc}1&1\\ 1&1\end{array}\Bigg{)},\] _with \(s\leq t\) and \((t,\xi),(s,\xi)\in Z_{\mathrm{ell}}(N)\)._ Proof.: We transform the system for \(E_{\mathrm{ell}}^{V}=E_{\mathrm{ell}}^{V}(t,s,\xi)\) to an integral equation for a new matrix-valued function \(\mathcal{Q}_{\mathrm{ell}}=\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)\) as in the proof of Proposition 2.5. 
We define \[\mathcal{Q}_{\mathrm{ell}}(t,s,\xi):=\exp\Big{\{}-\int_{s}^{t}\beta(\tau,\xi) d\tau\Big{\}}E_{\mathrm{ell}}^{V}(t,s,\xi),\] where \(\beta=\beta(t,\xi)\) is chosen from the main entries of the diagonal matrix \(i\mathcal{D}(t,\xi)+iF_{0}(t,\xi)\) as follows: \[\beta(t,\xi)=d(t,\xi)+\frac{d_{t}(t,\xi)}{2d(t,\xi)}+\frac{m(t,\xi)}{2d(t,\xi)}.\] The standard construction of \(\mathcal{Q}_{\mathrm{ell}}=\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)\) in terms of a Peano-Baker series implies \[|\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)|\leq\exp\bigg{\{}\int_{s}^{t} \big{(}|\mathcal{R}_{1}(\tau,\xi)|d\tau\big{)} \leq\exp\bigg{\{}\int_{s}^{t}\Big{(}|\frac{D_{\tau}d(\tau,\xi)}{2d (\tau,\xi)}\Big{|}+\Big{|}\frac{m(\tau,\xi)}{2d(\tau,\xi)}\Big{|}\Big{)}d\tau \bigg{\}}\] \[\leq\exp\bigg{\{}\int_{s}^{t}\bigg{(}-\frac{1}{\sqrt{1-\frac{4}{N ^{2}}}}\frac{g^{\prime}(\tau)}{g(\tau)}\bigg{)}d\tau\bigg{)}\leq\Big{(}\frac{g( s)}{g(t)}\Big{)}^{\frac{1}{\sqrt{1-\frac{4}{N^{2}}}}}, \tag{18}\] where we used estimates in (17). Hence, from the last considerations we may conclude \[E_{\mathrm{ell}}^{V}(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\beta(\tau,\xi)d\tau\bigg{\}}\mathcal{Q }_{\mathrm{ell}}(t,s,\xi)\] \[=\exp\bigg{\{}\int_{s}^{t}\bigg{(}d(\tau,\xi)+\frac{\partial_{ \tau}d(\tau,\xi)}{2d(\tau,\xi)}+\frac{m(\tau,\xi)}{2d(\tau,\xi)}\bigg{)}d\tau \bigg{\}}\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)\] \[\leq\exp\bigg{\{}\int_{s}^{t}\bigg{(}d(\tau,\xi)+\frac{\partial_{ \tau}d(\tau,\xi)}{2d(\tau,\xi)}+\frac{g^{\prime}(\tau)}{2g(\tau)}\bigg{)}d\tau \bigg{\}}\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)\] \[\leq\sqrt{\frac{d(t,\xi)g(t)}{d(s,\xi)g(s)}}\exp\bigg{(}\int_{s}^{ t}d(\tau,\xi)d\tau\bigg{)}\mathcal{Q}_{\mathrm{ell}}(t,s,\xi).\] Then, using the estimate of \(\mathcal{Q}_{\mathrm{ell}}=\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)\) from (18), it follows \[(|E_{\mathrm{ell}}^{V}(t,s,\xi)|) \lesssim\frac{g(t)}{g(s)}\exp\bigg{(}|\xi|^{2}\int_{s}^{t}\frac{g( \tau)}{2}d\tau\bigg{)}\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right)|\mathcal{Q}_{\mathrm{ell}}(t,s,\xi)|\] \[\lesssim\Big{(}\frac{g(s)}{g(t)}\Big{)}^{\frac{1}{\sqrt{1-\frac{4} {N^{2}}}}-1}\exp\bigg{(}|\xi|^{2}\int_{s}^{t}\frac{g(\tau)}{2}d\tau\bigg{)} \left(\begin{array}{cc}1&1\\ 1&1\end{array}\right).\] This completes the proof. Using the backward transformation we arrive at the following result. **Corollary 3.15**.: _In \(Z_{ell}(N)\) we have the following estimates for \((s,\xi),(t,\xi)\in Z_{ell}(N)\) and \(0\leq s\leq t\):_ \[\frac{g(t)}{2}|\xi|^{|\beta|}|\hat{u}(t,\xi)| \lesssim\Big{(}\frac{g(s)}{g(t)}\Big{)}^{\kappa}\Big{(}g(s)|\xi| ^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{|\beta|-2}|\hat{u}_{t}(s,\xi)|\Big{)}\quad \text{for}\quad|\beta|\geq 2,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim\Big{(}\frac{g(s)}{g(t)}\Big{)}^{\kappa}\Big{(}g(s)|\xi| ^{|\beta|+2}|\hat{u}(s,\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\Big{)}\quad \text{for}\quad|\beta|\geq 0,\] _where \(\kappa=\frac{1}{\sqrt{1-\frac{4}{N^{2}}}}-1\) is an arbitrarily small exponent for arbitrarily large \(N=N(\kappa)\) for all \(\kappa>0\)._ #### 3.2.4 Conclusion From the statements of Corollaries 3.11, 3.12 and 3.15 we derive our desired statements. 
_Case 1: \(t\leq t_{\xi_{1}}\)._ Due to Corollary 3.15 and \(g(t_{\xi_{1}})|\xi|=N\), we have \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \lesssim\frac{1}{g(t)^{\kappa+1}}\Big{(}|\xi|^{|\beta|}|\hat{u}_{ 0}(\xi)|+|\xi|^{|\beta|-2}|\hat{u}_{1}(\xi)|\Big{)}\lesssim|\xi|^{|\beta|+ \kappa+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|+\kappa-1}|\hat{u}_{1}(\xi)|,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim\frac{1}{g(t)^{\kappa}}\Big{(}|\xi|^{|\beta|+2}|\hat{u}_{ 0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\lesssim|\xi|^{|\beta|+\kappa+ 2}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|+\kappa}|\hat{u}_{1}(\xi)|.\] _Case 2: \(t_{\xi_{1}}\leq t\leq t_{\xi_{2}}\)._ In this case we apply Corollary 3.12 and use \(g(t_{\xi_{1}})|\xi|=N\). Then, we get \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \leq|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|-1}| \hat{u}_{t}(t_{\xi_{1}},\xi)|\lesssim|\xi|^{|\beta|+\kappa+1}|\hat{u}_{0}(\xi)|+| \xi|^{|\beta|+\kappa-1}|\hat{u}_{1}(\xi)|,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \leq|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|}| \hat{u}_{t}(t_{\xi_{1}},\xi)|\lesssim|\xi|^{|\beta|+\kappa+2}|\hat{u}_{0}(\xi)|+| \xi|^{|\beta|+\kappa}|\hat{u}_{1}(\xi)|.\] _Case 3:_\(t\geq t_{\xi_{2}}\). In this case we apply Corollary 3.11. It holds \[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau)d \tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{2}},\xi)|+|\xi|^{|\beta|-1}| \hat{u}_{t}(t_{\xi_{2}},\xi)|\Big{)}\] \[\leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau)d \tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|-1}| \hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\] \[\leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau) d\tau\Big{)}\Big{(}|\xi|^{|\beta|+\kappa+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|+\kappa-1}| \hat{u}_{1}(\xi)|\Big{)},\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \leq\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g(\tau )d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{2}},\xi)|+|\xi|^{|\beta| }|\hat{u}_{t}(t_{\xi_{2}},\xi)|\Big{)}\] \[\lesssim\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g( \tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{| \beta|}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\] \[\lesssim\exp\Big{(}-\frac{3}{8}|\xi|^{2}\int_{t_{\xi_{2}}}^{t}g( \tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+\kappa+2}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|+\kappa}|\hat{u}_{1}(\xi)|\Big{)}.\] Thus, the proof of Theorem 3.9 is completed. ## 4 Models with non-integrable and decreasing time-dependent coefficient \(g=g(t)\) We assume that function \(g=g(t)\) satisfies the following conditions: **(D1)**: \(g(t)>0\), \(g^{\prime}(t)\leq 0\) and \(g^{\prime\prime}(t)\geq 0\) for all \(t\in[0,\infty)\), **(D2)**: \(g\notin L^{1}(0,\infty)\), **(D3)**: \(|d_{t}^{k}g(t)|\leq C_{k}g(t)\Big{(}\frac{1}{1+t}\Big{)}^{k}\) for all \(t\in[0,\infty)\), \(k=1,2\) and \(C_{1}\), \(C_{2}\) are positive constants. **Theorem 4.1**.: _Let us consider the Cauchy problem (3), where the coefficient \(g=g(t)\) satisfies the conditions **(D1)** to **(D3)**. 
Then, we have the following estimates for Sobolev solutions:_
\[\Big{\|}|D|^{|\beta|}u(t,\cdot)\Big{\|}_{L^{2}} \lesssim\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}}\|u_{0}\|_{H^{|\beta|}}+\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|-1}{2}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\Big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\Big{\|}_{L^{2}} \lesssim\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|+1}{2}}\|u_{0}\|_{H^{|\beta|+1}}+\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\]
**Remark 4.2**.: _The statements of Theorem 4.1 show that we have the parabolic effect. This means that the energies of higher order of Sobolev solutions decay faster and faster with increasing order._
**Example 4.3**.: _Let us choose \(g(t)=(1+t)^{-\gamma}\), \(\gamma\in(0,1]\). Then, \(g=g(t)\) satisfies the assumptions of Theorem 4.1. Consequently, the following estimates for Sobolev solutions hold: \(\gamma\in(0,1)\):_
\[\Big{\|}|D|^{|\beta|}u(t,\cdot)\Big{\|}_{L^{2}} \lesssim(1+t)^{-\frac{|\beta|(1-\gamma)}{2}}\|u_{0}\|_{H^{|\beta|}}+(1+t)^{-\frac{(|\beta|-1)(1-\gamma)}{2}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\Big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\Big{\|}_{L^{2}} \lesssim(1+t)^{-\frac{(|\beta|+1)(1-\gamma)}{2}}\|u_{0}\|_{H^{|\beta|+1}}+(1+t)^{-\frac{|\beta|(1-\gamma)}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0;\]
\(\gamma=1\)_:_
\[\Big{\|}|D|^{|\beta|}u(t,\cdot)\Big{\|}_{L^{2}} \lesssim\Big{(}\log(e+t)\Big{)}^{-\frac{|\beta|}{2}}\|u_{0}\|_{H^{|\beta|}}+\Big{(}\log(e+t)\Big{)}^{-\frac{|\beta|-1}{2}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\Big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\Big{\|}_{L^{2}} \lesssim\Big{(}\log(e+t)\Big{)}^{-\frac{|\beta|+1}{2}}\|u_{0}\|_{H^{|\beta|+1}}+\Big{(}\log(e+t)\Big{)}^{-\frac{|\beta|}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\]
**Example 4.4**.: _Let us consider \(g(t)=\left((e^{2}+t)\log(e^{2}+t)\right)^{-1}\). Then, \(g=g(t)\) satisfies the assumptions of Theorem 4.1 and the following estimates for Sobolev solutions hold:_
\[\left\||D|^{|\beta|}u(t,\cdot)\right\|_{L^{2}} \lesssim\Big{(}\log\big{(}\log(e^{2}+t)\big{)}\Big{)}^{-\frac{|\beta|}{2}}\|u_{0}\|_{H^{|\beta|}}+\Big{(}\log\big{(}\log(e^{2}+t)\big{)}\Big{)}^{-\frac{|\beta|-1}{2}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\left\||D|^{|\beta|}u_{t}(t,\cdot)\right\|_{L^{2}} \lesssim\Big{(}\log\big{(}\log(e^{2}+t)\big{)}\Big{)}^{-\frac{|\beta|+1}{2}}\|u_{0}\|_{H^{|\beta|+1}}+\Big{(}\log\big{(}\log(e^{2}+t)\big{)}\Big{)}^{-\frac{|\beta|}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\]
**Example 4.5**.: _Let us consider \(g(t)=\dfrac{\log(e^{2}+t)}{e^{2}+t}\). Then, \(g=g(t)\) satisfies the assumptions of Theorem 4.1 and the following estimates for Sobolev solutions hold:_
\[\left\||D|^{|\beta|}u(t,\cdot)\right\|_{L^{2}} \lesssim\Big{(}\big{(}\log(e^{2}+t)\big{)}^{2}\Big{)}^{-\frac{|\beta|}{2}}\|u_{0}\|_{H^{|\beta|}}+\Big{(}\big{(}\log(e^{2}+t)\big{)}^{2}\Big{)}^{-\frac{|\beta|-1}{2}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\left\||D|^{|\beta|}u_{t}(t,\cdot)\right\|_{L^{2}} \lesssim\Big{(}\big{(}\log(e^{2}+t)\big{)}^{2}\Big{)}^{-\frac{|\beta|+1}{2}}\|u_{0}\|_{H^{|\beta|+1}}+\Big{(}\big{(}\log(e^{2}+t)\big{)}^{2}\Big{)}^{-\frac{|\beta|}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\]
**Remark 4.6**.: _Let us consider the visco-elastic damped case in the paper [1] with \(g(t)=(1+t)^{-\gamma}\), \(\gamma\in(-1,1)\). Then, the dissipation is non-effective and there exists a parabolic effect in this case. One may also see that the regularity of the data in [1] is given by \((u_{0},u_{1})\in H^{|\beta|}\times H^{|\beta|-2}\) (see the decay estimate and more details in Remark 20 of [1])._
Proof of Theorem 4.1.: We write the equation in (7) in the following form: \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v-\Big{(}1-\frac{g^{\prime}(t)}{2}\Big{)}|\xi|^{2}v=0.\] We introduce \(h=h(t)=1-\dfrac{g^{\prime}(t)}{2}\geq 1\). The function \(h\) is bounded as \(t\) tends to infinity. We consider \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v-h(t)|\xi|^{2}v=0.\] Due to condition **(D1)** the function \(f=f(t)=\dfrac{g(t)^{2}}{h(t)}\) is monotonically decreasing. Thus, we have a separating line \(t_{\xi}\) as solution of the implicit equation \(\dfrac{g(t)^{2}}{4h(t)}|\xi|^{2}=1\). Let us divide the extended phase space into the following zones:
* hyperbolic zone: \[Z_{\text{hyp}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\geq\frac{1}{4}\Big{\}},\]
* reduced zone: \[Z_{\text{red}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:-\frac{1}{4}\leq 1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq\frac{1}{4}\Big{\}},\]
* elliptic zone: \[Z_{\text{ell}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq-\frac{1}{4}\Big{\}}.\]
We denote the separating line between elliptic and reduced zone as \(t_{\xi_{1}}\) and that between hyperbolic zone and reduced zone as \(t_{\xi_{2}}\). In Figure 3, the blue dashed line denotes the separating line between the hyperbolic and the elliptic region.
### Considerations in the hyperbolic zone \(Z_{\text{hyp}}\)
**Proposition 4.7**.: _The following estimates hold for all \(t_{\xi_{2}}\leq s\leq t\), where \(t_{\xi_{2}}=0\) for small frequencies:_
\[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \lesssim\exp\bigg{(}-\frac{|\xi|^{2}}{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\]
\[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim\exp\bigg{(}-\frac{|\xi|^{2}}{2}\int_{s}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(s,\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\]
Proof.: Let us consider the equation \[v_{tt}+\bigg{(}\underbrace{h(t)|\xi|^{2}-\frac{g(t)^{2}}{4}|\xi|^{4}}_{=:p^{2}(t,\xi)}\bigg{)}v=0.\] We define the micro-energy \[V(t,\xi)=\big{(}p(t,\xi)v,D_{t}v\big{)}^{\text{T}},\qquad p(t,\xi):=\sqrt{h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}},\qquad p(t,\xi)\approx\sqrt{h(t)}|\xi|\ \ \text{for}\ \ (t,\xi)\in Z_{\text{hyp}}.\] Thus, we have to apply tools from hyperbolic WKB-analysis.
Transformation to a system of first order from (5) gives \[D_{t}V=\left(\begin{array}{cc}0&p(t,\xi)\\ p(t,\xi)&0\end{array}\right)V+\left(\begin{array}{cc}\frac{D_{t}p(t,\xi)}{p( t,\xi)}&0\\ 0&0\end{array}\right)V.\] Using \(V=MV^{(0)}\) with \(M=\begin{pmatrix}1&-1\\ 1&1\end{pmatrix}\), then after the first step of diagonalization we obtain \[D_{t}V^{(0)}=\big{(}\mathcal{D}(t,\xi)+\mathcal{R}(t,\xi)\big{)}V^{(0)},\] where \[\mathcal{D}(t,\xi)=\left(\begin{array}{cc}p(t,\xi)&0\\ 0&p(t,\xi)\end{array}\right)\qquad\text{and}\qquad\mathcal{R}(t,\xi)=\frac{1}{ 2}\left(\begin{array}{cc}\frac{D_{t}p(t,\xi)}{p(t,\xi)}&-\frac{D_{t}p(t,\xi) }{p(t,\xi)}\\ -\frac{D_{t}p(t,\xi)}{p(t,\xi)}&\frac{D_{t}p(t,\xi)}{p(t,\xi)}\end{array} \right).\] Figure 3: Sketch of the zones for the case \(g=g(t)\) is non-integrable and decreasing After the first step of diagonalization procedure the entries of the matrix \(\mathcal{R}(t,\xi)\) are uniformly integrable over the hyperbolic zone \(Z_{\mathrm{hyp}}\). We can write \(V^{(1)}(t,\xi)=E_{1}(t,s,\xi)V^{(1)}(s,\xi)\), where \(E_{1}=E_{1}(t,s,\xi)\) is the fundamental solution, that is, the solution of the system \[D_{t}E_{1}(t,s,\xi)=(\mathcal{D}(t,\xi)+\mathcal{R}(t,\xi))E_{1}(t,s,\xi), \quad E_{1}(s,s,\xi)=I,\] for all \(t\geq s\) and \((s,\xi)\in Z_{\mathrm{hyp}}\). Straightforward calculations imply (see Proposition 3.1 of [7]) \[|E_{1}(t,s,\xi)|\leq C\quad\text{for all}\quad t\geq s\quad\text{and}\quad(s, \xi)\in Z_{\mathrm{hyp}}.\] Finally, we obtain the following estimate for the transformed micro-energy \(V^{(1)}(t,\xi)\) in the hyperbolic zone: \[|V^{(1)}(t,\xi)|\lesssim|V^{(1)}(s,\xi)|,\qquad\left|\left(\begin{array}{c} p(t,\xi)v(t,\xi)\\ D_{t}v(t,\xi)\end{array}\right)\right|\lesssim\left|\left(\begin{array}{c}p( s,\xi)v(s,\xi)\\ D_{t}v(s,\xi)\end{array}\right)\right|\] uniformly for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\mathrm{hyp}}\). From the backward transformation \[\hat{u}(t,\xi)=\exp\bigg{(}-\frac{|\xi|^{2}}{2}\int_{0}^{t}g(\tau)d\tau\bigg{)} v(t,\xi),\] and the equivalence \(p(t,\xi)\approx\sqrt{h(t)}|\xi|\), where \(h=h(t)\) is bounded for large time \(t\), gives the desired estimates. ### Considerations in the reduced zone \(Z_{\mathrm{red}}\) **Proposition 4.8**.: _The following estimates hold for all \((t,\xi),(s,\xi)\in Z_{\mathrm{red}}\) with \(s\leq t\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{2 }}{6}\int_{s}^{t}g(\tau)d\tau\bigg{)}\big{(}|\xi|^{|\beta|}|\hat{u}(s,\xi)|+| \xi|^{|\beta|-1}|\hat{u}_{t}(s,\xi)|\big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{| \xi|^{2}}{6}\int_{s}^{t}g(\tau)d\tau\bigg{)}\big{(}|\xi|^{|\beta|+1}|\hat{u}(s, \xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\big{)}\quad\text{for}\quad|\beta| \geq 0.\] Proof.: In the reduced zone we have \(\sqrt{h(t)}|\xi|\approx\frac{g(t)}{2}|\xi|^{2}\). 
Employing the transformed equation \[v_{tt}+\bigg{(}h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\bigg{)}v=0,\] we can estimate \[\Big{|}h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\Big{|}\leq\frac{g^{2}(t)}{12 }|\xi|^{4}\qquad\text{taking account of}\qquad\frac{g^{2}(t)}{5}|\xi|^{4}\leq h(t )|\xi|^{2}\leq\frac{g^{2}(t)}{3}|\xi|^{4}.\] Thus, we define the micro-energy \[V(t,\xi)=\Big{(}\frac{g(t)}{4}|\xi|^{2}v,D_{t}v\Big{)}^{\mathrm{T}}\quad\text {for all}\quad t\geq t_{\xi_{2}}\quad\text{and}\quad(t,\xi)\in Z_{\mathrm{ red}}.\] Then, we get the following system of first order: \[D_{t}V(t,\xi)=\underbrace{\left(\begin{array}{cc}\frac{D_{t}g(t)}{g(t)}& \frac{g(t)}{4}|\xi|^{2}\\ \frac{h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}}{\frac{g(t)}{4}|\xi|^{2}}&0 \end{array}\right)}_{A_{V}(t,\xi)}V(t,\xi). \tag{19}\] To estimate the entries of this matrix we will use \[\frac{\Big{|}h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\Big{|}}{\frac{g(t)}{4} |\xi|^{2}}\leq\frac{g(t)}{3}|\xi|^{2}.\] **Corollary 4.9**.: _The fundamental solution \(E=E(t,s,\xi)\) to (19) for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\text{red}}\) satisfies_ \[|E(t,s,\xi)|\leq\exp\left(\frac{|\xi|^{2}}{3}\int_{s}^{t}g(\tau)d\tau\right).\] From the backward transformation and the equivalence \(\frac{g(t)}{2}|\xi|^{2}\approx h(t)|\xi|\) in \(Z_{\text{red}}\), we may conclude the desired statements of the proposition. ### Considerations in the elliptic zone \(Z_{\text{ell}}\) To estimate \(-g^{\prime}(t)\) we use the definition of the elliptic zone. We get \[-\frac{g^{\prime}(t)}{2}\leq 1-\frac{g^{\prime}(t)}{2}=h(t)\leq\frac{g(t)^{2}| \xi|^{2}}{5}.\] To estimate \(g^{\prime\prime}(t)\) we use assumption **(D3)**. Let us define the following classes of symbols related to the properties of \(g=g(t)\) and \(Z_{\text{ell}}\). **Definition 4.10**.: _A function \(f=f(t,\xi)\) belongs to the elliptic symbol class \(S^{\ell}_{\text{ell}}\{m_{1},m_{2},m_{3}\}\) if it holds_ \[|D^{k}_{t}f(t,\xi)|\leq C_{k}|\xi|^{m_{1}}g(t)^{m_{2}}\bigg{(}\frac{1}{1+t} \bigg{)}^{m_{3}+k}\] _for all \((t,\xi)\in Z_{\text{ell}}\) and all \(k\leq\ell\)._ The further considerations are basing on the following rules of the symbolic calculus. **Proposition 4.11**.: _The following statements are true:_ * \(S^{\ell}_{\text{ell}}\{m_{1},m_{2},m_{3}\}\) _is a vector space for all nonnegative integers_ \(\ell\)_;_ * \(S^{\ell}_{\text{ell}}\{m_{1},m_{2}+k,m_{3}\}\hookrightarrow S^{\ell}_{\text{ell }}\{m_{1},m_{2},m_{3}\}\) _for_ \(k\geq 0\)_;_ * \(S^{\ell}_{\text{ell}}\{m_{1},m_{2},m_{3}\}\cdot S^{\ell}_{\text{ell}}\{m^{ \prime}_{1},m^{\prime}_{2},m^{\prime}_{3}\}\hookrightarrow S^{\ell}_{\text{ell }}\{m_{1}+m^{\prime}_{1},m_{2}+m^{\prime}_{2},m_{3}+m^{\prime}_{3}\}\)_;_ * \(D^{k}_{t}S^{\ell}_{\text{ell}}\{m_{1},m_{2},m_{3}\}\hookrightarrow S^{\ell-k} _{\text{ell}}\{m_{1},m_{2},m_{3}+k\}\) _for all nonnegative integers_ \(\ell\) _with_ \(k\leq\ell\)_._ Let us turn to the equation (4) in the following form: \[D^{2}_{t}u-|\xi|^{2}u-ig(t)|\xi|^{2}D_{t}u=0. \tag{20}\] If we introduce the micro-energy \(U=U(t,\xi)\) in \(Z_{\text{ell}}\) by \(U=(|\xi|\hat{u},D_{t}\hat{u})^{\text{T}}\), then the corresponding first-order system of (20) leads to \[D_{t}U=\underbrace{\left(\begin{array}{cc}0&|\xi|\\ |\xi|&ig(t)|\xi|^{2}\end{array}\right)}_{A(t,\xi)}U. 
\tag{21}\] **Proposition 4.12**.: _The following estimates hold for the solutions to (21) for all \(t\in[0,t_{\xi_{1}}]\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-C\int_{0}^{t} \frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-C\int_{0} ^{t}\frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+| \xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}+\exp\bigg{(}-\frac{|\xi|^{2}}{2}\int_{ 0}^{t}g(\tau)d\tau\bigg{)}|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\quad\text{for} \quad|\beta|\geq 0.\] Proof.: The proof is divided into two steps. **Step 1.**_A straight-forward estimate for the fundamental solution \(E=E(t,s,\xi)\)_ **Proposition 4.13**.: _The fundamental solution \(E=E(t,s,\xi)\) to (21) satisfies for all \(t\geq s\) and \((t,\xi)\), \((s,\xi)\in Z_{\text{ell}}\) the following estimates:_ \[\left(\begin{array}{cc}|E^{(11)}(t,s,\xi)|&|E^{(12)}(t,s,\xi)|\\ |E^{(21)}(t,s,\xi)|&|E^{(22)}(t,s,\xi)|\end{array}\right)\lesssim\exp\bigg{(}- C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\bigg{)}\left(\begin{array}{cc}1& \frac{1}{g(s)|\xi|}\\ g(t)|\xi|&\frac{g(t)}{g(s)}\end{array}\right),\] _where the constant \(C\) is independent of \((s,\xi),(t,\xi)\in Z_{\text{ell}}\)._ Proof.: Let us carry out the first step of diagonalization. The eigenvalues of the matrix \(A=A(t,\xi)\) are \[\lambda_{k}(t,\xi)=\frac{ig(t)|\xi|^{2}+(-1)^{k-1}i\sqrt{g^{2}(t)|\xi|^{4}-4| \xi|^{2}}}{2},\quad k=1,2.\] In the further calculations we use the following properties of \(\lambda_{1}(t,\xi)\) and \(\lambda_{2}(t,\xi)\). **Lemma 4.14**.: _It holds_ 1. \(\Im\lambda_{1}(t,\xi)+\Im\lambda_{2}(t,\xi)=g(t)|\xi|^{2},\quad\Im\lambda_{1 }(t,\xi)\Im\lambda_{2}(t,\xi)=|\xi|^{2},\)__ 2. \(\Im\lambda_{1}(t,\xi)\geq\Im\lambda_{2}(t,\xi)\geq 0,\quad|\lambda_{1}(t,\xi )|\geq|\lambda_{2}(t,\xi)|,\)__ 3. \(\frac{g(t)}{2}|\xi|^{2}\leq\Im\lambda_{1}(t,\xi)\leq g(t)|\xi|^{2},\quad \frac{1}{g(t)}\leq\Im\lambda_{2}(t,\xi)\leq\frac{2}{g(t)}.\)__ Then, we introduce the corresponding matrix of eigenvectors \(M=M(t,\xi)\) and \(M^{-1}=M^{-1}(t,\xi)\) as \[M(t,\xi):=\left(\begin{array}{cc}1&1\\ \lambda_{1}(t,\xi)|\xi|^{-1}&\lambda_{2}(t,\xi)|\xi|^{-1}\end{array}\right), \qquad M^{-1}(t,\xi):=\frac{i}{\sqrt{g^{2}(t)|\xi|^{2}-4}}\left(\begin{array} []{cc}\lambda_{2}(t,\xi)|\xi|^{-1}&-1\\ -\lambda_{1}(t,\xi)|\xi|^{-1}&1\end{array}\right).\] Setting \(U^{(1)}(t,\xi):=M^{-1}(t,\xi)U(t,\xi)\) for all \(t\geq s\) and \((t,\xi)\in Z_{\text{ell}}\) we obtain the following system: \[D_{t}U^{(1)}(t,\xi)=M^{-1}(t,\xi)A(t,\xi)M(t,\xi)U^{(1)}(t,\xi)-M^{-1}(t,\xi) D_{t}M(t,\xi)U^{(1)}(t,\xi). \tag{22}\] Straight-forward calculations imply \[\mathcal{D}(t,\xi) =M^{-1}(t,\xi)A(t,\xi)M(t,\xi)=\left(\begin{array}{cc}\lambda _{1}(t,\xi)&0\\ 0&\lambda_{2}(t,\xi)\end{array}\right), \tag{23}\] \[\mathcal{R}(t,\xi) =M^{-1}(t,\xi)D_{t}M(t,\xi)=-\frac{1}{2}\left(\begin{array}{cc }a+b&a-b\\ -a-b&-a+b\end{array}\right), \tag{24}\] where \[a:=\frac{g^{\prime}(t)|\xi|^{2}}{i\sqrt{g^{2}(t)|\xi|^{4}-4|\xi|^{2}}},\qquad b :=\frac{g(t)g^{\prime}(t)|\xi|^{4}}{i(g^{2}(t)|\xi|^{4}-4|\xi|^{2})}.\] The system (22) has diagonal principal part \(\mathcal{D}\in S^{2}_{\text{ell}}[2,1,0]\) with the remainder \(\mathcal{R}\in S^{1}_{\text{ell}}[0,0,1]\). We carry out one more step of diagonalization procedure. 
Let \[N^{(1)}:=\left(\begin{array}{cc}0&\frac{R_{12}}{\lambda_{2}- \lambda_{1}}\\ \frac{R_{21}}{\lambda_{1}-\lambda_{2}}&0\end{array}\right)\in S^{1}_{\text{ell }}[-2,-1,1]. \tag{25}\] Now we set \(N_{1}(t,\xi):=I+N^{(1)}(t,\xi)\). In order to prove the invertibility of \(N_{1}\) we need to estimate the entries of \(N^{(1)}\). We have \[\frac{R_{21}}{\lambda_{1}-\lambda_{2}} =-\frac{1}{2}\frac{g^{\prime}(t)}{g^{2}(t)|\xi|^{2}-4}-\frac{1}{2} \frac{g(t)g^{\prime}(t)|\xi|}{(g^{2}(t)|\xi|^{2}-4)^{\frac{3}{2}}},\] \[\frac{R_{12}}{\lambda_{2}-\lambda_{1}} =-\frac{1}{2}\frac{g^{\prime}(t)}{g^{2}(t)|\xi|^{2}-4}+\frac{1}{2} \frac{g(t)g^{\prime}(t)|\xi|}{(g^{2}(t)|\xi|^{2}-4)^{\frac{3}{2}}}.\] From the definition of \(Z_{\rm ell}\), we have \(g^{2}(t)|\xi|^{2}\geq 5h(t)\). This implies the following estimates: \[\frac{1}{g^{2}(t)|\xi|^{2}}\leq\frac{1}{5h(t)}\qquad\text{and}\qquad\frac{1}{ 1-\frac{4}{g^{2}(t)|\xi|^{2}}}\leq\frac{1}{1-\frac{4}{5h(t)}}. \tag{26}\] Thus, using the estimates in (26) we get the following estimates: \[-\frac{1}{2}\frac{g^{\prime}(t)}{g^{2}(t)|\xi|^{2}-4}=-\frac{g^{\prime}(t)}{2 }\frac{1}{g^{2}(t)|\xi|^{2}\Big{(}1-\frac{4}{g^{2}(t)|\xi|^{2}}\Big{)}}\leq- \frac{g^{\prime}(t)}{2}\frac{1}{5h(t)\Big{(}1-\frac{4}{5h(t)}\Big{)}}=-\frac{ g^{\prime}(t)}{2-5g^{\prime}(t)}\leq\frac{1}{5}.\] Using the previous estimate, we have \[-\frac{1}{2}\frac{g(t)g^{\prime}(t)|\xi|}{(g^{2}(t)|\xi|^{2}-4)^{\frac{3}{2}} }=-\frac{g^{\prime}(t)}{2}\frac{1}{g^{2}(t)|\xi|^{2}\Big{(}1-\frac{4}{g^{2}(t) |\xi|^{2}}\Big{)}^{\frac{3}{2}}}\leq-\frac{g^{\prime}(t)}{2-5g^{\prime}(t)} \frac{1}{\Big{(}1-\frac{4}{5h(t)}\Big{)}^{\frac{1}{2}}}\leq\frac{1}{5}\frac{1 }{\sqrt{1-\frac{4}{5}}}=\frac{1}{\sqrt{5}}.\] Therefore, the previous two inequalities guarantee that the matrix \(N_{1}=N_{1}(t,\xi)\) is invertible. Let \(\mathcal{R}_{1}:=-N_{1}^{-1}((D_{t}-\mathcal{R})N^{(1)}+N^{(1)}F^{(1)})\). Then, in \(Z_{\rm ell}\) we have \(\mathcal{R}_{1}\in S^{0}_{\rm ell}\{-2,1,2\}\) such that the following identity holds: \[(D_{t}-\mathcal{D}(t,\xi)-\mathcal{R}(t,\xi))N_{1}(t,\xi)=N_{1}(t,\xi)(D_{t}- \mathcal{D}(t,\xi)-F^{(1)}(t,\xi)-\mathcal{R}_{1}(t,\xi)),\] where \(F^{(1)}=\operatorname{diag}\mathcal{R}\) and, \(\mathcal{D}\) and \(\mathcal{R}\) are defined in (23) and (24), respectively. The representation of \(\mathcal{R}_{1}\) follows immediately from the fact that \([N^{(1)},\mathcal{D}]=\mathcal{R}-F^{(1)}\). By taking into consideration the matrices \(F^{(1)}\in S^{1}_{\rm ell}\{0,0,1\}\), \(N^{(1)}\in S^{1}_{\rm ell}\{-2,-1,1\}\) and \(\mathcal{R}\in S^{1}_{\rm ell}\{0,0,1\}\), and using the property of symbol classes from Proposition 4.11 we may conclude that \(\mathcal{R}_{1}\in S^{0}_{\rm ell}\{-2,-1,2\}\). Let \(U^{(2)}(t,\xi):=N_{1}^{-1}(t,\xi)M^{-1}(t,\xi)U(t,\xi)\), then we obtain the following equivalent problem to (21) for \(U^{(2)}(t,\xi)\): \[\big{(}D_{t}-\mathcal{D}(t,\xi)-F^{(1)}(t,\xi)-\mathcal{R}_{1}(t,\xi)\big{)}U^ {(2)}(t,\xi)=0,\] for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\rm ell}\). Thus, we have obtained in \(Z_{\rm ell}\) the diagonalization of the system (21) modulo remainder \(\mathcal{R}_{1}\in S^{0}_{\rm ell}\{-2,-1,2\}\). Namely, the entries of the matrix \(\mathcal{R}_{1}=\mathcal{R}_{1}(t,\xi)\) are uniformly integrable over the elliptic zone \(Z_{\rm ell}\). For this reason, the matrix \(\mathcal{R}_{1}=\mathcal{R}_{1}(t,\xi)\) belongs to \(L^{1}_{\rm loc}(Z_{\rm ell})\). 
Hence, we can find the solution to the system \[\big{(}D_{t}-\mathcal{D}(t,\xi)-F^{(1)}(t,\xi)-\mathcal{R}_{1}(t,\xi)\big{)}U^ {(2)}(t,\xi)=0.\] We can write \(U^{(2)}(t,\xi)=E_{2}(t,s,\xi)U^{(2)}(s,\xi)\), where \(E_{2}=E_{2}(t,s,\xi)\) is the fundamental solution to the system \[D_{t}E_{2}(t,s,\xi)=\big{(}\mathcal{D}(t,\xi)+F^{(1)}(t,\xi)+\mathcal{R}_{1}(t, \xi)\big{)}E_{2}(t,s,\xi),\quad E_{2}(s,s,\xi)=I,\] for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\rm ell}\). First, we estimate \(E_{d}=E_{d}(t,s,\xi)\) as the fundamental solution of the diagonal part of this system, that is, \[D_{t}E_{d}(t,s,\xi)=\big{(}\mathcal{D}(t,\xi)+F^{(1)}(t,\xi)\big{)}E_{d}(t,s, \xi),\quad E_{d}(s,s,\xi)=I,\] for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\rm ell}\). Thus, \[E_{d}^{(11)}(t,s,\xi) =\exp\bigg{\{}-\frac{1}{2}\int_{s}^{t}\bigg{(}1+\frac{g^{\prime}( \tau)|\xi|^{2}}{g^{2}(\tau)|\xi|^{4}-4|\xi|^{2}}\bigg{)}\bigg{(}\sqrt{g^{2}( \tau)|\xi|^{4}-4|\xi|^{2}}+g(\tau)|\xi|^{2}\bigg{)}d\tau\bigg{\}},\] \[E_{d}^{(22)}(t,s,\xi) =\exp\bigg{\{}\frac{1}{2}\int_{s}^{t}\bigg{(}1+\frac{g^{\prime}( \tau)|\xi|^{2}}{g^{2}(\tau)|\xi|^{4}-4|\xi|^{2}}\bigg{)}\bigg{(}\sqrt{g^{2}( \tau)|\xi|^{4}-4|\xi|^{2}}-g(\tau)|\xi|^{2}\bigg{)}d\tau\bigg{\}},\] \[E_{d}^{(12)}(t,s,\xi) =E_{d}^{(21)}(t,s,\xi)=0.\] **Proposition 4.15**.: _We have the following estimate for all \((t,\xi),(s,\xi)\in Z_{\rm ell}\):_ \[|E_{d}(t,s,\xi)|\lesssim\exp\Big{(}-C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\] _with a positive constant \(C\) which is independent of \((t,\xi),(s,\xi)\in Z_{\rm ell}\)._ Proof.: The estimate for \(E_{d}=E_{d}(t,s,\xi)\) will be determined by the estimate of \(E_{d}^{(22)}=E_{d}^{(22)}(t,s,\xi)\). By applying the definition of the elliptic zone and Lemma 4.14, we get the following estimates: \[\frac{1}{2}\Big{(}\sqrt{g^{2}(t)|\xi|^{4}-4|\xi|^{2}}-g(t)|\xi|^{2}\Big{)} \leq-\frac{1}{g(t)}\qquad\text{and}\qquad\frac{|g^{\prime}(\tau)|\xi|^{2}}{g^{ 2}(\tau)|\xi|^{4}-4|\xi|^{2}}\leq\frac{2}{5}\quad\text{by (\ref{eq:E_d}).}\] This completes the proof. The fundamental solution \(E_{2}=E_{2}(t,s,\xi)\) satisfies \[\big{(}D_{t}-\mathcal{D}(t,\xi)-F^{(1)}(t,\xi)-\mathcal{R}_{1}(t,\xi)\big{)}E _{2}(t,s,\xi)=0,\quad E_{2}(s,s,\xi)=I\] for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\rm ell}\). Following the same procedure as in the proof of Proposition 2.5, we have \[E_{2}(t,s,\xi) =\exp\Big{(}i\int_{s}^{t}\Big{(}\mathcal{D}(\tau,\xi)+F^{(1)}(\tau,\xi)\Big{)}d\tau\Big{)}E_{2}(s,s,\xi)\] \[\quad+i\int_{s}^{t}\exp\Big{(}i\int_{\theta}^{\pi}\Big{(} \mathcal{D}(\tau,\xi)+F^{(1)}(\tau,\xi)\Big{)}d\tau\Big{)}\mathcal{R}_{1}( \theta,\xi)E_{2}(t,s,\xi)d\theta.\] Then, using the expression for \(E_{d}^{(22)}=E_{d}^{(22)}(t,s,\xi)\) we introduce the weight \[\beta(t,\xi)=\frac{1}{2}\Big{(}1+\frac{g^{\prime}(t)|\xi|^{2}}{g^{2}(t)|\xi|^ {4}-4|\xi|^{2}}\Big{)}\bigg{(}\sqrt{g^{2}(t)|\xi|^{4}-4|\xi|^{2}}-g(t)|\xi|^{2 }\Big{)}. \tag{27}\] Let us define \[Q_{\rm ell}=Q_{\rm ell}(t,s,\xi):=\exp\bigg{(}-\int_{s}^{t}\beta(\tau,\xi)d \tau\bigg{)}E_{2}(t,s,\xi). 
\tag{28}\] Then, we get \[Q_{\rm ell}(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\Big{(}i\mathcal{D}(\tau,\xi)+iF^{(1)}( \tau,\xi)-\beta(\tau,\xi)I\Big{)}d\tau\bigg{\}}\] \[\quad+\int_{s}^{t}\exp\Big{\{}\int_{\theta}^{t}\Big{(}i\mathcal{D }(\tau,\xi)+iF^{(1)}(\tau,\xi)-\beta(\tau,\xi)I\Big{)}d\tau\Big{\}}\mathcal{R} _{1}(\theta,\xi)\mathcal{Q}_{\rm ell}(\theta,s,\xi)\,d\theta.\] Furthermore, \[H(t,s,\xi) =\exp\bigg{\{}\int_{s}^{t}\big{(}i\mathcal{D}(\tau,\xi)+iF^{(1)}( \tau,\xi)-\beta(\tau,\xi)I\big{)}d\tau\bigg{\}}\] \[=\text{diag}\bigg{(}\exp\bigg{\{}-\frac{1}{2}\int_{s}^{t}\Big{(} \sqrt{g^{2}(\tau)|\xi|^{4}-|\xi|^{2}}+\frac{g^{\prime}(\tau)|\xi|^{2}}{\sqrt{ g^{2}(\tau)|\xi|^{4}-4|\xi|^{2}}}\Big{)}d\tau\bigg{\}},\,1\bigg{\}}.\] Hence, the matrix \(H=H(t,s,\xi)\) is uniformly bounded for \((s,\xi),(t,\xi)\in Z_{\rm ell}\). Taking account of \(\mathcal{R}_{1}\in S^{0}_{\rm ell}(-2,-1,2]\), the matrix \(Q_{\rm ell}=Q_{\rm ell}(t,s,\xi)\) which is given by the matrix representation, is uniformly bounded in \(Z_{\rm ell}\). From the last consideration we may conclude \[\bigg{(}\begin{array}{cc}|E_{2}^{(11)}(t,s,\xi)|&|E_{2}^{(12)}(t,s,\xi)|\\ |E_{2}^{(21)}(t,s,\xi)|&|E_{2}^{(22)}(t,s,\xi)|\end{array}\bigg{)}\lesssim\exp \bigg{(}-C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\bigg{)}\bigg{(}\begin{array}{ cc}1&1\\ 1&1\end{array}\bigg{)}\] for all \(t\geq s\) and \((s,\xi),(t,\xi)\in Z_{\rm ell}\). From \(U^{(2)}(t,\xi)=N_{1}^{-1}(t,\xi)M^{-1}(t,\xi)U(t,\xi)\) the backward transformation gives the representation \[E(t,s,\xi)=M(t,\xi)N_{1}(t,\xi)E_{2}(t,s,\xi)N_{1}^{-1}(s,\xi)M^{-1}(s,\xi).\] Due to Proposition 4.15 and the uniform bounded behavior of \(Q_{\rm ell}\) and \(N_{1}\) we have \[(|E(t,s,\xi)|) \lesssim|M(t,\xi)|\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right)\exp\left(-\;C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\right) \left(\begin{array}{cc}1&1\\ 1&1\end{array}\right)|M^{-1}(s,\xi)|\] \[\lesssim\exp\left(-\;C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\right) \left(\begin{array}{cc}1&\frac{1}{g(s)|\xi|}\\ g(t)|\xi|&\frac{g(t)}{g(s)}\end{array}\right),\] where we used \(|\lambda_{1}(t,\xi)|\approx g(t)|\xi|^{2}\), \(|\lambda_{2}(t,\xi)|\approx\frac{1}{g(t)}\), \(|\det M(t,\xi)|\approx g(t)|\xi|\), the definition of \(Z_{\rm ell}\) and the fact that \(g\) is decreasing for \(s\leq t\). These estimates give the desired statements and the proof is completed. **Step 2.**_A refined estimate for the fundamental solution \(E=E(t,s,\xi)\)_ **Proposition 4.16**.: _The fundamental solution \(E=E(t,s,\xi)\) satisfies for all \((t,\xi)\), \((s,\xi)\in Z_{\rm ell}\) the following estimates:_ \[\left(\begin{array}{cc}|E^{(11)}(t,s,\xi)|&|E^{(12)}(t,s,\xi)|\\ |E^{(21)}(t,s,\xi)|&|E^{(22)}(t,s,\xi)|\end{array}\right)\] \[\lesssim\exp\left(-\;C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\right) \left(\begin{array}{cc}1&\frac{1}{g(s)|\xi|}\\ \frac{1}{g(t)|\xi|}&\frac{1}{g(t)g(s)|\xi|^{2}}\end{array}\right)+\exp\left( -\;|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\right)\left(\begin{array}{cc}0&0\\ 0&1\end{array}\right),\] _where the constant \(C\) is independent of \((s,\xi),(t,\xi)\in Z_{\rm ell}\)._ Proof.: Using conditions **(D1)**, **(D2)** and **(D3)**, we may follow the techniques of the proof of Lemma 3.9 in [9]. 
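For orientation only (this computation is not used in the proofs), let us quantify the factor \(\exp\big{(}-C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\big{)}\) from Propositions 4.13 and 4.16 for the model coefficient \(g(t)=(1+t)^{-\gamma}\), \(\gamma\in(0,1]\), of Example 4.3. There we have
\[\int_{0}^{t}\frac{1}{g(\tau)}d\tau=\int_{0}^{t}(1+\tau)^{\gamma}d\tau=\frac{(1+t)^{1+\gamma}-1}{1+\gamma},\]
so this factor decays faster than any polynomial rate in \(t\). Consequently, the decay stated in Theorem 4.1 and Example 4.3 is finally determined by the potential type factor \(\big{(}1+\int_{0}^{t}g(\tau)d\tau\big{)}^{-\frac{|\beta|}{2}}\) coming from the small frequencies.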
**Corollary 4.17**.: _We have the following representation of solutions to (21) for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\rm ell}\):_ \[\left(\begin{array}{c}|\xi|\hat{u}(t,\xi)\\ D_{t}\hat{u}(t,\xi)\end{array}\right)=\exp\left(\;\int_{s}^{t}\beta(s,\xi)ds \right)M(t,\xi)N_{1}(t,\xi)Q_{\rm ell}(t,s,\xi)N_{1}^{-1}(s,\xi)M^{-1}(s,\xi) \left(\begin{array}{c}|\xi|\hat{u}(s,\xi)\\ D_{t}\hat{u}(s,\xi)\end{array}\right),\] _where from (28) we used \(E_{2}(t,s,\xi)=\exp\left(\;\int_{s}^{t}\beta(\tau,\xi)d\tau\right)Q_{\rm ell}(t, s,\xi)\) with \(\beta=\beta(t,\xi)\) in (27)._ Taking account of the representation of solutions from the previous corollary with \(s=0\) and the refined estimates from Proposition 4.16, it follows \[|\xi|\hat{u}(t,\xi)| \lesssim\exp\bigg{(}-\;C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\bigg{)} \Big{(}|\xi|\hat{u}_{0}(\xi)|+|\hat{u}_{1}(\xi)|\Big{)},\] \[|\hat{u}_{t}(t,\xi)| \lesssim\exp\bigg{(}-\;C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\bigg{)} \Big{(}|\xi|\hat{u}_{0}(\xi)|+|\hat{u}_{1}(\xi)|\Big{)}+\exp\bigg{(}-|\xi|^{2} \int_{0}^{t}g(\tau)d\tau\bigg{)}|\hat{u}_{1}(\xi)|.\] This completes the proof of Proposition 4.12. ### Gluing procedure For large frequencies we have to glue the statements from Proposition 4.7, Proposition 4.8 and Proposition 4.12. We are able to extend the estimates from \(Z_{\rm hyp}\) in Proposition 4.7 to \(Z_{\rm red}\) in Proposition 4.8. For this reason, we obtain the following statement. **Corollary 4.18**.: _The following estimates hold for all \(t\geq s\geq t_{\xi_{1}}\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{2}} {6}\int_{s}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{ |\beta|-1}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi |^{2}}{6}\int_{s}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(s, \xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(s,\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\] Finally, we have to glue estimates from Corollary 4.18 and estimates from Proposition 4.12 for \(t=t_{\xi_{1}}\). **Corollary 4.19**.: _The following estimates hold for all \(t\in[t_{\xi_{1}},\infty)\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{ 2}}{6}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\exp\bigg{(}-C\int_{0}^{t_{ \xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi) |+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\bigg{(}-\frac{|\xi|^{2}}{6}\int_{0}^ {t}g(\tau)d\tau\bigg{)}|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\quad\text{for}\quad| \beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{| \xi|^{2}}{6}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\exp\bigg{(}-C\int_{0}^{t _{\xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}( \xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\bigg{(}-\frac{|\xi|^{2}}{6}\int_{0}^ {t}g(\tau)d\tau\bigg{)}|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\quad\text{for}\quad| \beta|\geq 0.\] Proof.: Let us begin to estimate \(|\xi|^{|\beta|}\hat{u}(t,\xi)\). 
The statement of Corollary 4.18 implies \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{2}}{6}\int_{t _{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}}, \xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}.\] Using the estimates for \(|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|\) and \(|\xi|^{|\beta|}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\) from Proposition 4.12 we have \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{ 2}}{6}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\exp\bigg{(}-C\int_{0}^{t_{ \xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi) |+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\bigg{(}-\frac{|\xi|^{2}}{6}\int_{0 }^{t}g(\tau)d\tau\bigg{)}|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{| \xi|^{2}}{6}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\exp\bigg{(}-C\int_{0}^{t _{\xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}( \xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\bigg{(}-\frac{|\xi|^{2}}{6}\int_{0 }^{t}g(\tau)d\tau\bigg{)}|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|.\] This completes the proof. For small frequencies we may use the estimates from Proposition 4.7, because \(t_{\xi_{2}}=0\). **Corollary 4.20**.: _The following estimates hold for all \(t\in[0,\infty)\) and \(0<|\xi|\leq\frac{\sqrt{3\hat{u}(0)}}{g(0)}\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^{ 2}}{2}\int_{0}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi) |+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\bigg{(}-\frac{|\xi|^ {2}}{2}\int_{0}^{t}g(\tau)d\tau\bigg{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}( \xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\] ### Energy estimates To derive the corresponding energy estimates for large frequencies, using Corollary 4.19, we need to estimate the term \[\exp\bigg{(}-C\int_{0}^{\tau_{\xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\exp\bigg{(}- \frac{|\xi|^{2}}{6}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}.\] **Lemma 4.21**.: _To a given positive constant \(C\) we find a sufficiently small positive constant \(C_{1}\) such that the following estimate holds for \(t\geq t_{\xi_{1}}\):_ \[\exp\bigg{(}-C\int_{0}^{t_{\xi_{1}}}\frac{1}{g(\tau)}d\tau\bigg{)}\exp\bigg{(} -C_{1}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau\bigg{)}\lesssim\exp\Big{(}-C_{1}G(t) \Big{)},\] _where \(G=G(t)\) is defined as follows:_ \[G(t):=1+\int_{0}^{t}g(\tau)d\tau. \tag{29}\] Proof.: Using the decreasing behavior of \(g=g(t)\) implies for all \(t\geq 0\) with a suitable \(C_{1}>0\) the relations \[-C\frac{1}{g(t)}\leq-C_{1}g(t),\qquad\text{hence},\qquad-C\int_{0}^{\tau_{ \xi_{1}}}\frac{1}{g(\tau)}d\tau\leq-C_{1}\int_{0}^{\tau_{\xi_{1}}}g(\tau)d\tau.\] Considering the definition of \(G=G(t)\) we get \[-C\int_{0}^{\tau_{\xi_{1}}}\frac{1}{g(\tau)}d\tau\leq C_{1}\Big{(}1-G(t_{\xi_ {1}})\Big{)}\qquad\text{and}\qquad-C_{1}\int_{t_{\xi_{1}}}^{t}g(\tau)d\tau=C_{ 1}\Big{(}G(t_{\xi_{1}})-G(t)\Big{)}.\] This implies what we wanted to show. From Corollary 4.19 and Lemma 4.21 we obtain for large frequencies the following statement about "an exponential type decay" for large frequencies. 
**Corollary 4.22**.: _The following estimates hold for all \(t\in[0,\infty)\):_
\[|\xi|^{|\beta|}|\hat{u}(t,\xi)| \lesssim\exp\Big{(}-CG(t)\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\]
\[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)| \lesssim\exp\Big{(}-CG(t)\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0,\]
_where \(G=G(t)\) is given in (29)._ For small frequencies we may use the estimates from Corollary 4.20.
### Conclusion
By Corollary 4.20 and Corollary 4.22 the proof of Theorem 4.1 is completed.
## 5 Models with non-integrable and slowly increasing time-dependent coefficient \(g=g(t)\)
We assume that the function \(g=g(t)\) satisfies the following conditions:
**(E1)**: \(g(t)>0\), \(0\leq g^{\prime}(t)\leq 1\) and \(g^{\prime\prime}(t)\leq 0\) for all \(t\in[0,\infty)\),
**(E2)**: \(\frac{1}{g}\notin L^{1}(0,\infty)\),
**(E3)**: \(|g^{\prime}(t)|\leq C_{1}g(t)\frac{g(t)}{G(t)}\) for all \(t\in[0,\infty)\), where \(G(t):=1+\int_{0}^{t}g(\tau)d\tau\) and \(C_{1}>0\); moreover, \(|g^{\prime\prime}(t)|\leq\frac{g^{\prime}(t)}{g(t)}\) for \(t\geq t_{0}\), where \(t_{0}\) is large.
**Theorem 5.1**.: _Let us consider the Cauchy problem (3), where the coefficient \(g=g(t)\) satisfies the conditions **(E1)** to **(E3)**. Then, the Sobolev solution \(u=u(t,x)\) satisfies the following estimates:_
\[\big{\|}|D|^{|\beta|}u(t,\cdot)\big{\|}_{L^{2}} \lesssim\max\Big{\{}\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}},\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{\}}\|u_{0}\|_{H^{|\beta|}}\]
\[\quad+\max\Big{\{}\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|-1}{2}},\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{\}}\|u_{1}\|_{H^{|\beta|-1}}\quad\text{for}\quad|\beta|\geq 1,\]
\[\big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\big{\|}_{L^{2}} \lesssim\max\Big{\{}\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|+1}{2}},\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{\}}\|u_{0}\|_{H^{|\beta|+1}}\]
\[\quad+\max\Big{\{}\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}},\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{\}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\]
**Remark 5.2**.: _The statements of Theorem 5.1 show that we have the parabolic effect if the term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\) does not determine the decay. Namely, higher order energies decay faster with increasing order. We see this property in the following Examples 5.3, 5.4 and 5.5. On the contrary, if the term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\) determines the decay, then we do not have any parabolic effect._
**Example 5.3**.: _Let us choose \(g(t)=(1+t)^{\gamma}\) with \(\gamma\in(0,1)\) (see Example 1.9). Then, \(g=g(t)\) satisfies the assumptions of Theorem 5.1.
Consequently, the Sobolev solution \(u=u(t,x)\) satisfies the following estimates:_ \[\big{\|}|D|^{|\beta|}u(t,\cdot)\big{\|}_{L^{2}} \lesssim(1+t)^{-\frac{\beta(1+\gamma)}{2}}\|u_{0}\|_{H^{\beta t}}+ (1+t)^{-\frac{\beta(1+\gamma)}{2}}\|u_{1}\|_{H^{\beta t-1}}\quad\text{for} \quad|\beta|\geq 1,\] \[\big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\big{\|}_{L^{2}} \lesssim(1+t)^{-\frac{\beta(1+\gamma)}{2}}\|u_{0}\|_{H^{\beta t+1 }}+(1+t)^{-\frac{\beta(1+\gamma)}{2}}\|u_{1}\|_{H^{\beta t}}\quad\text{for} \quad|\beta|\geq 0.\] _The term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\) implies a faster decay._ **Example 5.4**.: _Let us consider \(g(t)=\frac{e+t}{\log(e+t)}\). We have_ \[\int_{0}^{t}g(s)ds=\int_{0}^{t}\frac{e+s}{\log(e+s)}ds=\frac{1}{2}\frac{(e+t) ^{2}}{\log(e+t)}-\frac{e^{2}}{2}+\frac{1}{2}\int_{0}^{t}\frac{e+s}{(\log(e+s) )^{2}}ds.\] _So, we get the estimate_ \[\int_{0}^{t}\frac{e+s}{\log(e+s)}ds\approx\frac{(e+t)^{2}}{\log(e+t)}.\] _On the other hand, we have_ \[\int_{0}^{t}\frac{1}{g(s)}ds=\int_{0}^{t}\frac{\log(e+s)}{e+s}ds=\frac{1}{2} \Big{(}\log(e+t)\Big{)}^{2}-\frac{1}{2}.\] _Then, the Sobolev solution \(u=u(t,x)\) satisfies the following estimates:_ \[\big{\|}|D|^{|\beta|}u(t,\cdot)\big{\|}_{L^{2}} \lesssim\Big{(}\frac{(e+t)^{2}}{\log(e+t)}\Big{)}^{-\frac{\beta t }{2}}\|u_{0}\|_{H^{\beta t}}+\Big{(}\frac{(e+t)^{2}}{\log(e+t)}\Big{)}^{-\frac{ \beta t-1}{2}}\|u_{1}\|_{H^{\beta t-1}}\quad\text{for}\quad|\beta|\geq 1,\] \[\big{\|}|D|^{|\beta|}u_{t}(t,\cdot)\big{\|}_{L^{2}} \lesssim\Big{(}\frac{(e+t)^{2}}{\log(e+t)}\Big{)}^{-\frac{\beta t +1}{2}}\|u_{0}\|_{H^{\beta t+1}}+\Big{(}\frac{(e+t)^{2}}{\log(e+t)}\Big{)}^{- \frac{\beta t}{2}}\|u_{1}\|_{H^{\beta t}}\quad\text{for}\quad|\beta|\geq 0.\] _The term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\) implies a faster decay._ **Example 5.5**.: _We consider now \(g(t)=\frac{1+t}{\nu(1+t)}\), where \(\nu=\nu(s)\) is a positive strictly increasing function which tends to \(\infty\) for \(s\to\infty\) with_ \[|\nu^{(k)}(1+t)|\leq C_{k}\Big{(}\frac{1}{1+t}\Big{)}^{k},\ k=1,2\ \text{ and }\ C_{1},C_{2}\ \text{ are positive constants.}\] _Then, we have_ \[\int_{0}^{t}g(s)ds=\int_{0}^{t}\frac{1+s}{\nu(1+s)}ds=\frac{1}{2}\frac{(1+s)^{2}}{ \nu(1+s)}|_{0}^{t}+\frac{1}{2}\int_{0}^{t}(1+s)^{2}\frac{\nu^{\prime}(1+s)}{\nu ^{2}(1+s)}ds\approx\frac{1}{2}\frac{(1+t)^{2}}{\nu(1+t)}.\] _This implies the following estimate:_ \[\int_{0}^{t}\frac{1+s}{\nu(1+s)}ds\approx\frac{(1+t)^{2}}{\nu(1+t)}.\] _On the other hand, we have_ \[\int_{0}^{t}\frac{1}{g(s)}ds=\int_{0}^{t}\frac{\nu(1+s)}{1+s}ds=C(t_{0})+\int_ {t_{0}}^{t}\frac{\nu(1+s)}{1+s}ds\geq C(t_{0})+\nu(1+t_{0})\log\frac{1+t}{1+t_ {0}}.\] _Here we use the strictly increasing behavior of \(\nu\). Then, for sufficiently large time \(t\) and \(t_{0}\) we arrive at the estimate_ \[-\int_{0}^{t}\frac{\nu(1+s)}{1+s}ds\leq-C(t_{0})-\nu(1+t_{0})\log\frac{1+t}{1+ t_{0}}.\] _Using \(\lim_{s\to\infty}\nu(s)=\infty\) we see that the following estimates for Sobolev solution \(u=u(t,x)\) are valid:_ \[\left\|\left|D|^{|\beta|}u(t,\cdot)\right\|_{L^{2}}\right. \lesssim\left(\frac{(1+t)^{2}}{\nu(1+t)}\right)^{-\frac{\beta}{2} }\|u_{0}\|_{H^{\beta\infty}}+\left(\frac{(1+t)^{2}}{\nu(1+t)}\right)^{-\frac{ \beta-1}{2}}\|u_{1}\|_{H^{\beta\infty-1}}\quad\text{for}\quad|\beta|\geq 1,\] \[\left\|\left|D|^{|\beta|}u_{t}(t,\cdot)\right\|_{L^{2}}\right. 
\lesssim\left(\frac{(1+t)^{2}}{\nu(1+t)}\right)^{-\frac{|\beta|+1}{2}}\|u_{0}\|_{H^{|\beta|+1}}+\left(\frac{(1+t)^{2}}{\nu(1+t)}\right)^{-\frac{|\beta|}{2}}\|u_{1}\|_{H^{|\beta|}}\quad\text{for}\quad|\beta|\geq 0.\] _The term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\) implies a faster decay._ **Example 5.6**.: _We consider now \(g(t)=\mu(1+t)\) with \(\mu\in(0,1)\). For this example we have no parabolic effect because now the term \(\Big{(}1+\int_{0}^{t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}}\) decays faster with increasing \(|\beta|\) than the term \(\exp\Big{(}-C\int_{0}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\)._ Proof of Theorem 5.1.: We write the equation in (7) in the following form: \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v-\Big{(}1-\frac{g^{\prime}(t)}{2}\Big{)}|\xi|^{2}v=0.\] We introduce \(h=h(t)=1-\frac{g^{\prime}(t)}{2}\). Notice that \(\frac{1}{2}\leq h(t)\leq 1\). Then, we consider \[D_{t}^{2}v+\frac{g(t)^{2}}{4}|\xi|^{4}v-h(t)|\xi|^{2}v=0.\] Due to condition **(E1)** the function \(f=f(t)=\frac{g(t)^{2}}{h(t)}\) is monotonically increasing. Thus, we have a separating line \(t_{\xi}\) as the solution of the implicit equation \(\frac{g(t)^{2}}{4h(t)}|\xi|^{2}=1\). Let us divide the extended phase space into the following zones: * hyperbolic zone: \[Z_{\text{hyp}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\geq\frac{1}{4}\Big{\}},\] * reduced zone: \[Z_{\text{red}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:-\frac{1}{4}\leq 1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq\frac{1}{4}\Big{\}}.\] We introduce now a part of the elliptic region \(\Pi_{ell}\), namely the region \[R_{\text{ell}}=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq-\frac{1}{4}\Big{\}},\] and divide this region \(R_{\text{ell}}\) into the following zones: * pseudo-differential zone: \[Z_{\text{pd}}(N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:-N\leq 1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq-\frac{1}{4}\Big{\}},\] * elliptic zone: \[Z_{\text{ell}}(N)=\Big{\{}(t,\xi)\in[0,\infty)\times\mathbb{R}^{n}:1-\frac{g(t)^{2}|\xi|^{2}}{4h(t)}\leq-N\Big{\}},\] where \(N>0\) is sufficiently large. We denote the separating line between elliptic and pseudo-differential zone as \(t_{\xi_{1}}\), between pseudo-differential zone and reduced zone as \(t_{\xi_{2}}\) and that between reduced zone and hyperbolic zone as \(t_{\xi_{3}}\).
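For a concrete picture of these zones, the following small Python sketch (ours; the sample coefficient \(g(t)=(1+t)^{1/2}\), the choice \(N=10\) and all function names are illustrative) locates the separating line \(t_{\xi_{1}}\) by solving \(\frac{g(t)^{2}|\xi|^{2}}{4h(t)}=N+1\) numerically for a given frequency.

```python
import math

def g(t):            # sample slowly increasing coefficient, g(t) = (1 + t)**0.5
    return math.sqrt(1.0 + t)

def h(t):            # h(t) = 1 - g'(t)/2 for this sample g
    return 1.0 - 0.25 / math.sqrt(1.0 + t)

def t_xi1(xi_abs, N=10.0, t_max=1e12, tol=1e-9):
    """Separating line between Z_ell(N) and Z_pd(N): solve g(t)^2 |xi|^2 / (4 h(t)) = N + 1."""
    F = lambda t: g(t) ** 2 * xi_abs ** 2 / (4.0 * h(t)) - (N + 1.0)
    if F(0.0) >= 0.0:            # already in the elliptic zone at t = 0
        return 0.0
    lo, hi = 0.0, 1.0
    while F(hi) < 0.0:           # bracket the crossing, F is increasing in t here
        hi *= 2.0
        if hi > t_max:
            raise ValueError("no crossing below t_max")
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(t_xi1(0.05))   # small frequencies enter Z_ell(N) only at late times
print(t_xi1(10.0))   # sufficiently large frequencies lie in Z_ell(N) from t = 0
```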
### Considerations in the elliptic zone \(Z_{ell}(N)\) **Proposition 5.7**.: _The following estimates hold for all \(t\in[t_{\xi_{1}},\infty)\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{ \xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{ \xi_{1}},\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\quad\text {for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{ \xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}(t_{ \xi_{1}},\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\Big{(}-\frac{|\xi|^{2}}{2}\int_{t_ {\xi_{1}}}^{t}g(\tau)d\tau\Big{)}|\xi|^{|\beta|}|\hat{u}_{t}(t_{\xi_{1}},\xi) |\quad\text{for}\quad|\beta|\geq 0.\] Proof.: **Step 1.**_A straight-forward estimate for the fundamental solution \(E=E(t,s,\xi)\)_ Figure 4: Sketch of the zones for the case \(g=g(t)\) is non-integrable and slowly increasing **Proposition 5.8**.: _The fundamental solution \(E=E(t,s,\xi)\) satisfies the following estimate:_ \[\left(\begin{array}{cc}|E^{(11)}(t,s,\xi)|&|E^{(12)}(t,s,\xi)|\\ |E^{(21)}(t,s,\xi)|&|E^{(22)}(t,s,\xi)|\end{array}\right)\lesssim\exp\left(- \,C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\right)\left(\begin{array}{cc}1&\frac{ 1}{g(s)|\xi|}\\ g(t)|\xi|&\frac{g(t)}{g(s)}\end{array}\right),\] _for all \(t\geq s\) and \((t,\xi)\), \((s,\xi)\in Z_{ell}(N)\), where the constant \(C\) is independent of \((s,\xi),(t,\xi)\in Z_{ell}(N)\)._ Proof.: The proof coincides with the proof to Proposition 4.13. In order to guarantee the invertibility of \(N_{1}(t,\xi)=I+N^{(1)}(t,\xi)\), where \(N^{(1)}=N^{(1)}(t,\xi)\) is defined in (25), we use the definition of the elliptic zone \(g^{2}(t)|\xi|^{2}\geq 4(N+1)h(t)\) with sufficiently large \(N\). In this way, we may conclude that the matrix \(N^{(1)}=N^{(1)}(t,\xi)\) is invertible. **Step 2.**_A refined estimate for the fundamental solution \(E=E(t,s,\xi)\)_ **Proposition 5.9**.: _The fundamental solution \(E=E(t,s,\xi)\) satisfies the following estimate:_ \[\left(\begin{array}{cc}|E^{(11)}(t,s,\xi)|&|E^{(12)}(t,s,\xi)|\\ |E^{(21)}(t,s,\xi)|&|E^{(22)}(t,s,\xi)|\end{array}\right)\lesssim\exp\left(- \,C\int_{s}^{t}\frac{1}{g(\tau)}d\tau\right)\left(\begin{array}{cc}1&\frac{ 1}{g(s)|\xi|}\\ \frac{1}{g(s)|\xi|}&\frac{1}{g^{2}(s)|\xi|^{2}}\end{array}\right)+\exp\left(- \,|\xi|^{2}\int_{s}^{t}g(\tau)d\tau\right)\left(\begin{array}{cc}0&0\\ 0&1\end{array}\right)\] _for all \((t,\xi)\), \((s,\xi)\in Z_{ell}(N)\), where the constant \(C\) is independent of \((s,\xi),(t,\xi)\in Z_{ell}(N)\)._ Proof.: The proof is the same as the proof to Proposition 4.16 after using conditions (**E1**) and (**E3**). The only difference is the following: in the proof of Proposition 4.16 we used the decreasing behavior of \(g=g(t)\) with \(s\leq t\) to estimate \[\frac{1}{g(t)}+\frac{1}{g(s)}\exp\left(-\,C|\xi|^{2}\int_{s}^{t}g(\tau)d\tau \right)\lesssim\frac{1}{g(t)}.\] But now we estimate \[\frac{1}{g(t)}+\frac{1}{g(s)}\exp\left(-\,C|\xi|^{2}\int_{s}^{t}g(\tau)d\tau \right)\lesssim\frac{1}{g(s)}.\] For this reason the refined estimate for the entries of \(E=E(t,s,\xi)\) differs to the estimate for the entries from Proposition 4.16. This completes the proof to Proposition 5.7. 
### Considerations in the pseudo-differential zone \(Z_{pd}(N)\) **Proposition 5.10**.: _The following estimates hold for all \((t,\xi),(s,\xi)\in Z_{pd}(N)\) with \(s\leq t\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\left(-\,\frac{1}{4(N+1)}|\xi|^{2} \int_{s}^{t}g(\tau)d\tau\right)\left(|\xi|^{|\beta|}|\hat{u}(s,\xi)|+|\xi|^{| \beta|-1}|\hat{u}_{t}(s,\xi)|\right)\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\left(-\,\frac{1}{4(N+1)}|\xi| ^{2}\int_{s}^{t}g(\tau)d\tau\right)\left(|\xi|^{|\beta|+1}|\hat{u}(s,\xi)|+|\xi| ^{|\beta|}|\hat{u}_{t}(s,\xi)|\right)\quad\text{for}\quad|\beta|\geq 0.\] Proof.: We consider the transformed equation \[v_{tt}+\left(h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\right)v=0.\] If we define the micro-energy \(V(t,\xi)=\left(\frac{g(t)}{C_{N}}|\xi|^{2}v,D_{t}v\right)^{ \mathrm{T}}\) then, we get the following system of first order: \[D_{t}V(t,\xi)=\left(\begin{array}{cc}\frac{D_{t}g(t)}{g(t)}&\frac{g(t)}{C_{N }}|\xi|^{2}\\ \frac{h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}}{\frac{g(t)}{C_{N}}|\xi|^{2}} &0\end{array}\right)V(t,\xi). \tag{30}\] We have the estimate \[\Big{|}h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\Big{|}\leq\frac{N}{N+1}\frac{g^{2 }(t)}{4}|\xi|^{4}\qquad\text{taking account of}\qquad\frac{g^{2}(t)}{4(N+1)}|\xi|^{4} \leq h(t)|\xi|^{2}\leq\frac{g^{2}(t)}{4}|\xi|^{4}.\] Thus, to estimate the entries of the matrix (30) we will use \[\frac{\Big{|}h(t)|\xi|^{2}-\frac{g^{2}(t)}{4}|\xi|^{4}\Big{|}}{\frac{g(t)}{C_{N }}|\xi|^{2}}\leq C_{N}\frac{N}{N+1}\frac{g(t)}{4}|\xi|^{2}.\] To estimate \(\frac{g(t)}{C_{N}}|\xi|^{2}\), let us choose \(C_{N}=2+\kappa_{N}\). If we choose \(\kappa_{N}\in(0,\frac{2}{N})\), then we get a suitable estimate. Let us choose \(\kappa=\frac{1}{N}\), then \(C_{N}=\frac{2N+1}{N}\). Thus, all entries of the matrix can be estimated by \(\frac{4N+2}{4N+4}\frac{g(t)}{2}|\xi|^{2}\). The entry \(\frac{g^{\prime}(t)}{g(t)}\) does not bring any additional term, it brings only a constant by using the definition of the pseudo-differential zone. **Corollary 5.11**.: _The fundamental solution \(E=E(t,s,\xi)\) to (30) for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\mathrm{pd}}(N)\) satisfies the following estimate:_ \[|E(t,s,\xi)|\leq\exp\Big{(}\frac{2N+1}{4N+4}|\xi|^{2}\int_{s}^{t}g(\tau)d\tau \Big{)}.\] From the backward transformation and the equivalence \(\frac{g(t)}{2}|\xi|^{2}\approx h(t)|\xi|\) in \(Z_{\mathrm{pd}}(N)\), we may conclude the desired statements of the proposition for all \(t\geq s\) and \((t,\xi),(s,\xi)\in Z_{\mathrm{pd}}(N)\). Considerations in the hyperbolic zone \(Z_{\mathrm{hyp}}\) and in the reduced zone \(Z_{\mathrm{red}}\) The treatment in the hyperbolic and reduced zone is the same as it was explained in Sections 4.1 and 4.2, respectively. We are able to extend the estimates from \(Z_{\mathrm{hyp}}\) to \(Z_{\mathrm{red}}\). For this reason, we obtain for \(t\leq t_{\xi_{2}}\) the following estimates. 
**Proposition 5.12**.: _The following estimates hold for all \(t\in(0,t_{\xi_{2}}]\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-\frac{|\xi|^{2}}{6}\int_{0}^{t}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\Big{(}-\frac{|\xi|^{2}}{6}\int_{0}^{t}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\] Proof.: The proof is the same as the proof to Proposition 4.7. ### Gluing procedure For large frequencies we may use the estimates from Proposition 5.7 because of \(t_{\xi_{1}}=0\). We glue for small frequencies the estimates from Propositions 5.10 and 5.12 with the estimates from Proposition 5.7. **Corollary 5.13**.: _The following estimates hold for all \(t\in[t_{\xi_{1}},\infty)\) with a sufficiently large \(N\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\] \[\qquad\qquad\qquad\qquad+\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 0.\] Proof.: Let us begin to estimate \(|\xi|^{|\beta|}|\hat{u}(t,\xi)|\). The statement of Proposition 5.7 implies \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\Big{)}.\] Using the estimates for \(|\xi|^{|\beta|}|\hat{u}(t_{\xi_{1}},\xi)|\) and \(|\xi|^{|\beta|}|\hat{u}_{t}(t_{\xi_{1}},\xi)|\) from Propositions 5.10 and 5.12 we have with a large \(N\) the estimate \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{1}(\xi)|\Big{)}.\] In the same way, we may conclude \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\Big{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)}d\tau\Big{)}\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}\] \[+\exp\Big{(}-\frac{1}{4(N+1)}|\xi|^{2}\int_{0}^{t}g(\tau)d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{|\beta|}|\hat{u}_{1}(\xi)|\Big{)}.\] This completes the proof. ### Energy estimates For small frequencies we shall consider the interplay between two phase functions appearing in the estimates of Corollary 5.13.
For this reason we discuss the term \(S_{r}(t,|\xi|)\) which is defined as follows: \[S_{r}(t,|\xi|):=|\xi|^{r}\exp\bigg{(}-C\int_{t_{\xi_{1}}}^{t}\frac{1}{g(\tau)} d\tau\bigg{)}\exp\bigg{(}-C_{N}|\xi|^{2}\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau \bigg{)}.\] **Proposition 5.14**.: _To a given positive constant \(C\) there exists a small positive constant \(C_{N}\) such that for \(t>0\) it holds_ \[S_{r}(t,|\xi|)\lesssim\max_{\xi\in\mathbb{R}^{n}}\bigg{\{}|\xi|^{r}\exp\bigg{(} -C_{N}|\xi|^{2}\int_{0}^{t}g(\tau)d\tau\bigg{)}\bigg{\}}\lesssim\bigg{(}1+\int _{0}^{t}g(\tau)d\tau\bigg{)}^{-\frac{\tau}{2}}\quad\text{for}\quad r\geq 0.\] Proof.: It is sufficient to verify the statement for large \(t\), because for small \(t\) the set of admissible \(\xi\) forms a compact set. To estimate the term \(S_{r}(t,|\xi|)\) it is important that the first partial derivative \(\partial_{\xi}|S_{r}(t,|\xi|)\) is negative for \(|\xi|\leq\varepsilon_{r}\). We have \[\partial_{|\xi|}S_{r}(t,|\xi|) =S_{r}(t,|\xi|)\bigg{(}\frac{r}{|\xi|}+C\frac{1}{g(t_{\xi_{1}})}d_ {|\xi|}t_{\xi_{1}}-2C_{N}|\xi|\int_{0}^{t_{\xi_{1}}}g(\tau)d\tau-C_{N}|\xi|^{2 }g(t_{\xi_{1}})d_{|\xi|}t_{\xi_{1}}\bigg{)}\] \[\leq S_{r}(t,|\xi|)\bigg{(}\frac{r}{|\xi|}+\Big{(}C\frac{1}{g(t_{ \xi_{1}})}-C_{N}|\xi|^{2}g(t_{\xi_{1}})\Big{)}d_{|\xi|}t_{\xi_{1}}\bigg{)},\] Taking into account of \(g^{2}(t_{\xi_{1}})|\xi|^{2}=4(N+1)h(t_{\xi_{1}})\) and \(\frac{1}{2}\leq h(t)\leq 1\), we have \[|\xi|^{2}g(t_{\xi_{1}})=\frac{|\xi|^{2}g^{2}(t_{\xi_{1}})}{g(t_{\xi_{1}})}= \frac{4(N+1)h(t_{\xi_{1}})}{g(t_{\xi_{1}})}\leq\frac{4(N+1)}{g(t_{\xi_{1}})}.\] Therefore, if we choose the constant \(C_{N}\) sufficiently small, then the term \(C\frac{1}{g(t_{\xi_{1}})}\) dominates the term \(C_{N}|\xi|^{2}g(t_{\xi_{1}})\). Moreover, after differentiation of \(g^{2}(t_{\xi_{1}})|\xi|^{2}=4(N+1)h(t_{\xi_{1}})\), we get \[\Big{(}4(N+1)h^{\prime}(t_{\xi_{1}})-2|\xi|^{2}g(t_{\xi_{1}})g^{\prime}(t_{ \xi_{1}})\Big{)}d_{|\xi|}t_{\xi_{1}}=2|\xi|g^{2}(t_{\xi_{1}}),\qquad d_{|\xi|} t_{\xi_{1}}=\frac{2|\xi|g^{2}(t_{\xi_{1}})}{4(N+1)h^{\prime}(t_{\xi_{1}})-2|\xi|^{2 }g(t_{\xi_{1}})g^{\prime}(t_{\xi_{1}})},\quad\text{respectively}.\] Employing \(g(t)>0\), \(0\leq g^{\prime}(t)\leq 1\) and \(g^{\prime\prime}(t)\leq 0\) from condition **(E1)**, and taking account of \(h^{\prime}(t)=-\dfrac{g^{\prime\prime}(t)}{2}\), we find \[4(N+1)h^{\prime}(t_{\xi_{1}})-2|\xi|^{2}g(t_{\xi_{1}})g^{\prime}(t_{\xi_{1}}) \geq-2(N+1)g^{\prime\prime}(t_{\xi_{1}})-2|\xi|^{2}g(t_{\xi_{1}})\geq-2|\xi|^{2 }g(t_{\xi_{1}}).\] We note that \(4(N+1)h^{\prime}(t_{\xi_{1}})-2|\xi|^{2}g(t_{\xi_{1}})g^{\prime}(t_{\xi_{1}})<0\) using again \(h^{\prime}(t_{\xi_{1}})=-\dfrac{g^{\prime\prime}(t_{\xi_{1}})}{2}\), \(|\xi|^{2}g(t_{\xi_{1}})=4(N+1)\dfrac{h(t_{\xi_{1}})}{g(t_{\xi_{1}})}\), \(\frac{1}{2}\leq h(t_{\xi_{1}})\leq 1\), and the condition **(E3)**, respectively, as follows: \[4(N+1)h^{\prime}(t_{\xi_{1}})-2|\xi|^{2}g(t_{\xi_{1}})g^{\prime} (t_{\xi_{1}}) =-2(N+1)g^{\prime\prime}(t_{\xi_{1}})-8(N+1)\dfrac{g^{\prime}(t_{ \xi_{1}})}{g(t_{\xi_{1}})}h(t_{\xi_{1}})\] \[\leq 2(N+1)\dfrac{g^{\prime}(t_{\xi_{1}})}{g(t_{\xi_{1}})}-4(N+1 )\dfrac{g^{\prime}(t_{\xi_{1}})}{g(t_{\xi_{1}})}=-2(N+1)\dfrac{g^{\prime}(t_{ \xi_{1}})}{g(t_{\xi_{1}})}<0.\] Then, we get \[d_{|\xi|}t_{\xi_{1}}\leq\dfrac{2|\xi|g^{2}(t_{\xi_{1}})}{-2|\xi|^{2}g(t_{\xi_{ 1}})}=-\dfrac{g(t_{\xi_{1}})}{|\xi|}.\] Moreover, for a fixed \(r\) the term \(\dfrac{r}{|\xi|}\) is dominated by the negative term 
\[|\xi|^{2}g(t_{\xi_{1}})d_{|\xi|}t_{\xi_{1}}\leq|\xi|^{2}g(t_{\xi_{1}})\Big{(}- \dfrac{g(t_{\xi_{1}})}{|\xi|}\Big{)}=-\dfrac{|\xi|^{2}g^{2}(t_{\xi_{1}})}{| \xi|}=-\dfrac{4(N+1)h(t_{\xi_{1}})}{|\xi|}\leq-2(N+1)|\xi|^{-1}\] if we choose \(N\) large enough. In order to complete the proof it is sufficient to study small frequencies with \(|\xi|\leq\varepsilon_{r}\). For \(|\xi|\geq\varepsilon_{r}\) we have an "exponential decay" from the elliptic zone. Let us now fix \(t>0\). Then, the above term takes its maximum for the \(|\bar{\xi}|\) satisfying \(t=t_{\bar{\xi}_{1}}\). For \(t=t_{\bar{\xi}_{1}}\), the first integral vanishes in \(S_{r}(t,|\xi|)\). Consequently, we get \[S_{r}(t,|\xi|)\leq S_{r}(t_{\bar{\xi}_{1}},|\bar{\xi}|)=|\bar{ \xi}|^{r}\exp\Big{(}-C|\bar{\xi}|^{2}\int_{0}^{\varepsilon_{1}}g(\tau)d\tau \Big{)}\] \[\lesssim\max_{\xi\in\mathbb{R}^{+}}\Big{\{}|\xi|^{r}\exp\Big{(}-C| \xi|^{2}\int_{0}^{t}g(\tau)d\tau\Big{)}\Big{\}}\lesssim\Big{(}1+\int_{0}^{t}g( \tau)d\tau\Big{)}^{-\frac{r}{2}}.\] The proof is completed. Using Proposition 5.14 we obtain the following statement. **Corollary 5.15**.: _The following estimates hold for all \(t>0\) and small frequencies \(0<|\xi|\leq 1\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\Big{(}1+\int_{0}^{t}g(\tau )d\tau\Big{)}^{-\frac{|\beta|}{2}}|\hat{u}_{0}(\xi)|+\Big{(}1+\int_{0}^{t}g( \tau)d\tau\Big{)}^{-\frac{|\beta|-1}{2}}|\hat{u}_{1}(\xi)|\quad\text{for}\quad| \beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\Big{(}1+\int_{0}^{t}g (\tau)d\tau\Big{)}^{-\frac{|\beta|+1}{2}}|\hat{u}_{0}(\xi)|+\Big{(}1+\int_{0}^{ t}g(\tau)d\tau\Big{)}^{-\frac{|\beta|}{2}}|\hat{u}_{1}(\xi)|\quad\text{for}\quad| \beta|\geq 0.\] For large frequencies we may use the estimates from Proposition 5.7 because of \(t_{\xi_{1}}=0\). These estimates imply an "exponential type decay". **Corollary 5.16**.: _The following estimates hold for all \(t>0\) and large frequencies \(|\xi|\geq 1\):_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\lesssim\exp\Big{(}-C\int_{0}^{t} \dfrac{1}{g(\tau)}d\tau\Big{)}\Big{(}|\xi|^{|\beta|}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|-1}|\hat{u}_{1}(\xi)|\Big{)}\quad\text{for}\quad|\beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\lesssim\exp\Big{(}-C\int_{0}^{ t}\dfrac{1}{g(\tau)}d\tau\Big{)}\Big{(}|\xi|^{|\beta|+1}|\hat{u}_{0}(\xi)|+|\xi|^{| \beta|}|\hat{u}_{1}(\xi)|\Big{)}+\exp\bigg{(}-C\int_{0}^{t}g(\tau)d\tau\Big{)} \xi|^{|\beta|}|\hat{u}_{1}(\xi)|\quad\text{for}\quad|\beta|\geq 0.\] ### Conclusion Taking into consideration all these estimates and the fact, that the statements from Proposition 5.10, Corollaries 5.15 and 5.16 determine the decay estimates and regularity of the data, respectively, we may conclude the proof of Theorem 5.1. ## 6 Concluding remarks **Remark 6.1**.: _Scale-invariant models Let us turn to the scale-invariant case_ \[\begin{cases}u_{tt}-\Delta u+\mu(1+t)(-\Delta)u_{t}=0,&(t,x)\in[0,\infty) \times\mathbb{R}^{n},\\ u(0,x)=u_{0}(x),\quad u_{t}(0,x)=u_{1}(x),&x\in\mathbb{R}^{n},\end{cases} \tag{31}\] _where \(\mu>0\). Taking into consideration results from [11] one can arrive at the following estimates:_ \[|\xi|^{|\beta|}|\hat{u}(t,\xi)|\leq C\Big{(}|\xi|^{|\beta|}|\hat {u}(0,\xi)|+|\xi|^{|\beta|-1}|\hat{u}_{t}(0,\xi)|\Big{)}\;\;\text{for}\;\;| \beta|\geq 1,\] \[|\xi|^{|\beta|}|\hat{u}_{t}(t,\xi)|\leq C(1+t)\Big{(}|\xi|^{| \beta|+1}|\hat{u}(0,\xi)|+|\xi|^{|\beta|}|\hat{u}_{t}(0,\xi)|\Big{)}\;\;\text{ for}\;\;|\beta|\geq 0,\] _for all \((t,\xi)\in[0,\infty)\times\mathbb{R}^{n}\). 
This implies that the multiplication by \(|\xi|^{|\beta|}\) on the left-hand sides of the above estimates gives no faster decay of higher order energies with increasing \(|\beta|\). Thus, we have no parabolic effect. In this way we get the following result._ **Theorem 6.2**.: _Let us consider the Cauchy problem (31), where the data \((u_{0},u_{1})\) is assumed to belong to \(\dot{H}^{|\beta|}\times\dot{H}^{|\beta|-1}\) with \(|\beta|\geq 1\). Then, we have the following estimates for Sobolev solutions:_ \[\big{\|}|D|^{|\beta|}u(t,\cdot)\big{\|}_{L^{2}} \lesssim\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot{H}^{|\beta|-1}},\] \[\big{\|}|D|^{|\beta|-1}u_{t}(t,\cdot)\big{\|}_{L^{2}} \lesssim(1+t)\Big{(}\|u_{0}\|_{\dot{H}^{|\beta|}}+\|u_{1}\|_{\dot{H}^{|\beta|-1}}\Big{)}.\] **Remark 6.3**.: _Parabolic effect. We have an almost complete picture of the validity of the "parabolic effect", that is, higher order energies have a faster decay with increasing order. In the case that \(g=g(t)\) is "above the scale-invariant case" (see the previous Remark 6.1) we have shown in Section 2 that there is no parabolic effect. In the case that \(g=g(t)\) is decreasing and integrable we have shown in Section 3 that we do not have any parabolic effect. Under the assumptions for \(g=g(t)\) in Sections 4 and 5 we have, in general, the parabolic effect._ **Remark 6.4**.: _Comparison of obtained results. The estimates of Theorem 2.1 are in some sense related to the estimates of Theorem 6.2 although \(g(t)=\mu(1+t)\), \(\mu>0\) does not satisfy the assumption **(A2)**. The difference between both results lies in the order of regularity of the data, which is \(2\) in Theorem 2.1 and \(1\) in Theorem 6.2. The estimates of Theorem 5.1 are in some sense related to the estimates of Theorem 6.2. Theorem 5.1 gives us at least the information that in the scale-invariant case we cannot expect any parabolic effect. The estimates of Theorem 4.1 are compatible with the estimates of Theorem 5.1. If we formally put \(g(t)\equiv 1\), then both results coincide. In the case \(g(t)\equiv 1\) we apply straightforward Fourier analysis. The estimates of Theorem 3.1 are in some sense related to the estimates of Theorem 4.1. If \(g\in L^{1}(0,\infty)\), then from Theorem 4.1 we get formally no decay estimates and no parabolic effect anymore. The difference between both results lies in the order of regularity of the data, which is \(2\) in Theorem 3.1 and \(1\) in Theorem 4.1._ ## Acknowledgments The discussions on this paper began during the research stay of the first author at the Technical University Bergakademie Freiberg, within the period August to October 2022. This stay was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Grant 2021/01743-3.
2305.14148
Distributing circuits over heterogeneous, modular quantum computing network architectures
We consider a heterogeneous network of quantum computing modules, sparsely connected via Bell states. Operations across these connections constitute a computational bottleneck and they are likely to add more noise to the computation than operations performed within a module. We introduce several techniques for transforming a given quantum circuit into one implementable on a network of the aforementioned type, minimising the number of Bell states required to do so. We extend previous works on circuit distribution over fully connected networks to the case of heterogeneous networks. On the one hand, we extend the hypergraph approach of [Andres-Martinez & Heunen. 2019] to arbitrary network topologies. We additionally make use of Steiner trees to find efficient realisations of the entanglement sharing within the network, reusing already established connections as often as possible. On the other hand, we extend the embedding techniques of [Wu, et al. 2022] to networks with more than two modules. Furthermore, we discuss how these two seemingly incompatible approaches can be made to cooperate. Our proposal is implemented and benchmarked; the results confirming that, when orchestrated, the two approaches complement each other's weaknesses.
Pablo Andres-Martinez, Tim Forrer, Daniel Mills, Jun-Yi Wu, Luciana Henaut, Kentaro Yamamoto, Mio Murao, Ross Duncan
2023-05-23T15:14:53Z
http://arxiv.org/abs/2305.14148v3
# Distributing circuits over heterogeneous, modular quantum computing network architectures ###### Abstract We consider a heterogeneous network of quantum computing modules, sparsely connected via Bell states. Operations across these connections constitute a computational bottleneck and they are likely to add more noise to the computation than operations performed within a module. We introduce several techniques for transforming a given quantum circuit into one implementable on a network of the aforementioned type, minimising the number of Bell states required to do so. We extend previous works on circuit distribution over fully connected networks to the case of heterogeneous networks. On the one hand, we extend the hypergraph approach of [1] to arbitrary network topologies. We additionally make use of Steiner trees to find efficient realisations of the entanglement sharing within the network, reusing already established connections as often as possible. On the other hand, we extend the embedding techniques of [2] to networks with more than two modules. Furthermore, we discuss how these two seemingly incompatible approaches can be made to cooperate. Our proposal is implemented and benchmarked; the results confirming that, when orchestrated, the two approaches complement each other's weaknesses. ###### Contents * 1 Introduction * 2 The DQC problem * 3 Background * 3.1 EJPP protocol and distributable packets * 3.2 DQC via hypergraph partitioning * 3.3 Embedding * 3.4 Non-local gate distribution via vertex cover * 3.5 Intermediate representation of distribution * 4 Distribution techniques * 4.1 Gate distribution using Steiner trees * 4.2 Combining embedding and Steiner trees * 4.3 Partitioning on heterogenous networks * 5 Benchmarks * 5.1 Networks * 5.2 Circuits * 5.3 Distribution workflows * 5.4 Results * 6 Conclusion and future work Introduction Quantum computing providers are racing to scale up their systems, targeting qubit numbers and gate fidelities that would allow for demonstrations of quantum advantage on practical applications. As architectures scale up, their basic components grow farther apart, increasing the cost of communicating between them. Moreover, operations between distant components require more intermediary elements to be involved, thus making it challenging to maintain high fidelity as errors accumulate. Distributed quantum computing [1] provides an alternative: once a quantum computing module pushing the limits of current classical technology is engineered, it may be more practical to produce copies of it and connect them together than to produce larger singular devices. Indeed, researchers in academia and industry have proposed both short and long-term distributed quantum computing projects. **Short-term.**: An emerging field of research studies the use of classical postprocessing to 'knit together' multiple quantum circuits [2, 3], with the goal of simulating circuits that are too large to be run in current quantum computers. The quantum circuit is 'cut' at different points, creating smaller subcircuits that can be run on current quantum computers. The classical postprocessing may be done 'offline' -- _i.e._ after the quantum computation has finished -- but the overhead scales exponentially with the number of cuts, so the technique is only applicable to circuits that can be split using few cuts. 
Practical applications can be found in the field of quantum chemistry, where knowledge of the symmetries of the system being modelled can be exploited to generate circuits in which two groups of qubits barely interact with each other [4]. **Long-term.**: There is a history of academics proposing modular quantum computers [5, 6, 7] and related technologies appear in the field of quantum internet [8, 9]. In such modular architectures, it is expected that different modules will interact with each other throughout the computation via entanglement sharing. Currently, the challenge of high-rate generation of entangled states between different modules is too great for the technology to become widely applicable, but we can expect to eventually reach an inflexion point where the communication cost within a large enough module will be comparable to that of entanglement generation between separate modules [5]. The current road-map of IBM promises the release of the first prototype of a modular quantum computer (Heron) by the end of 2023, and Quantinuum plans to develop a modular quantum computer for its H5 generation.1 Footnote 1: The road-map of these companies is publicly available at their respective web-pages at the time of writing. When we reach the inflexion point where modular architectures become advantageous, communication of quantum information between modules will be a significant bottleneck of the computation. It is thus essential to develop circuit optimisation methods that minimise the amount of quantum communication required to distribute a circuit. This is the purpose of the present manuscript. The methods we discuss here are also applicable to the short-term applications of classically simulated circuit knitting [3, 4], since reducing the amount of communication between modules is equivalent to reducing the number of cuts and, hence, the exponential classical overhead. In this work, we assume that all quantum communication is carried out by the consumption of Bell pairs shared between modules. Previous works on distributed quantum computing focus either on the minimisation of the circuit's depth [10] or attempt to minimise the number of Bell pairs consumed [11, 12, 13, 14]. This manuscript falls into the second category, since we identify Bell pair generation and sharing as the main bottleneck of the computation. Among the works in this category, [11, 12, 14] assume a fully connected network of modules. [13] studies heterogeneous networks, where not every pair of modules is connected directly, and where each module may have different qubit register capacities. The task of circuit distribution has some similarities with the qubit routing problem [15, 16], in that both are concerned with gate scheduling and the assignment of qubits to hardware registers. The main distinction between them lies in that the goal of routing is to implement a circuit on a _single_ module (with limited connectivity), whereas the distribution problem deals with the interaction between multiple modules. Thus, the distribution problem can be studied at a higher level of abstraction, where we may assume operations within a module to be comparatively free. This leads to distribution being naturally related to the mathematical problem of graph partitioning, whereas qubit routing is an instance of token swapping [16].
Moreover, this distinction leads to a desirable separation of concerns: once a circuit is distributed, the next step on a compilation stack is to solve the routing problem for each of its subcircuits, optimising its implementation for the specific hardware constraints of the module it is assigned to. In Section 2 we give a precise definition of the circuit distribution problem. We review the relevant literature in Section 3, focusing on approaches that minimise the number of Bell pairs consumed [11, 12, 13, 14]. The main contribution of our work is the generalisation of the approaches of [11, 14] to target heterogeneous networks. We identify the key optimisation opportunities exploited by these generalisations and describe an approach to combine them in Section 4. The proposed approach has been implemented as an open source project, \(\mathsf{pytket\_dqc}\), which we benchmark in Section 5. ## 2 The DQC problem In this work we focus on the problem of distributing quantum circuits (DQC) over general networks of quantum computers, minimising the number of entangled resources required to do so. A network is comprised of a collection of quantum computers that we refer to as _modules_. These modules are connected via quantum communication channels, with Local Operations and Classical Communication (LOCC) also available. A quantum communication channel may be used to generate maximally entangled bipartite states between two modules. We refer to such a shared state as an _ebit_, and take it to be a Bell state: \[\frac{1}{\sqrt{2}}\left(\left|00\right\rangle+\left|11\right\rangle\right). \tag{1}\] Formally, the network is specified by an undirected graph \(G=(V,E)\). Each vertex \(\mathtt{A}\in V\) corresponds to a module and each edge \((\mathtt{A},\mathtt{B})\in E\) indicates that ebits may be prepared and shared between modules \(\mathtt{A}\) and \(\mathtt{B}\). Each module \(\mathtt{A}\in V\) is capable of managing \(\omega(\mathtt{A})\) qubits dedicated to computation -- its _computation register_ -- and \(\epsilon(\mathtt{A})\) qubits dedicated to communication -- its _link qubit register_. Thus, \(\epsilon(\mathtt{A})\) determines the maximum number of connections that can be simultaneously maintained by module \(\mathtt{A}\). These link qubits are disentangled from the rest of the computation at the end of the communication protocol described in Section 3.1. Consequently, we may reuse the space in the link qubit register throughout the computation in order to establish new communication channels at different points in time.2 Footnote 2: In this work we abstract away details about inter-module entanglement generation and management. We refer the reader to [17, 18, 19, 20] for details and reviews of methods of constructing a complete ‘quantum internet protocol stack’. We assume each of the modules is capable of universal quantum computation and we consider no restrictions on the module's internal qubit connectivity. The particular universal gate set, and the actual internal connectivity of the modules, may be accounted for by a later stage of circuit compilation [21] acting individually on the local subcircuit assigned to each module. Our objective is to minimise the total number of ebits consumed, whose preparation and sharing is expected to be the bottleneck of any distributed quantum computation.
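As a minimal illustration of this network model, the sketch below (our own, written with networkx; it is not the pytket_dqc API, and the attribute names `omega` and `epsilon` as well as the function `ebit_cost` are ours) stores the register capacities \(\omega\) and \(\epsilon\) as vertex attributes. As a simplifying assumption of the sketch, the cost of sharing one ebit between non-adjacent modules is taken to be the number of hardware edges on a shortest path between them.

```python
import networkx as nx

# A small heterogeneous network: vertices are modules, edges are hardware
# quantum communication channels (each use of an edge produces one ebit).
network = nx.Graph()
network.add_node("A", omega=4, epsilon=2)   # omega: computation register size
network.add_node("B", omega=3, epsilon=2)   # epsilon: link qubit register size
network.add_node("C", omega=5, epsilon=1)
network.add_edges_from([("A", "B"), ("B", "C")])   # A and C are not adjacent

def ebit_cost(net: nx.Graph, m1: str, m2: str) -> int:
    """Ebits consumed to establish one shared Bell pair between m1 and m2,
    assuming entanglement is forwarded along a shortest path of hardware links."""
    return nx.shortest_path_length(net, m1, m2)

print(ebit_cost(network, "A", "B"))   # 1: directly connected
print(ebit_cost(network, "A", "C"))   # 2: must go through B
```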
Throughout the paper we consider that LOCC are comparatively free and assume that circuits are constructed using the gateset \(\{H,R_{Z},CR_{Z}\}\), which we depict using a graphical shorthand (gate shorthand figure omitted). A _distribution_ of a circuit allocates each of its qubits in \(Q\) to a module of the network and provides an equivalent circuit in which all multi-qubit gates between modules are realised via the generation and consumption of ebits. We are interested in distributions that consume the fewest number of ebits. ## 3 Background ### 3.1 EJPP protocol and distributable packets A non-local \(CR_{Z}\) gate can be implemented by consuming a single ebit. The distribution protocol we use originates from [22] and we refer to it as the EJPP protocol, using the initials of the latter paper's authors. Fig. 1 provides an example of such a protocol. During the EJPP protocol, a qubit \(\hat{q}\) is shared with a remote module B, entangling it with an ancilla qubit stored in the link qubit register of the module. The starting process of the EJPP protocol -- boxed in grey in Fig. 1 -- generates and consumes an ebit to produce a link qubit that is entangled with \(\hat{q}\). The _ending process_ only uses LOCC and disentangles the link qubit. Crucially, multiple non-local gates can be implemented using a single EJPP protocol and, hence, consuming a single ebit. **Definition 2** (Distributable packet).: A _distributable packet_ rooted on qubit \(\hat{q}\) is a subset of a circuit's non-local \(CR_{Z}\) gates that act on \(\hat{q}\) and can all be implemented simultaneously using a single EJPP protocol.3 Footnote 3: This definition captures the essence of Definition 16 from [14]. Here, we refer to the elements of a distributable packet \(P\) as gates \(g\in P\), whereas in [14] the elements of \(P\) are pairs \((\hat{q},t_{g})\), where \(\hat{q}\) is the qubit that \(P\) is rooted on and \(t_{g}\) is the layer in the circuit that gate \(g\) appears at. There is an immediate one-to-one correspondence between these two notations; we chose \(g\in P\) for the sake of brevity. **Lemma 3**.: Let \(P\) be a subset of \(CR_{Z}\) gates in a circuit comprised of gates in \(\{H,R_{Z},CR_{Z}\}\) for which a qubit allocation map \(\phi\) has been provided. If the following three conditions hold, then \(P\) is a distributable packet rooted on qubit \(\hat{q}\). 1. Each gate \(g\in P\) acts on \(\hat{q}\). 2. For each \(g\in P\) let \(q_{g}\) be the qubit \(g\) acts on such that \(q_{g}\neq\hat{q}\); there is a module \(\texttt{B}\in V\) such that \(\phi(\hat{q})\neq\texttt{B}\) and \(\phi(q_{g})=\texttt{B}\) for all \(g\in P\). 3. For every pair of gates \(g,g^{\prime}\in P\), there is no \(H\) gate in the circuit acting on \(\hat{q}\) between \(g\) and \(g^{\prime}\). Proof.: Conditions (a) and (b) ensure that sharing the state of qubit \(\hat{q}\) with module B is sufficient to implement all of the gates in \(P\) locally within B. A starting process creates a link qubit in module B that is entangled with \(\hat{q}\). Then, each gate \(g\in P\) is replaced by the same gate acting on \(q_{g}\) and said link qubit. The ending process is applied after the last gate in \(P\), measuring out the link and correcting as necessary to guarantee determinism. Condition (c) along with the circuit's gateset \(\{H,R_{Z},CR_{Z}\}\) implies that all gates between \(g\) and \(g^{\prime}\) commute past them. Closer inspection of the circuit for a starting process and ending process (see Fig. 1) reveals these also commute with \(R_{Z}\) and \(CR_{Z}\) gates.
Therefore, their presence within the EJPP protocol does not affect its operation and we can apply all gates between \(g\) and \(g^{\prime}\) unchanged, on their original qubits. Then, it only remains to check the equivalence of the circuits in Fig. 1 generalises to the case of any number of consecutive \(CR_{Z}\) gates, which is straightforward. **Remark 4**.: While conditions (a) and (b) are necessary for all gates in \(P\) to be implementable using a single EJPP protocol, (c) can be replaced with a more general condition using a technique known as _embedding_[14]. 1. For every pair of gates \(g,g^{\prime}\in P\), all gates in the circuit acting on \(\hat{q}\) between \(g\) and \(g^{\prime}\) are _embeddable_. This leads to larger distributable packets. We will discuss what the term _embeddable_ refers to in Section 3.3. For now, it suffices to know that condition (c) implies (c\({}^{*}\)). ### DQC via hypergraph partitioning The qubit allocation subproblem introduced in Section 2 is reminiscent of a graph partitioning problem. Indeed, we may define the connectivity graph of a circuit as follows: each qubit in the circuit corresponds to a vertex and each \(CR_{Z}\) gate creates an edge between the vertices of the pair of qubits it acts on. It is straightforward to see that partitioning such a graph into \(k\) blocks corresponds to allocating each of the qubits to one of \(k\) different modules, and cut Figure 1: **Distribution of two non-local \(CR_{Z}\) gates via an EJPP protocol. The gates act on two different modules A and B, the former containing a qubit \(\hat{q}\) that both \(CR_{Z}\) gates are applied to. The starting process generates and consumes an ebit, depicted by a wavy line. Other than the ebit, all operations on the distributed circuit are LOCC.** edges correspond to non-local gates. Thus, the standard graph partitioning problem -- whose goal is to minimise the number of edges cut -- would produce qubit allocations that minimise the number of non-local gates. However, such an approach would not consider the fact that a single EJPP protocol is capable of implementing multiple non-local gates consuming a single ebit. If our objective is to minimise the number of ebits consumed, a different partition that creates more non-local gates may be advantageous. An example of such a situation is shown in Fig. 2. Crucially, the optimal qubit allocation of Fig. 1(a) places qubits \(q_{0}\) and \(q_{1}\) in module A and qubits \(q_{2}\) and \(q_{3}\) in module B, but such an assignment differs from the optimal partition of the circuit's connectivity graph Fig. 1(b), which would instead place \(q_{0}\) and \(q_{2}\) in module A and qubits \(q_{1}\) and \(q_{3}\) in module B. The former allocation yields four non-local gates whereas the latter yields only three, however, the former can be distributed using two ebits, while the latter requires three. In [11] it was shown that qubit allocation and non-local gate distribution could both be solved simultaneously via a reduction to hypergraph partitioning. Formally, the only difference between a hypergraph and a graph is that its edges need not be pairs, but subsets of vertices known as 'hyperedges'. The intuition behind why hypergraphs are better suited to describe the DQC problem is that, when multiple gates belong to the same distributable packet (Definition 2), we may represent them as a single hyperedge. 
Then, if any number of these gates become non-local due to a qubit allocation, the corresponding effect is that a single hyperedge will be cut by the partition, thus precisely capturing the number of EJPP protocols required to implement the distributable packet. The algorithm that builds such a hypergraph from a given circuit is described in [11] and Fig. 1(c) shows the outcome of the process on a simple circuit. In [11] the authors proved the following theorem. **Theorem 5** ([11]).: _Given a circuit, each of its possible distributed implementations corresponds to a unique partition of its hypergraph. Assuming a fully connected network of modules, the number of ebits required to implement such a distribution coincides with the cost of the partition, calculated using the connectivity metric.4_ Footnote 4: For a given hypergraph, where \(H\) is its set of hyperedges, the connectivity metric [23] of a given partition is calculated as \(\sum_{h\in H}\lambda(h)\!-\!1\) where \(\lambda(h)\) corresponds to the number of different partition blocks the hyperedge \(h\) has vertices on. This implies that we may reduce the problem of distributing a quantum circuit to the problem of hypergraph partitioning as follows. 1. Build the hypergraph of the circuit as described in [11]. 2. Use a state-of-art hypergraph partitioner to obtain an efficient partition. 3. Translate the partition into a distribution of the circuit. Notice that in the hypergraph of Fig. 1(c) there are more vertices than qubits in the circuit. In fact, there is a vertex per qubit and a vertex per \(CR_{Z}\) gate; we call them _qubit-vertices_ and _gate-vertices_ respectively. When a partition assigns a qubit-vertex to block A it indicates that such a qubit is to be allocated to module A; similarly, when a gate-vertex is assigned to block A it indicates that the corresponding \(CR_{Z}\) gate ought to be implemented as a local \(CR_{Z}\) gate within module A, with the aid of an EJPP protocol if the hyperedge is cut. Fig. 1(d) and Fig. 1(e) exemplify how a partition of the hypergraph gives rise to a distribution of the original circuit. This approach, as presented in [11] has some shortcomings, identified below. * Hypergraph partitioners often assume that all blocks of the partition should be filled with approximately the same number of vertices each. In our task, however, each module A may have a different capacity of workspace qubits \(\omega(\texttt{A})\). To account for this constraint, we may use hypergraph partitioners such as KaHyPar[23] which allow us to indicate the maximum capacity of each block; more details on Section 4.3. * In [11], only fully connected networks were considered (_i.e._ complete graphs), whereas in this work we consider heterogeneous networks (Section 2). Creation and sharing of ebits between adjacent modules is directly supported by the network's hardware. Non-adjacent modules may still share an ebit, but producing it will require some entanglement distribution, consuming multiple hardware-supported ebits in the process. The framework summarised in this section is not capable of making such a distinction. Some techniques used to extend hypergraph partitioning to account for heterogeneous networks are discussed in Section 4.3. * In Section 3.3 we discuss some advanced techniques to further reduce the ebit count of implementing non-local gates by merging multiple distributable packets. These techniques are beyond what can be captured in terms of hypergraph partitioning. 
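To make the reduction to hypergraph partitioning concrete, the following minimal Python sketch (our own, not the implementation of [11] nor of pytket_dqc; the names `hyperedges`, `allocation` and `connectivity_metric` are ours) stores a hypergraph as a collection of hyperedges over qubit-vertices and gate-vertices and evaluates the connectivity metric of footnote 4 for a candidate allocation. By Theorem 5, on a fully connected network this value is the number of ebits consumed.

```python
# Vertices: qubit-vertices "q0", ..., and gate-vertices "g0", ... (one per CRZ gate).
# Each hyperedge contains exactly one qubit-vertex plus the gate-vertices of a
# distributable packet rooted on that qubit.
hyperedges = [
    {"q0", "g0", "g1"},        # packet rooted on q0 covering gates g0 and g1
    {"q1", "g0"},
    {"q2", "g1", "g2"},
    {"q3", "g2"},
]

# A candidate allocation of every vertex to a module (partition block).
allocation = {
    "q0": "A", "q1": "A", "q2": "B", "q3": "B",
    "g0": "A", "g1": "B", "g2": "B",
}

def connectivity_metric(hyperedges, allocation):
    """Sum over hyperedges of (number of blocks touched - 1); by Theorem 5 this
    equals the ebit count, assuming a fully connected network of modules."""
    return sum(len({allocation[v] for v in h}) - 1 for h in hyperedges)

print(connectivity_metric(hyperedges, allocation))
# prints 1: only the packet rooted on q0 is cut, so a single EJPP protocol suffices
```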
Thus, in Section 4 we use the hypergraph partitioning to provide an initial solution to the DQC problem, whose non-local gate distribution is later refined using the techniques from Section 3.3, Section 3.4.1 and Appendix A. Approaches that solve the two subproblems of DQC separately -- qubit allocation and non-local gate distribution -- have been proposed in the literature. In [12], the authors solve the qubit allocation subproblem by partitioning a weighted graph describing the connectivity of the circuit, where the calculation of the weights attempts to take into account cases where multiple non-local gates may be implemented using a single ebit. Such an approach has the same shortcomings listed above, with the additional drawback that the weights only provide an estimate for the ebit cost (rather than the exact value as in the case of hypergraph partitioning) and the advantage that graph partitioners are simpler and, hence, can be expected to perform better than hypergraph partitioners. A follow up paper by the same authors [13] solves the qubit allocation subproblem using a Tabu search algorithm. The latter work supports heterogeneous networks, solving one of the three shortcomings discussed above, but fails to take advantage of the optimisation opportunities we discuss in Section 4.1. Both of these works solve non-local gate distribution on a second step, which we review in Section 3.4. ### Embedding Lemma 3 provides sufficient conditions for a group of non-local \(CR_{Z}\) gates to belong to the same distributable packet. Remark 4 hinted at a more general condition involving the notion of _embedding_ proposed in [14]. Fig. 3 provides a couple of examples where the \(CR_{Z}\) gates of phase \(\alpha\) and \(\beta\) belong to the same distributable packet even though there are \(H\) gates between them, violating condition (c) from Lemma 3. **Definition 6** (Embedding unit).: Consider an EJPP protocol with starting process \(\mathcal{S}_{\hat{q},\mathsf{B}}\) sharing qubit \(\hat{q}\) with module \(\mathsf{B}\) and ending process \(\mathcal{E}_{\hat{q},\mathsf{B}}\) (see Fig. 1). An embedding unit is a subcircuit \(C\) satisfying the Figure 3: **Examples of embedding. (a) Embedding \(H\cdot Z\cdot H\), (b) embedding \(H\cdot CZ\cdot H\), the correction \(CZ\) gate \(x^{\prime}\) is local. In both examples, the \(CR_{Z}\) gates with phase \(\alpha\) and \(\beta\) are implemented by the same EJPP protocol.** Figure 2: **Example of correspondence between circuits, graphs and hypergraphs. (a) An input circuit, (b) its connectivity graph, (c) its hypergraph, as defined in [11], (d) an optimal partition of the hypergraph, only two hyperedges are cut, (e) the distributed circuit that arises from the hypergraph partition, the number of EJPP protocols matches the connectivity metric of the hypergraph partition: two cuts (Theorem 5). Starting processes and ending processes are depicted as wavy arrows.** following identity: \[\mathcal{E}_{\hat{q},\texttt{B}}\,C\,\mathcal{S}_{\hat{q},\texttt{B}}\ =\ \left( \bigotimes_{\texttt{A}\in V}L_{\texttt{A}}\right)C\left(\bigotimes_{\texttt{A} \in V}K_{\texttt{A}}\right) \tag{2}\] where \(V\) is the set of modules in the network and for each module \(\texttt{A}\in V\), \(L_{\texttt{A}}\) and \(K_{\texttt{A}}\) are local gates within \(\texttt{A}\). We refer to the gates \(L_{\texttt{A}}\) and \(K_{\texttt{A}}\) as the _correction gates_ of the embedding. 
In essence, an embedding unit is a subcircuit appearing between gates of a distributable packet \(P\) such that, if \(P\) is distributed, we only require local correction gates to maintain circuit equivalence. Importantly, notice that we do not require \(C\) to be local -- it has not yet been distributed. Indeed, the embedded \(CZ\) gate labelled \(x^{\prime}\) in Fig. 2(b) is non-local. It is straightforward from the above definition that any gate that commutes with a starting process \(\mathcal{S}_{\hat{q},i}\) forms an embedding unit by itself, which is the reason why condition (c) from Lemma 3 implies (c\({}^{*}\)) from Remark 4. More interesting embedding units containing \(H\) gates are captured by the following lemma. **Lemma 7**.: Let \(C\) be a circuit built from \(\{H,R_{Z},CR_{Z}\}\) containing a qubit \(\hat{q}\), let \(\texttt{B}\) be a module and let \(\phi\) be a qubit allocation such that \(\phi(\hat{q})\neq\texttt{B}\). If each of the following conditions holds, then \(C\) is an embedding unit of an EJPP protocol sharing \(\hat{q}\) with module B. 1. The first gate and last gates in \(C\) are \(H\) gates acting on \(\hat{q}\). 2. All \(CR_{Z}\) gates within \(C\) that act on \(\hat{q}\) have their other qubit allocated to module B. 3. All \(CR_{Z}\) gates within \(C\) that act on \(\hat{q}\) have \(\pi\) phase -- _i.e._ they are \(CZ\) gates. 4. All \(R_{Z}\) gates within \(C\) that act on \(\hat{q}\) may be squashed together so that only \(R_{Z}\) gates with \(\pi\) phase remain -- _i.e._ Pauli \(Z\) gates. 5. There are no more than two \(H\) gates acting on \(\hat{q}\) in \(C\). Proof.: Immediate from Corollary 30 of [14]. Alternatively, this is a straightforward generalisation of the two embedding units shown in Fig. 3. Lemma 7 provides a sufficient condition for a subcircuit to be an embedding unit. In [14] a more detailed analysis shows that condition (e) can be relaxed, but the formalisation of this more general condition is too intricate to be presented in this summary. These more general conditions can be checked on a circuit using Algorithm 35 from [14], which we implemented in the software pytket_dqc we present in this work. Equipped with the notion of embedding units and the algorithm from [14] to identify them, we can now build larger distributable packets. Whenever a gate commutes with the packet's starting process, embedding it requires no correction gates. Whenever we encounter an embedding unit, we apply the embedding rules from Corollary 30 of [14] to introduce the required local correction gates; more details are provided in Appendix B. Condition (b) from Lemma 7 has a rather subtle implication: if two embedding units on different qubits contain the same \(CZ\) gate, only one of the two is embeddable, see Fig. 4. We then say that such a pair of embedding units have an _embedding conflict_; similarly, two distributable packets that contain embedding units in conflict are also said to have an embedding conflict. Resolving an embedding conflict consists of choosing which of the two distributable packets should be distributed and splitting the other one into two separate packets so that embedding the conflicting \(CZ\) gate a second time is no longer necessary. An algorithm for non-local gate distribution that takes advantage of embedding and resolves embedding conflicts was proposed in [14]; we briefly review the algorithm in Section 3.4.1. 
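The sufficient conditions of Lemma 7 can be checked mechanically. The sketch below uses a simplified gate representation of our own (the names `Gate` and `is_embedding_unit` are illustrative; this is not the pytket_dqc implementation, which follows Algorithm 35 of [14] and covers the relaxed conditions as well). Phases are written in multiples of \(\pi\).

```python
import math
from typing import NamedTuple

class Gate(NamedTuple):
    name: str              # "H", "RZ" or "CRZ"
    qubits: tuple          # ("q0",) or ("q0", "q1")
    phase: float = 0.0     # phase in multiples of pi (for RZ and CRZ)

def is_embedding_unit(subcircuit, q_hat, module_of, module_b):
    """Check the sufficient conditions (a)-(e) of Lemma 7 for an EJPP protocol
    sharing q_hat with module_b. `module_of` maps qubits to modules."""
    if module_of[q_hat] == module_b or not subcircuit:
        return False
    first, last = subcircuit[0], subcircuit[-1]
    # (a) the first and last gates of the subcircuit are H gates acting on q_hat
    if not (first.name == "H" and q_hat in first.qubits
            and last.name == "H" and q_hat in last.qubits):
        return False
    on_q = [g for g in subcircuit if q_hat in g.qubits]
    # (e) no more than two H gates act on q_hat
    if sum(1 for g in on_q if g.name == "H") > 2:
        return False
    for g in on_q:
        if g.name != "CRZ":
            continue
        other = g.qubits[0] if g.qubits[1] == q_hat else g.qubits[1]
        # (b) the partner qubit of every CRZ acting on q_hat is allocated to module_b
        if module_of[other] != module_b:
            return False
        # (c) every such CRZ has phase pi, i.e. it is a CZ gate
        if not math.isclose(g.phase % 2.0, 1.0):
            return False
    # (d) the RZ gates acting on q_hat squash to a Pauli Z or to the identity
    rz_total = sum(g.phase for g in on_q if g.name == "RZ") % 2.0
    return any(math.isclose(rz_total, v, abs_tol=1e-9) for v in (0.0, 1.0, 2.0))

modules = {"q0": "A", "q1": "B"}
unit = [Gate("H", ("q0",)), Gate("CRZ", ("q0", "q1"), 1.0), Gate("H", ("q0",))]
print(is_embedding_unit(unit, "q0", modules, "B"))   # True: the H.CZ.H unit of Fig. 3
```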
For the sake of brevity, we have not discussed how to deal with situations where a certain embedding unit must be embedded within more than one distributed packet. Thanks to Corollary 14 from [14], we know that these situations will never cause new conflicts. A more subtle situation arises when two distributable packets \(P\) and \(P^{\prime}\) happen to be intertwined in the sense that some gate \(g\in P\) needs to be embedded within \(P^{\prime}\) while, at the same time, some other gate \(g^{\prime}\in P^{\prime}\) needs to be embedded within \(P\). We describe how we deal with such a situation in Appendix B. ### Non-local gate distribution via vertex cover In this section we review the literature on the subproblem of non-local gate distribution. We focus on approaches that reduce it to finding the minimum vertex cover of a graph. We begin from a version of the problem with the following simplifications: * we assume that the network of modules is fully connected, * we ignore the optimisation opportunities embedding provides and * we impose that a non-local gate must be implemented in either of the two modules it acts on -- unfortunately, this prevents beneficial distributions such as the one in Fig. 5 from being considered. This simplified problem is presented in [12] under the name MS-HC; we summarise their solution in this section. One of the contributions of the present work is the extension of their approach to the general problem where these three constraints are lifted. In particular, Section 4.2 and Appendix A.3 allows us to consider heterogeneous networks of modules, we use the approach of [14] (summarised in Section 3.4.1) to exploit embedding and employ the method in Appendix A.1 to lift the last of the constraints. Once a qubit allocation has been chosen, all that remains to do is identify distributable packets and, for each non-local \(CR_{Z}\) gate, decide which of the packets it belongs to should be used to distribute it. In the absence of embedding, the first task -- which we refer to as _gate packing_ -- is trivial: scan the circuit qubit by qubit, from beginning to end, and find sequences of gates satisfying Lemma 3. We shall only consider the collection of largest distributable packets, _i.e._ those that are not a subset of any other distributable packet; as a consequence, each non-local \(CR_{Z}\) belongs to exactly two distributable packets -- one per qubit. The second task corresponds to finding the minimum vertex cover of a graph whose vertices represent the distributable packets and where an edge between two of these corresponds to the existence of at least one non-local \(CR_{Z}\) gate contained in both packets. A vertex cover of a graph is a subset of its vertices such that each edge is incident to at least one vertex in the subset; thus, a vertex cover of the previous graph selects which distributable packets ought to be realised so that all non-local \(CR_{Z}\) gates are distributed. A minimum vertex cover of the graph would select the fewest number of distributable packets and, hence, yield the optimal distribution under the given constraints. In the appendix of [12] its authors show that the previous graph is guaranteed to be bipartite, which implies the minimum vertex cover can be found efficiently. The authors of [12] then considered a more general problem where non-local gates may be implemented in a detached manner, _i.e._ so that distributions such as the one from Fig. 5 may be explored. 
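Because the packet graph is bipartite, König's theorem applies: a minimum vertex cover can be read off a maximum matching. The sketch below (a toy two-module instance of our own, with illustrative packet names; it is not the pytket_dqc implementation) does exactly that with networkx.

```python
import networkx as nx

# Vertices are maximal distributable packets; an edge joins two packets that
# contain a common non-local CRZ gate. In the two-module case illustrated here,
# packets rooted on qubits of module A form one side of the bipartition and
# packets rooted on qubits of module B form the other.
packet_graph = nx.Graph()
packets_a = ["P(q0)", "P(q1)"]
packets_b = ["P(q2)", "P(q3)"]
packet_graph.add_nodes_from(packets_a, bipartite=0)
packet_graph.add_nodes_from(packets_b, bipartite=1)
packet_graph.add_edges_from([("P(q0)", "P(q2)"), ("P(q0)", "P(q3)"), ("P(q1)", "P(q3)")])

# Koenig's theorem: a minimum vertex cover of a bipartite graph is obtained from
# a maximum matching (computed here with networkx's Hopcroft-Karp routine).
matching = nx.bipartite.maximum_matching(packet_graph, top_nodes=packets_a)
cover = nx.bipartite.to_vertex_cover(packet_graph, matching, top_nodes=packets_a)
print(cover)   # the selected packets; each realised packet consumes one ebit
```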
This more general problem is not known to be reducible to a vertex cover problem on a bipartite graph. Nevertheless, the authors of [12] proposed an efficient algorithm that is guaranteed to provide a distribution only a logarithmic factor away from the best distribution achievable under the given constraints. However, said algorithm is still solving a simplified problem since it is omitting the following constraints and optimisation opportunities. **Network topology:**: In Section 2 we let networks be described by arbitrary (connected) graphs. Thus, the approach should take into account the distance between modules when computing the cost of distributing non-local gates. **Bounded link qubit register:**: Each module A may have a bound to the size of its link qubit register \(\epsilon(\texttt{A})\) (see Section 2). The resulting distribution should refrain from exceeding it. **Embedding:**: The embedding technique described in Section 3.3 lets us create larger distributable packets. Thus, an algorithm using embedding is likely to cover all non-local gates using fewer packets and, hence, find a distribution that uses fewer ebits. In [13] the authors propose an algorithm that is aware of the network topology and the bound to the size of the link qubit register. Rather than a minimum vertex cover problem, they consider the dual problem of maximising the number of non-local gates covered using a fixed number of ebits while satisfying a set of linear constraints. The linear constraints are used to capture the network topology and the bound to Figure 4: **Embedding conflict. (a) A simple circuit with two embedding units: one containing the \(CZ\) gate and the blue \(H\) gates; the other containing the \(CZ\) gate and the orange \(H\) gates. (b) Embedding only one of the embedding units causes no issues: the correction gate \(y\) is local. However, notice that the other embedding unit (the one containing the orange \(H\) gates) contains \(y\) as well. (c) Since \(y\) does not satisfy Lemma 7 (both of its qubits are in the same module), embedding it would create a non-local correction gate \(y^{\prime}\), defeating the purpose of embedding.** Figure 5: **Distribution with detached gate. In the distributed circuit, the \(CR_{Z}\) gate with phase \(\beta\) is implemented within module B, but originally had none of its qubits allocated to it — we refer to it as a detached gate.** the link qubit register. The central element of the optimisation procedure is carried out by an integer linear programming (ILP) subroutine. However, this approach does not take advantage of embedding. In [14] an algorithm that exploits embedding is proposed. The algorithm is based on finding minimum vertex covers and it makes use of graph colouring to identify solutions that satisfy the bound to the link qubit register. However, the algorithm is targeted to networks containing only two modules and, consequently, has a trivial network topology. The approach to DQC we propose in the present paper takes multiple insights from this latter work, orchestrating them in a more general framework. Thus, we dedicate the following section to introduce the ideas from [14] that are relevant to us. #### 3.4.1 Embedding-aware approach The approach to non-local gate distribution using minimum vertex cover can be extended to account for embedding. The means to do so were described in [14], where detailed algorithms were provided. 
Once again, the first step is to identify the largest distributable packets that can be realised without the use of embedding. Then, for each distributable packet \(P\), we check whether the gates that come immediately after \(P\) form an embedding unit. If so, this allows us to merge \(P\) with the packet appearing immediately after the embedding unit, creating a larger distributable packet. The algorithm identifies the largest distributable packets that can be achieved by such merging, and records the embeddings that are required to do so. This task simply requires us to carry out a scan over the circuit and, hence, it scales linearly with the dimensions of the circuit. The resulting distributable packets are then arranged in a graph \(G\) as in the case of the standard vertex cover approach: its vertices correspond to each of the packets and its edges correspond to common non-local \(CR_{Z}\) gates between them. It may seem that it only remains to find a minimum vertex cover of \(G\), but this would not account for embedding conflicts (see Fig. 4). Instead, we need to define an additional graph \(K\) whose vertices correspond to the embeddings that were used when merging distributable packets, where an edge between two such embeddings appears if and only if the embeddings are in conflict. With these two graphs \(G\) and \(K\) at hand, a sketch of the algorithm is presented below.

1. Find a minimum vertex cover \(\mathcal{C}_{G}\) of \(G\).
2. Find the subset \(\kappa\) of embeddings required to implement all of the distributable packets in \(\mathcal{C}_{G}\).
3. Extract the subgraph \(K_{\kappa}\) of \(K\) whose vertex set is \(\kappa\) and whose edges are those from \(K\) that connect vertices in \(\kappa\).
4. Obtain a minimum vertex cover \(\mathcal{C}_{K}\) of \(K_{\kappa}\): this is the smallest set of embeddings that we must give up in order to resolve all embedding conflicts incurred by \(\mathcal{C}_{G}\).
5. For each element in \(\mathcal{C}_{K}\) -- an embedding -- identify which distributable packet \(P\in\mathcal{C}_{G}\) used it (there is exactly one) and update \(\mathcal{C}_{G}\) by replacing \(P\) with two distributable packets: one containing all of the gates in \(P\) that come before the \(CZ\) gate responsible for the embedding conflict and another with the gates in \(P\) that come afterwards.

The resulting set of distributable packets \(\mathcal{C}_{G}\) is no longer a _minimum_ vertex cover of \(G\), but it is a vertex cover with no embedding conflicts. Thus, it can be used to generate a valid distribution. This approach is not guaranteed to return the overall optimal solution, but it does resolve the embedding conflicts of a given vertex cover of \(G\) in an optimal way. In an attempt to find better overall solutions, we may choose to repeat the routine above for multiple distinct vertex covers of \(G\) and pick the best among them [14]. The algorithms presented in [14] for the tasks just described were designed for networks with exactly two modules. Generalising these to networks of multiple modules is immediate: it suffices that our conditions for identifying distributable packets (Lemma 3) and embedding units (Lemma 7) require that all of their \(CR_{Z}\) gates act on the same two modules. This guarantees that both \(G\) and \(K\) are bipartite graphs, so we may find a minimum vertex cover for them efficiently.
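For illustration, the five steps above can be condensed into a short routine. The following sketch is not the code of [14]: the helpers `required_embeddings`, `split_at_conflict` and `min_vertex_cover` are hypothetical, and \(G\), \(K\) are assumed to be networkx-style graphs exposing a `subgraph` method.

```python
def resolve_embedding_conflicts(G, K, required_embeddings, split_at_conflict,
                                min_vertex_cover):
    """Return a conflict-free (possibly non-minimum) vertex cover of the packet graph G.
    K is the conflict graph over embeddings; the three callables are assumed helpers,
    with split_at_conflict returning the two halves of a packet split at the offending CZ."""
    cover_G = set(min_vertex_cover(G))                  # step 1
    kappa = set()                                       # step 2: embeddings used by the cover
    for packet in cover_G:
        kappa |= set(required_embeddings(packet))
    K_kappa = K.subgraph(kappa)                         # step 3
    cover_K = min_vertex_cover(K_kappa)                 # step 4: embeddings to give up
    for emb in cover_K:                                 # step 5: split the packet that used emb
        packet = next(p for p in cover_G if emb in required_embeddings(p))
        cover_G.remove(packet)
        cover_G.update(split_at_conflict(packet, emb))
    return cover_G
```

Repeating this routine for several distinct vertex covers of \(G\) and keeping the cheapest conflict-free result reproduces the strategy mentioned above.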
The fact that these graphs are bipartite is not trivial, but it follows from the same argument the authors of [12] used for their bipartite graph for the MS-HC problem. The authors of [14] also propose how to take into account the bound to the link qubit register size: graph colouring is used to identify solutions that exceed such a bound. Then they present an efficient way of splitting the offending distributable packets so that the number of EJPP protocols that are simultaneously active is reduced, at the cost of increasing the total number of ebits consumed. Such an approach is beyond the scope of the present paper and we omit further details for the sake of brevity.

### Intermediate representation of distribution

Throughout this section we have discussed multiple approaches aimed at optimising different aspects of the DQC problem. We have considered multiple abstractions -- _e.g._ distributable packets, embedding units, embedding conflicts, hypergraphs, _etc._ -- each tailored to be as natural as possible to the approach at hand. Our goal in this paper is to propose an approach that can take advantage of the insights of each of these optimisation methods. To do so, we require an intermediate representation where the outcome of each of these optimisation methods can be represented. Such an intermediate representation could simply be a partially distributed circuit; however, a more abstract representation is preferable to minimise the overhead of dealing with superfluous low-level details -- such as the correction gates required for an embedding unit, the exact placement of a starting process within a circuit, the reuse of link qubits, _etc._ -- that can easily be deferred to the final step of the workflow. Fortunately, all of what has been reviewed in this section can be captured within the framework of hypergraphs discussed in Section 3.2, which makes it a natural choice for our intermediate representation of a distribution.

**Definition 8** (IR of distributions). A Distribution contains the following information.

* A hypergraph of \(|Q|+|\mathcal{G}|\) vertices, where \(Q\) is the set of qubits in the original circuit and \(\mathcal{G}\) is its collection of \(CR_{Z}\) gates. We refer to these as qubit-vertices and gate-vertices respectively, as established in Section 3.2.
* An allocation map \(\phi\colon Q\cup\mathcal{G}\to V\), where \(V\) is the set of modules in the network.

Additionally, we include the original circuit and the network of modules, which remain unchanged throughout the workflow. The purpose of including the original circuit and the network of modules within the Distribution is to be able to assess the ebit cost (see Section 4.2). Furthermore, the information contained in a Distribution is all we require to generate the corresponding distributed circuit; we explain how to do so in Appendix B. Notice that the allocation map \(\phi\) determines a partition of the hypergraph. Below, we briefly discuss how the different abstractions considered in this section can be captured within a Distribution. Recall that, by construction of the hypergraph in Section 3.2, each hyperedge has a single qubit-vertex and each gate-vertex is present in exactly two hyperedges.

**Non-local gate:** a gate \(g\in\mathcal{G}\) is non-local if and only if its adjacent qubit-vertices \(q\) and \(q^{\prime}\) satisfy \(\phi(q)\neq\phi(q^{\prime})\).
**Detached gate:** a gate \(g\in\mathcal{G}\) is detached if and only if its adjacent qubit-vertices \(q\) and \(q^{\prime}\) satisfy \(\phi(q)\neq\phi(g)\) and \(\phi(q^{\prime})\neq\phi(g)\).

**Distributable packet:** a distributable packet \(P\) rooted on \(\hat{q}\) can be represented as a hyperedge with qubit-vertex \(\hat{q}\) and the gate-vertices corresponding to the gates in \(P\). In general, a hyperedge may contain the union of any number of distributable packets as long as they are all rooted on the same qubit \(\hat{q}\). Whereas it is necessary for all gates \(g\in P\) of a distributable packet to be allocated to the same module \(\phi(g)\), this requirement does not apply to hyperedges. As a consequence, we can extract the distributable packets comprising a hyperedge by grouping its gate-vertices according to the module they are allocated to.

**Embedding unit:** if two distributable packets may be merged together by embedding the gates between them, the same can be said about merging the hyperedges the packets belong to. As such, embedding techniques alter the hypergraph itself, increasing the size of hyperedges for the sake of reducing their number. Embedding units can be retrieved on demand by inspecting the subcircuit between any two gates on the same hyperedge.

Since a Distribution is meant to capture valid distributions, we assume no embedding conflicts are incurred; it is the responsibility of the optimising method to guarantee that this is satisfied. Verifying that the bound to the computation register \(\omega(\mathtt{A})\) of each module \(\mathtt{A}\in V\) is satisfied is straightforward: simply count the number of \(q\in Q\) such that \(\phi(q)=\mathtt{A}\). The cost in the number of ebits can be inferred using the methods presented in Section 4.2. Unfortunately, the satisfaction of the bound to link qubit registers \(\epsilon(\mathtt{A})\) cannot be easily checked using our intermediate representation; instead, we need to generate its corresponding distributed circuit (as detailed in Appendix B) and count the number of link qubits used -- recall that this is not the same as the number of ebits, since space in the link qubit registers may be reused. This is not an obstacle to our optimisation approaches since none of them consider the bound \(\epsilon(\mathtt{A})\) within their routines: satisfaction of this bound is deferred to a final pass at the end of the workflow that acts directly on the distributed circuit and is described in Appendix C.

## 4 Distribution techniques

In this section we discuss the novel distribution techniques that we have implemented in pytket_dqc, our DQC tool, available at [https://github.com/CQCL/pytket-dqc](https://github.com/CQCL/pytket-dqc). Our tool is designed as an extension to pytket, the Python interface of the TKET compiler [21], and, as such, it may easily be integrated in a full compilation stack. Our techniques are orchestrated together in the default workflows detailed in Section 5.3. The user may choose to run these default workflows or create a custom one, combining the distribution techniques available as they prefer. Any DQC workflow making use of pytket_dqc should contain the following steps, in this precise order.

**Rebase.** Rewrite the circuit to an equivalent one in the gateset \(\{H,R_{Z},CR_{Z}\}\). Within pytket_dqc we provide an automated method to do so, based upon the rebase passes provided within pytket.
**Qubit allocation.** Assign each qubit of the circuit to a module, adhering to the bound on the size of the computation register. Our techniques are based on the hypergraph representation discussed in Section 3.5, and the user may choose between an annealing approach or a third-party hypergraph partitioner with a greedy refinement, both of which are detailed in Section 4.3. Both of these take advantage of Steiner trees as discussed in Section 4.1.

**Gate packing.** This step is meant to identify opportunities where embedding may be used, passing this information to the next step. In particular, we implemented the algorithm proposed in [14] for this task, whose core ideas are summarised in Section 3.3.

**Non-local gate distribution.** Two options are available: either use the solution provided by the qubit allocation step -- distribute gates according to which modules their gate-vertices are assigned to -- or make use of the vertex cover approach proposed in [14] and summarised in Section 3.4.1. The former option will not take advantage of embedding, but will make use of Steiner trees; conversely, the latter option will consider embedding but not Steiner trees. Neither of these guarantees satisfaction of the bound to the link qubit registers; this is deferred to the last step of the workflow.

**Refinement.** The previous step makes use of either the embedding technique or Steiner trees. During this refinement step, the user can choose to apply any number of the passes described in Appendix A. These refinement passes further improve upon the current solution by taking advantage of readily available opportunities for optimisation using Steiner trees and embedding. The key insight that lets us combine these two seemingly mutually exclusive techniques is described in Section 4.2. A refinement that lets us take advantage of detached gates (as in Fig. 5) is also provided.

**Circuit generation.** Our tool provides methods for the automatic generation of the distributed circuit as a pytket circuit or QASM file. We keep track of the occupancy of the link qubit register of each module and reuse link qubits after the EJPP protocol that employed them terminates. Thus, even though our methods do not guarantee satisfaction of a bound to communication memory, the required memory capacity is not directly dependent on the number of EJPP protocols carried out, but rather on the maximum number of EJPP protocols simultaneously active at any given time. As shown in Appendix C, the size of the link qubit registers remains manageable, even if the user does not specify a bound. If the user does specify a bound to link qubit registers, we use the routine described in Appendix C to update the distributed circuit as necessary to satisfy the bound, at the cost of increasing the number of ebits required.

Moreover, our tool provides some basic functions for analysing the distributed circuit, such as counting the number of ebits used and the qubit occupancy of the registers of each module. We also provide a method to verify the equivalence between the original circuit and the distributed one, based on [24], which is automatically called at the end of the circuit generation step.

### Gate distribution using Steiner trees

One approach to implementing the distribution of a hyperedge between two non-adjacent modules in a heterogeneous network would be to first construct a single ebit between the relevant modules.
This could be done via entanglement swapping, consuming ebits between intermediate modules in the network to build the single required ebit. This single ebit can then be used to perform the EJPP protocol, at a total cost in ebits equal to the length of the shortest path in the network between the two modules. In the case where the hyperedge is distributed between three modules -- which is to say that two distributable packets, and so two EJPP processes, are required -- the ebit cost of this approach is the sum of the costs of constructing two ebits. In this case, this would be the sum of the lengths of the shortest paths in the network between the module from which the hyperedge is being distributed and the two other modules.

In the technique described above, the proxy link qubits in the intermediate modules are measured before the non-local gates have been applied. Alternatively, as these disentangling operations do not affect the qubits which are acted on by the non-local gate, they may be delayed until after the non-local gates have been enacted. Additionally, the starting and ending processes commute with the controls of the distributed gates. This means that when non-local gates belonging to the same hyperedge are distributed to separate modules, all starting processes can be performed before the gates are enacted, and all ending processes may be performed after all gates have been enacted. This process is depicted in Fig. 6. Reusing intermediate link qubits in the aforementioned way reduces the ebit cost of the distribution to the size of the smallest subtree of the module network which includes the modules of concern. Such a subgraph is known as a Steiner tree. This approach extends to Steiner trees of arbitrary shape, as exemplified in Fig. 6. Circuit distribution in pytket_dqc makes use of Steiner trees instead of entanglement swapping, allowing us to make savings upon a naive application of the EJPP protocol. Note that it is not possible to safely commute entangling and disentangling operations as described above in the case where Steiner trees are combined with embedding units containing \(H\) gates. This is discussed in Section 4.2.

### Combining embedding and Steiner trees

The approach proposed in Section 4.1 efficiently generates the entanglement sharing required for the distribution of the gates in a hyperedge, using Steiner trees. To do so, we maintain the entanglement of some proxy link qubits throughout the whole duration of the collection of EJPP processes. Unfortunately, if the hyperedge includes any distributable packet that requires some embedding, such as the example in Fig. 7, maintaining the entanglement of these proxy link qubits causes a problem: correction gates acting on them will be required. As shown in Fig. 7, these correction gates may be non-local, thus creating the need for extra ebits to implement them, defeating the purpose of embedding. There is a simple solution to our compatibility issue: maintain the entanglement of these proxy link qubits for as long as possible to maximise the use of Steiner trees, but disentangle them right before an embedding unit so that they do not interfere with it. The implementation of such an intuition is sketched in Algorithm 1. Fig. 7(d) shows the result of running Algorithm 1 on a simple circuit. The proxy link qubit of module B is maintained throughout the circuit, whereas the link qubits of modules C and D are only maintained as long as necessary to implement the two \(CR_{Z}\) gates.
Maintaining the link qubit of module B saves one ebit, whereas our management of the link qubits of modules C and D avoids the need for the non-local correction gates that would otherwise be required (see Fig. 7(c)). Thus, it is possible to define distributions that combine the techniques of embedding and Steiner trees, and Algorithm 1 is capable of generating the corresponding circuit. We can count the number of ebits consumed in the distributed circuit output by Algorithm 1, thus obtaining the exact ebit cost of the distribution. This can be done for each cut hyperedge in our hypergraph, and it is straightforward to check that Algorithm 1 runs in time \(\mathcal{O}(g_{d}+g_{e})\), where \(g_{d}\) is the number of gate-vertices in the hyperedge and \(g_{e}\) is the number of gates that need to be embedded to realise its distribution. Thus, this provides an efficient function to calculate the exact ebit cost of a given cut hyperedge, using both embedding and Steiner trees. This cost function will be used by the combinatorial optimisation approaches of Appendix A, which will be the ones to ultimately decide how each non-local gate should be distributed.

**Remark 9**. Algorithm 1 iterates over the hyperedge's subcircuit (hedge_circ): given a hyperedge whose qubit-vertex is \(\hat{q}\), its subcircuit is the sequence of gates from the original circuit that contains all of the gates corresponding to gate-vertices of the hyperedge and every gate in between these that acts on \(\hat{q}\). The hyperedge given to Algorithm 1 as input is required to be valid, in the sense that every gate in its subcircuit is either distributable or embeddable. We can verify this ahead of time by checking the conditions from Lemma 3 (with the amendment from Remark 4) and Lemma 7, respectively.

### Partitioning on heterogeneous networks

In Section 3.2 we reviewed an approach that reduces the DQC problem on fully connected networks to hypergraph partitioning [11]. In the case of heterogeneous networks, the DQC problem still reduces to (a version of) hypergraph partitioning, but the cost function of a partition is different -- since we need to consider the distance between modules -- and we must filter out invalid solutions where a module's computation register capacity is exceeded. In this section we propose two approaches to solve this alternative version of hypergraph partitioning and, thus, the DQC problem on heterogeneous networks. Both of our approaches start from an initial partition and apply rounds of updates to it, guided by the cost function defined in Section 4.2. On each round, vertices of the hypergraph are moved from their assigned module to a different one; then, the cost of every hyperedge containing a reallocated vertex is updated. We can calculate the gain of the moves as the difference between the new cost and the previous cost. Depending on the gain and the approach used, the moves will be committed or rolled back. Since calculation of the cost function from Section 4.2 requires finding Steiner trees on the network's graph -- which is a non-trivial computation -- we keep a cache of already computed Steiner trees. Recall that our hypergraphs have two kinds of vertices: qubit-vertices and gate-vertices. The allocation of a qubit-vertex to a module fills up one slot of the module's computation register, whereas the allocation of gate-vertices does not affect the computation register.
Consequently, we assign weight 1 to qubit-vertices and weight 0 to gate-vertices, and we filter out partitions where the sum of weights in a module exceeds the corresponding module's computation register capacity. If a move would cause the capacity of a module to be exceeded, we select a qubit-vertex on the offending module and swap it with the vertex we intended to move. Our approaches assume unbounded link qubit registers, unlike [13]. In contrast, we make use of Steiner trees as discussed in Section 4.1.

Figure 6: **Gate distribution via EJPP embedding with Steiner trees.** (a) An input circuit where each qubit is allocated to a different module. (b) The distribution of (a) onto a line network. This should be compared to the approach of using entanglement swapping, where the number of required ebits would be 3. In this case, disentangling operations have been delayed until after the non-local gates have been enacted, and starting processes related to \(\beta\) have been commuted to the start of the computation. This results in a total ebit cost of 2. (c) The distribution of (a) in a T-shaped network with entangling and disentangling operations acting along the edges of the Steiner tree connecting the relevant modules.

Figure 7: **Combining embedding and Steiner trees.** (a) An input circuit where each qubit is allocated to a different module, (b) the topology of the network of modules. (c) An equivalent circuit generated using the approach from Section 4.1: two non-local correction gates arise, drawn in red. (d) An equivalent circuit generated by Algorithm 1.

Our greedy refinement considers, on each round, the vertices that lie on cut hyperedges of the partition -- the boundary of the partition. For each vertex \(v\) in said boundary we find all of the modules that \(v\) has a neighbour in; we then calculate the gain of moving \(v\) to each of these modules and pick the most advantageous move (with ties broken randomly) or, if all of them are detrimental, we choose not to move \(v\). A round finishes when this routine has been run once for each vertex in the boundary. Thus, each round generates a new partition and the cost of its distribution decreases monotonically. There is no attempt to escape local minima. The initial solution provided by KaHyPar -- which does have strategies to avoid local minima [23] -- already identifies groups of qubits that should be allocated to the same module; such a grouping is a property of the circuit and is, hence, equally valid in the context of heterogeneous networks. Unfortunately, our greedy refinement struggles to move vertices that have many neighbours within their allocated module but few in other modules. We expect this to be a noticeable limitation in the case of networks resembling a line graph, where some of these immobile vertices may be stuck on the ends of the network.
In practice, however, we expect that modules will be arranged in a small-world network such as a hypercube, where the allocation of a few immobile vertices is not crucial thanks to the network's small average distance. In such cases, the potential for optimisation would primarily come from making smart choices of where to allocate the vertices that do not strongly belong to any of the modules (_i.e._ those in the boundary of the partition), taking into account the topology of the network.

Footnote 5: In a small-world network of \(N\) nodes, few of them are adjacent to each other, but the path between any two nodes tends to be of length \(\log N\). Small-world networks are common in engineering due to their logarithmically scaling average distance, which reduces communication bottlenecks [25].

## 5 Benchmarks

Here we present the results of benchmarking the methods described in Section 4, comparing them to [12]. We describe the networks, circuits, and distribution workflows used in Section 5.1, Section 5.2 and Section 5.3 respectively. The results of the benchmarks are shown and discussed in Section 5.4.

### Networks

The following architectures are used in the experiments of Section 5.4. Generator methods for these networks are available within pytket_dqc.

**Homogeneous:** All modules are directly connected to all other modules. All modules contain the same number of qubits, and no bound is set on the number of link qubits available in each module. This models an idealised network, and is exemplified in Fig. 8(a).

We refer to the following collectively as _heterogeneous networks_. We will generate random instances of heterogeneous networks, and they are designed to be representative of real-world networks.

**Unstructured:** Modules are connected according to edges in random Erdős-Rényi graphs, where each possible edge in the graph is added with a fixed probability. In our case we post-select to generate only connected graphs. This is the most common notion of random networks, and is exemplified in Fig. 8(b).

**Scale-free:** The distribution of node degrees in a scale-free network follows a power law. Such networks have a few nodes, called hubs, with high degree. This is a common model for networks, including the World Wide Web [26]. They can be generated using preferential attachment, where high-degree nodes are more likely to receive new edges as nodes are added. This is the case for the Barabási-Albert model [26] of scale-free networks, which we use to generate them here. Scale-free networks are exemplified in Fig. 8(c).

Footnote 6: We find this broad class of networks to be a well-motivated example for the purposes of our comparison. However, practical considerations give subdivisions of the class of scale-free networks [27]. A fine-grained analysis of the resulting impact on quantum circuit distribution would be of interest.

**Small-world:** The characteristic path lengths of small-world networks are small, while the clustering coefficient is large [25, 28]. This is in contrast to random Erdős-Rényi graphs, which have small characteristic path length and small clustering coefficient. Unlike scale-free networks, small-world networks do not include hub nodes. Such networks are used to model social networks and are prevalent in engineering due to their communication efficiency [25]. We generate them using the Watts-Strogatz model [28], and exemplify them in Fig. 8(d).

The particular sizes of the networks we use are listed in the results of Section 5.4.
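As an illustration, the random network families above can be generated with standard graph libraries. The following sketch uses networkx; the default edge probability, the attachment parameter and the post-selection loop are illustrative assumptions (pytket_dqc provides its own generator methods).

```python
import networkx as nx

def unstructured(n_modules, p=0.4, seed=None):
    g = nx.erdos_renyi_graph(n_modules, p, seed=seed)
    while not nx.is_connected(g):            # post-select connected instances
        g = nx.erdos_renyi_graph(n_modules, p)
    return g

def scale_free(n_modules, m=1, seed=None):
    # preferential attachment (Barabasi-Albert model)
    return nx.barabasi_albert_graph(n_modules, m, seed=seed)

def small_world(n_modules, k=2, rewire_p=0.3, seed=None):
    # Watts-Strogatz model, post-selected for connectivity by networkx itself
    return nx.connected_watts_strogatz_graph(n_modules, k, rewire_p, seed=seed)
```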
In the case of the heterogeneous networks, edge probabilities are set so that the average number of edges incident on each module is two, and qubits are assigned at random to each module. We take the size of the link qubit register to be the largest integer smaller than the average number of computational qubits per module. This means that one would not typically be able to fit the computational qubits of one network module into the link qubit register of another, and as such networking the modules together results in an increase in the number of computation qubits. Bounds to the size of the link qubit register are not considered in Section 5.4, but are explored in Appendix C.

### Circuits

The following classes of randomly generated circuits are considered during the experiments of Section 5.4.

**CZ Fraction:** Consists of \(d\) layers of gates, with each layer built from \(H\) and \(CZ\) gates. A parameter cz_fraction determines the proportion of the qubits on which \(CZ\) gates act in each layer. These benchmark circuits are already in the gateset considered by the distribution workflows studied, and so provide a controlled way to study the performance of these workflows. \(CZ\) fraction circuits are introduced in [12], exemplified in Fig. 9(a), and detailed in Algorithm 2. While CZ Fraction circuits were designed for the study of DQC workflows, the following are inspired by popular protocols.

**Quantum Volume:** Consists of \(d\) _layers_ of random two-qubit gates, each acting on different bipartitions of the qubits, similar to those used for the quantum volume benchmark [29]. By utilising uniformly random two-qubit unitaries and all-to-all connectivity, Quantum Volume circuits provide a comprehensive benchmark. While CZ Fraction and Pauli Gadget circuits naturally decompose to contain \(CZ\) gates when rewritten in \(\{H,R_{Z},CR_{Z}\}\), Quantum Volume circuits will contain \(CR_{Z}\) gates of a variety of rotation angles. This exemplifies the capacity of pytket_dqc to distribute such gates. Quantum Volume circuits are exemplified in Fig. 9(b) and detailed in Algorithm 3.

**Pauli Gadget:** Pauli gadgets [30] are quantum circuits implementing the exponential of a Pauli tensor. Sequences of Pauli gadgets acting on qubits form _product formula_ circuits, most commonly used in Hamiltonian simulation and the variational quantum eigensolver (VQE) [31, 32, 33]. Circuits from this particular class of Pauli Gadget circuits are constructed from several layers of random Pauli gadgets, each acting on a random subset of \(n\) qubits [34]. Pauli Gadget circuits are exemplified in Fig. 9(c) and detailed in Algorithm 4.

In the case of all benchmarks conducted in this work, the number of layers used is set to be equal to the number of qubits in the circuit. The comparative size of the circuits in these classes is seen in Fig. 10. Note that \(CZ\) fraction circuits contain many fewer two-qubit gates than circuits from the other two classes. This is because each layer of the Quantum Volume and Pauli Gadget circuits corresponds to many gates when decomposed into the \(\{H,R_{Z},CR_{Z}\}\) gate set. Further, while circuits spanning the same number of qubits in the Quantum Volume class contain more two-qubit gates than those in the Pauli Gadget class, this number is comparable.

### Distribution workflows

This section details the distribution workflows used in the experiments of Section 5.4.
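Before detailing the workflows, we note that the CZ Fraction construction of Section 5.2 (Algorithm 2) can be reproduced directly in pytket. The helper below, its name and its pairing strategy are an illustrative sketch under stated assumptions, not the exact benchmark generator used here.

```python
import random
from pytket import Circuit

def cz_fraction_circuit(n_qubits, depth, cz_fraction, seed=None):
    rng = random.Random(seed)
    circ = Circuit(n_qubits)
    for _ in range(depth):
        # a qubit joins the CZ pool with probability cz_fraction;
        # the remaining qubits receive an H gate this layer
        cz_qubits = [q for q in range(n_qubits) if rng.random() < cz_fraction]
        for q in set(range(n_qubits)) - set(cz_qubits):
            circ.H(q)
        rng.shuffle(cz_qubits)
        for a, b in zip(cz_qubits[::2], cz_qubits[1::2]):  # randomly pair the pool
            circ.CZ(a, b)
    return circ

circ = cz_fraction_circuit(8, 8, cz_fraction=0.7, seed=0)
print(circ.n_gates)
```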
Our novel distribution workflows improve upon the distributions output by the following schemes, presented in the literature [11, 14] and available through pytket_dqc.

**Embed:** Utilises the approach discussed in Section 3.4.1 for distributing quantum circuits using vertex covering.

**Partition:** Utilises the approach discussed in Section 3.2 for distributing quantum circuits using hypergraph partitioning.

The following workflows are novel to this work, and are available through pytket_dqc. The refinement passes referenced here are detailed further in Appendix A.

**EmbedSteiner:** All gates in each hyperedge of distributions resulting from Embed act between the same two modules. EmbedSteiner improves upon the output of Embed by merging packets where doing so does not require additional embedding, as discussed in Appendix A.3. This results in an ebit saving from reusing proxy link qubits when distributing entanglement according to Steiner trees.

**EmbedSteinerDetach:** Non-local gates are allocated by Embed to either one of the two modules that contain the qubits the gate acts on.

Figure 8: **Example network architecture graphs.** Vertices indicate modules. Edges indicate connections along which ebits can be established.

```
Input: Width \(n\in\mathbb{Z}\), depth \(d\in\mathbb{Z}\), fraction \(p\in[0,1]\)
Output: Circuit \(C_{n}\)
1: for each layer \(t\) up to depth \(d\) do
2:   for each qubit \(q_{i}\) do
3:     With probability \(1-p\) apply \(H\).
4:   Randomly pair all qubits to which no \(H\) was applied.
5:   To each pair apply \(CZ\).
```
**Algorithm 2** Building an instance of CZ Fraction.

```
Input: Width \(n\in\mathbb{Z}\), depth \(d\in\mathbb{Z}\)
Output: Circuit \(C_{n}\)
```
**Algorithm 3** Building an instance of Quantum Volume.

```
Input: Width \(n\in\mathbb{Z}\), depth \(d\in\mathbb{Z}\)
Output: Circuit \(C_{n}\)
```
**Algorithm 4** Building an instance of Pauli Gadget.

#### 5.4.1 Homogeneous networks

We compare the techniques described in Section 4 to the techniques of [12], namely Full6*-Simple and Full6*-LP. Aligning with the target scenario of [12], we consider homogeneous networks and CZ Fraction circuits. We consider networks with 4, 5 and 6 modules, each with 8 qubits per module, as well as 2-module networks with 16 and 25 qubits per module. For each network size we generate 5 random CZ fraction circuits of that size. Results concerning networks with more than 2 modules can be seen in Fig. 11. Consistently, the unrefined distribution workflows producing the lowest cost distributions are Partition and Full6*-Simple. For smaller networks, Partition mildly outperforms Full6*-Simple.

Footnote 7: Note that this contrasts with the results reported in [12]. This is the result of correcting a poor choice of default parameters in [11], which limited how large a hyperedge could be.

Annealing performs the worst overall, which is to be expected as the methods used are particularly general. However, Annealing is particularly sensitive to the values of hyper-parameters, especially the number of annealing iterations performed. Hence, these results may be improved by increasing the number of iterations. Here, the number of iterations is chosen so that the time taken by Annealing is roughly comparable to that of the best-performing unrefined distribution workflows, as seen in Fig. 11(b).
Partition performs the quickest across circuit sizes and \(CZ\) fractions, while the scaling of Full6*-LP and EmbedSteinerDetach is the worst. However, as no workflow takes more than a few minutes to complete, the time taken is acceptable in all cases. Embed performs poorly in the results of Fig. 11(a). This is unsurprising, as it corresponds to the original work from [14], which was designed to work best with 2 modules, where detached gates need not be considered. However, EmbedSteinerDetach significantly improves upon Embed, demonstrating the significant potential gains to be made from the use of detached gates. Indeed, in the case of 2 modules, as seen in Fig. 12, Embed performs the best (particularly in the regime of 50 qubits and \(CZ\) fractions of 0.5 and 0.7). In this case EmbedSteinerDetach does not improve the results, as is to be expected since in the 2-module case there is no opportunity for detached gates.

In the case of networks containing more than 2 modules, PartitionEmbed barely improves upon Partition. This may be because Partition produces many detached gates which cannot be embedded by the embedding refinement pass. In the case of 2-module networks, where no gates are detached, PartitionEmbed mildly improves upon Partition, but does not outperform Embed. This demonstrates that embedding can be beneficial when sequences of gates act between 2 modules, but implies that embedding should be considered in the first instance on such networks, rather than through refinement.

We consider the performance of these techniques on the Quantum Volume and Pauli Gadget circuit classes, giving the results in Fig. 13. Here we consider only networks with more than 2 modules, and so do not consider Embed, which performs well only on 2-module networks. As these circuits have a significantly larger number of gates than the \(CZ\) Fraction circuits, we consider only the quicker distribution workflows, namely FullG*-Simple, Partition, PartitionEmbed, and Annealing.

Figure 9: **Examples of circuits used for benchmarks.**

Figure 10: **Average number of two-qubit gates for circuit of each type.** Points indicate the mean number of two-qubit gates for circuits of the given type, covering the given number of qubits. Error bars indicate one standard deviation.

Figure 11: **Distribution techniques applied to homogeneous networks and CZ fraction circuits.** Here we use the notation where homogeneous_n_m is a homogeneous network connecting n modules in a network with a total of m qubits. Bars indicate the median over 5 circuits. Error bars indicate 75% percentile range.

Figure 12: **Distribution techniques applied to homogeneous networks and CZ fraction circuits over 2 modules.** Here we use the notation that homogeneous_n_m is a homogeneous network connecting n modules in a network with a total of m qubits. Bars indicate the median over 5 circuits. Error bars indicate 75% percentile range.

Note that, for a similar total number of two-qubit gates, the cost of distributing Pauli Gadget circuits is lower than the cost of distributing Quantum Volume circuits. Refer to Fig. 10 for details on comparative two-qubit gate counts. This is to be expected since the structure of Pauli Gadget circuits, having long sequences of \(CZ\) gates, allows for the construction of larger distributable packets. For the same reason, Pauli Gadget circuits may be distributed more quickly. In Fig. 13 we see a similar pattern in the relative performance of the schemes to that seen in Fig. 11, namely that there is no significant difference in the ebit costs of the distributions produced by each workflow, apart from that Annealing has a higher cost.
Partition performs best if both the ebit cost and time taken are considered. FullG*-Simple performs similarly well as measured by ebit cost, but the time required to distribute with FullG*-Simple scales worse as the number of distributable packets becomes very large, as is the case for the larger Quantum Volume circuits.

#### 5.4.2 Heterogeneous networks

Here we compare the performance of Embed, EmbedSteiner, EmbedSteinerDetach, Partition, PartitionHetero, PartitionEmbed, and PartitionHeteroEmbed, each of which is capable of performing circuit distribution over heterogeneous networks (although Embed and Partition are not designed for them). We do not include results for Annealing in the plots of this section, as in each case it is outperformed by PartitionHetero. We use networks with 3, 4, and 5 modules, each with an average of 6 computational qubits per module. Here we do not bound the size of the link qubit register, instead exploring these bounds in Appendix C. For each network size we generate 5 random instances of each of the heterogeneous networks described in Section 5.1. The results of these benchmarks can be found in Figs. 14 and 15, and our findings are detailed below.

FullG*-Simple and FullG*-LP are not suited to heterogeneous networks, and so the most relevant comparable result is that of [13]. Unfortunately, we were not able to access the implementation of the work of [13] for comparison. The latter work does not make use of Steiner trees for entanglement distribution, which are utilised by all the schemes presented in this section except Embed. The work of [13] does not consider embedding either, which is considered by Embed, EmbedSteiner, EmbedSteinerDetach, PartitionEmbed, and PartitionHeteroEmbed, and is shown to provide a reduction in ebit cost. As such, we expect our techniques to compare favourably to those of [13].

Refinement has little effect on Quantum Volume circuits. We expect that distributable packets are unavoidably small in the case of Quantum Volume circuits since there are few consecutive \(CR_{Z}\) gates in the circuits and few valid embedding units: the phases of \(R_{Z}\) gates will rarely satisfy condition (d) from Lemma 7. In Fig. 13(a) this manifests in there being no gain from using refinement passes targeted at the use of Steiner trees and embedding. Additionally, no benefit is found in these circuits when performing boundary reallocation targeted at optimising for the network topology (PartitionHetero) and detached gates (EmbedSteinerDetach). This again reflects that the hyperedges are too small (often just edges from a gate-vertex to a qubit-vertex), which, combined with the uniformly random connectivity of the circuit, leads to no window for improvement of the vertex allocation.

Each refinement improves the median cost of Pauli Gadget circuits. As opposed to Quantum Volume circuits, distributable packets in Pauli Gadget circuits are relatively large, and can be beneficially combined. This is shown in the improvement achieved in Figs. 13(a) and 14(a) by employing refinement passes making use of Steiner trees, detached gates and embedding.

Pauli Gadget circuits are cheaper and quicker to distribute.
Fig. 13(a) demonstrates that, as a result of Pauli Gadget circuits having larger distributable packets, the cost of distribution of Pauli Gadget circuits is much less than that of Quantum Volume circuits of similar size. Likewise, as seen in Fig. 13(b), the time required to distribute Pauli Gadget circuits is shorter, since run time scales primarily with respect to the number of packets, rather than the number of qubits or gates in the circuit.

CZ Fraction circuits on networks with more than 2 modules do not benefit greatly from embedding. As observed initially in Fig. 10(a), we see again in Fig. 14(a) that refinement to make use of embedding has little impact on the resulting cost of distributing CZ fraction circuits onto networks with more than 2 modules. This identifies a middle ground between the more structured Pauli Gadget circuits, which do benefit from embedding, and the larger gate set of the Quantum Volume circuits, which do not benefit from refinement of any kind.

Techniques combined perform best. We see that EmbedSteinerDetach typically performs as well as or better than the other workflows. This demonstrates the benefit of combining the use of detached gates, Steiner trees, and embedding, and that no one or two alone would perform best. That EmbedSteinerDetach mildly outperforms PartitionHeteroEmbed -- which also makes use of detached gates, Steiner trees and embedding -- on average in the Pauli Gadget results of Fig. 13(a) indicates that embedding is hard to capture in a refinement pass, so it should instead be optimised for in the first instance.

Figure 13: **Distribution techniques applied to homogeneous networks and Quantum Volume and Pauli Gadget circuits.** Here we use homogeneous networks built of 3, 4, 5 and 6 modules, each with 8 qubits. Each sample in the experiment corresponds to a single circuit, with 5 samples per bar. Bars indicate the median over five circuits. Error bars indicate 75% percentile range.

Figure 14: **Distribution over heterogeneous networks.** Here we use heterogeneous networks built of 3, 4, and 5 modules, each with an average of 6 qubits. Each sample in the experiment corresponds to a single circuit-network pair. Each bar/box considers 5 circuits and 5 networks, giving a total of 25 circuit-network pairs per bar/box.

Figure 15: **Distribution techniques applied to heterogeneous networks and CZ fraction circuits.** Here we use the notation where type_n_m is a network of type connecting n modules in a network with a total of m qubits. Bars indicate the median over 5 circuits. Error bars indicate 75% percentile range.

#### 5.4.3 Chemically-Aware Ansatz

We explore the performance of our approaches in the particular case of a chemically-aware unitary coupled cluster singles and doubles ansatz [36]. We use the example of the minimal basis H\({}_{2}\)O molecule with C\({}_{2v}\) point group symmetry and the 6 electrons in 5 spatial orbitals (6e, 5o) active space. The corresponding circuit contains 10 qubits, and is built from Pauli gadgets selected to reflect the symmetries of the system. In the gateset \(\{H,R_{Z},CR_{Z}\}\) the circuit contains 463 two-qubit gates. We distribute this circuit onto the networks of 11 qubits depicted in Fig. 16, without bounds on the link qubit register sizes. The results are listed in Table 1. In the results of Sections 5.4.1 and 5.4.2 and Appendix C, the number of qubits in the circuit matches the total number of computation qubits in the network.
However, our tools are capable of managing situations where there are more computational qubits in the network than are required by the circuit, as demonstrated here. As expected, and as indicated by the results of Section 5.4.2, we see that the ebit cost decreases with additional refinement. Here it is noticeable that embedding is beneficial, both when introduced as part of a refinement pass and when introduced during an initial circuit distribution. This shows that real applications have circuit structures which benefit from embedding. Indeed, it is the case that EmbedSteinerDetach, which introduces embedding in the first instance, performs best.

## 6 Conclusion and future work

In this work we consider the distribution of quantum circuits over heterogeneous networks. We propose a collection of methods for distributing a given quantum circuit over an arbitrary network in a way which minimises the number of ebits required. We make these methods available through pytket_dqc. Our first contribution is to introduce two workflows, Annealing and Partition, which perform quantum circuit distribution over heterogeneous networks in a way which makes use of detached gates. Secondly, where previous work had made use of detached gates or embedding, we present approaches to combining both. We do so by starting from distribution workflows that make use of either one and applying rounds of refinement to make the most use of the other. Finally, by proposing and incorporating entanglement distribution via Steiner trees, and by developing methods to combine their use with embedding, we further improve our solutions.

We extensively benchmark our distribution workflows on a selection of random and application-motivated circuits. We identify that the best workflow to utilise on bipartite networks is Embed, while for larger homogeneous networks Partition is best. For structured, application-motivated circuits on heterogeneous networks EmbedSteinerDetach is best, while for unstructured Quantum Volume circuits it is best to simply use the fastest workflow, in this case Partition.

In the future, optimisation strategies that can take into account the bound to the link qubit registers should be explored further. We are aware of two papers that do so, namely [13, 14]; however, the approach from [13] considers neither the embedding technique nor Steiner trees, while [14] targets networks with only two modules. Moreover, even though the approach from [13] tends to yield solutions that meet the specified bound to the link qubit register, this is not guaranteed -- in certain cases, it is necessary to split some of the distributable packets in a way similar to the one we discuss in Appendix C. Future work may also consider preprocessing of the circuit to facilitate less costly distributions. This is particularly applicable to Pauli Gadgets, which may be decomposed in a variety of ways [30], each of which may be more or less suited to distribution. Finally, we encourage the investigation of dynamical quantum circuit distribution, which combines gate teleportation and qubit teleportation. In [11, 13] the authors propose approaches to doing so, suggesting the static distribution of segments of circuits, stitched together via qubit teleportation. The work of this paper can be straightforwardly used as a static distributor in this framework, obtaining similar gains to those reported in [11, 13].
We expect that approaches capable of freely interleaving qubit teleportation and EJPP processes will be even more beneficial, and we suggest this be the most pressing line of further work.

Figure 16: **Networks for chemistry-aware experiments.** Numbers in vertices indicate the number of qubits in each module; edges indicate connections along which ebits can be established.

Code Availability. The techniques outlined in Section 4 are implemented in pytket_dqc, which can be found at [https://github.com/CQCL/pytket-dqc](https://github.com/CQCL/pytket-dqc) along with example notebooks. Documentation for pytket_dqc can be found at [https://cqcl.github.io/pytket-dqc/](https://cqcl.github.io/pytket-dqc/). The results of the benchmarks in Section 5 can be found at [https://github.com/CQCL/pytket-dqc_experiment_data](https://github.com/CQCL/pytket-dqc_experiment_data).

Benchmark Tools. The results in Section 5.4 were obtained using a MacBook Pro with a 2.3 GHz Dual-Core Intel Core i5 processor and 8 GB 2133 MHz LPDDR3 memory. The time to generate a distribution in each plot refers to the time taken by this machine.

Acknowledgements. The authors thank Ranjani Sundaram, Himanshu Gupta, and C. R. Ramakrishnan for their insights on their own work, and for sharing the related code. We also acknowledge the contributions made by Kosuke Matsui and Akihito Soeda during early conversations about the project. Thanks to Matty Hoban and Yao Tang for their careful proofreading of this manuscript. JYW is supported by the Ministry of Science and Technology, Taiwan, R.O.C. under Grants no. NSTC 110-2112-M-032-005-MY3, 111-2923-M-032-002-MY5 and 111-2119-M-008-002. TF and MM are supported by: MEXT Quantum Leap Flagship Program (MEXT QLEAP) JPMXS0118069605, JPMXS0120351339, Japan Society for the Promotion of Science 21H03394.
2307.02153
A renormalization group improvement for thermally resummed effective potential
We propose a novel method for renormalization group improvement of thermally resummed effective potential. In our method, $\beta$-functions are temperature dependent as a consequence of the divergence structure in resummed perturbation theory. In contrast to the ordinary $\overline{\text{MS}}$ scheme, the renormalization group invariance of the resummed finite-temperature effective potential holds order by order, which significantly mitigates a notorious renormalization scale dependence of phase transition quantities such as a critical temperature even at the one-loop order. We also devise a tractable method that enables one to incorporate temperature-dependent higher-order corrections by fully exploiting the renormalization group invariance.
Koichi Funakubo, Eibun Senaha
2023-07-05T09:50:07Z
http://arxiv.org/abs/2307.02153v2
# A renormalization group improvement for thermally resummed effective potential ###### Abstract We propose a novel method for renormalization group improvement of thermally resummed effective potential. In our method, \(\beta\)-functions are temperature dependent as a consequence of the divergence structure in resummed perturbation theory. In contrast to the ordinary \(\overline{\mathrm{MS}}\) scheme, the renormalization group invariance of the resummed finite-temperature effective potential holds order by order, which significantly mitigates a notorious renormalization scale dependence of phase transition quantities such as a critical temperature even at the one-loop order. We also devise a tractable method that enables one to incorporate temperature-dependent higher-order corrections by fully exploiting the renormalization group invariance. ## I Introduction The thermal effective potential has been widely used to analyze phase transitions such as the electroweak phase transition (EWPT). As is well known, the perturbative method to evaluate the effective potential suffers from bad high-temperature behavior even in a theory with small coupling constants [1; 2]. One of the remedies is to incorporate the most dominant part of the higher order terms at high temperatures, that is, the mass corrections which are proportional to \(T^{2}\), into the lower order contributions in a systematic manner. This is the so-called resummation of the thermal mass, or, simply, the thermal resummation. The resummation also cures the infrared divergence originating from the zero Matsubara frequency mode in bosonic-loop contributions.1 The renormalization-group (RG) improvement of the effective potential is another method to re-arrange the perturbation series, in which a part of the higher-order contributions are taken into the lower-order terms in perturbation theory [3; 4; 5; 6]. It is based on the fact that the bare Lagrangian, and hence the all-order results including the counterterms (CTs), are independent of the renormalization scale. Although the perturbative effective potential has an explicit scale dependence at some fixed order, the scale invariance is improved by introducing the running parameters. Once the effective potential is made scale invariant at some order, the scale can be fixed in such a way that some part of the higher order terms vanishes. The running parameters defined by use of \(\beta\)-functions have often been determined by renormalizing the theory with the \(\overline{\mathrm{MS}}\) scheme. At finite temperatures, a new scale-dependent term arises, which cannot be taken care of by the running parameters defined by the \(\overline{\mathrm{MS}}\) scheme. This situation is made more serious when one carries out the thermal resummation, leading to a violation of the order-by-order RG invariance of the effective potential (for recent studies, see, e.g., Refs. [7; 8]). Footnote 1: Since the smallest frequency of a fermion is \(\pi T\), the effect of the fermionic thermal resummation is much weaker than the bosonic one, so one usually considers only the bosonic thermal resummation. In this letter, we propose a novel RG improvement method for the resummed effective potentials in which the RG invariance holds order by order. In our method, \(\beta\)-functions are properly defined in resummed perturbation theory instead of using those in the \(\overline{\mathrm{MS}}\) scheme. As a consequence, our \(\beta\)-functions of the dimensionful parameters are temperature dependent.
For illustrative purpose, we first work in the \(\phi^{4}\) theory and explicitly show the RG invariance of the resummed one- and two-loop effective potentials in our scheme. Moreover, we further refine the effective potential by incorporating a series of dominant temperature-dependent higher-order terms by taking advantage of the RG invariance. To apply our scheme to a case of first-order phase transition as needed for electroweak baryogenesis [9] (for reviews see, e.g., Refs. [10]), we extend the \(\phi^{4}\) theory by adding another real scalar field. We make numerical comparisons between the \(\overline{\mathrm{MS}}\) and our schemes and show that the latter yields much less renormalization scale dependence on a critical temperature even at the one-loop level. At the two-loop level, however, not much numerical differences are observed between the two schemes unless hard thermal loops are significantly sizable. Our numerical study also shows that our refined RG-improved one-loop effective potential can capture the two-loop order effects properly. This would be particularly useful when the two-loop effective potential is not available. ## II \(\beta\)-functions in the resummed theory We first clarify differences between \(\beta\)-functions in the \(\overline{\mathrm{MS}}\) and those in our scheme. To make our discussion simpler, we focus on scalar theories. The derivation of \(\beta\)-functions in more general theories is given in Ref. [11] (see also Ref. [12]). Let us collectively denote scalar fields and couplings as \(\phi_{i}(x)\) and \(g_{k}\) and scalar masses as \(m_{a}^{2}\). We also define a vacuum energy as \(\Omega\). As we see in the next section, \(\Omega\) is also needed to show the RG invariance of the effective potential. In this work, we adopt the dimensional regularization in which the spacetime dimension is analytically continued to \(d=4-\epsilon\) dimension. Since the \(\beta\)-functions of the dimensionless parameters are not affected within the scope of our discussion here, we derive only those of dimensionful parameters. In the \(\overline{\text{MS}}\) scheme, the bare parameters are expressed in terms of the renormalized ones and \(\epsilon\) poles as \[m_{Ba}^{2} =\left(\delta_{ab}+\sum_{n=1}^{\infty}\frac{b_{ab}^{(n)}(g)}{ \epsilon^{n}}\right)m_{b}^{2}, \tag{1}\] \[\Omega_{B}\mu^{\epsilon} =\Omega+\sum_{n=1}^{\infty}\frac{\omega_{n}(g)}{\epsilon^{n}}, \tag{2}\] where \(\mu\) is an arbitrary scale. From the \(\mu\) independence of the bare parameters, one can define the \(\beta\)-functions of each parameter as \[m_{a}^{2}\beta_{m_{a}^{2}} =\lim_{\epsilon\to 0}\mu\frac{dm_{a}^{2}}{d\mu}=\sum_{k,b}b_{ab,k}^{(1)}g_{k}m_{ b}^{2}, \tag{3}\] \[\beta_{\Omega} =\lim_{\epsilon\to 0}\mu\frac{d\Omega}{d\mu}=\omega_{1}, \tag{4}\] where \(b_{ab,k}^{(1)}=db_{ab}^{(1)}/dg_{k}\). It is important to note that the \(\beta\)-functions are given by the coefficients of the single \(\epsilon\) pole, which implies that if those coefficients are modified by thermal resummations, the \(\beta\)-functions would no longer remain the same for the theoretical consistency. This is exactly the case we consider in the following. In resummed perturbation theories, the Lagrangian is reorganized as [13] \[\mathcal{L}_{B} =\mathcal{L}_{R}+\mathcal{L}_{\text{CT}}\] \[=\left[\mathcal{L}_{R}-\frac{1}{2}\Sigma_{a}(T)\phi_{a}^{2} \right]+\left[\mathcal{L}_{\text{CT}}+\frac{1}{2}\Sigma_{a}(T)\phi_{a}^{2} \right], \tag{5}\] where \(\Sigma_{a}(T)\) denotes the thermal mass of the scalar \(\phi_{a}\). 
At the leading order, \(\Sigma_{a}(T)=\mathcal{O}(g_{i}T^{2})\) with \(g_{i}\) representing scalar quartic couplings. Even though nothing has changed in the bare Lagrangian, \(\Sigma_{a}(T)\) in the first square brackets is regarded as zeroth order in the resummed perturbation theory while that in the second ones is the part of the counterterm (CT) which is one order higher in this perturbative expansion (referred to as _thermal counterterm_ hereafter). Because of this reorganization, the propagators of the scalars are temperature dependent, and one encounters temperature dependent divergences when computing effective potentials at loop levels. Although such divergences must be cancelled in the all-order calculation, they inevitably appear at a fixed order in the resummed perturbation theory. With this consideration, we modify Eq. (1) as \[m_{Ba}^{2}=\left(\delta_{ab}+\sum_{n=1}^{\infty}\frac{b_{ab}^{(n) }(g)}{\epsilon^{n}}\right)m_{b}^{2}+\sum_{n=1}^{\infty}\frac{\tilde{b}_{ab}^{( n)}(g)}{\epsilon^{n}}\Sigma_{b}(T). \tag{6}\] As explicitly shown in concrete models up to the two-loop level in the next two sections, \(\Sigma(T)\) must be treated as if it were the \(\mu\)-independent objects though it contains \(g_{i}\). This condition, called _consistency condition_, is necessary to prove the order-by-order RG invariance of the resummed effective potential. Following the same procedure as in the \(\overline{\text{MS}}\) scheme with the consistency condition, one obtains \[m_{a}^{2}\beta_{m_{a}^{2}}=\sum_{k,b}\left(b_{ab,k}^{(1)}m_{b}^{2 }+\tilde{b}_{ab,k}^{(1)}\Sigma_{b}\right)\sigma_{k}g_{k}. \tag{7}\] We note that although the vacuum energy is also modified by the thermal resummation, the relation \(\beta_{\Omega}=\omega_{1}\) still holds under the aforementioned consistency condition. ## III \(\phi^{4}\) theory We demonstrate how our RG scheme works using the \(\phi^{4}\) theory. The bare Lagrangian is given by \[\mathcal{L}_{B} =\frac{1}{2}\partial_{\mu}\Phi_{B}\partial^{\mu}\Phi_{B}-V_{B}( \Phi_{B}), \tag{8}\] \[V_{B}(\Phi_{B}) =\Omega_{B}-\frac{\nu_{B}^{2}}{2}\Phi^{2}+\frac{\lambda_{B}}{4!} \Phi_{B}^{4}. \tag{9}\] As mentioned in Sec. II, after decomposing \(\mathcal{L}_{B}\) into \(\mathcal{L}_{R}\) and \(\mathcal{L}_{\text{CT}}\), we subtract and add \(\Sigma(T)\) in each part. The explicit forms of CTs are summarized in Ref. [11]. With the resummed Lagrangian, we evaluate the effective potential up to the two-loop level. Denoting the classical background field as \(\varphi\), the tree-level effective potential has the form \[V_{0}(\varphi)=\Omega+\frac{1}{2}\left(-\nu^{2}+\Sigma(T)\right) \varphi^{2}+\frac{\lambda}{4!}\varphi^{4}, \tag{10}\] The field-dependent mass is defined as \[M^{2}=\frac{\partial^{2}V_{0}}{\partial\varphi^{2}}=m^{2}+\Sigma(T), \tag{11}\] with \(m^{2}=-\nu^{2}+\lambda\varphi^{2}/2\). Using a propagator with \(M^{2}\), one can obtain the one-loop correction to the effective potential [13] \[\mu^{\epsilon}V_{1}(\varphi)=\frac{M^{4}}{4(16\pi^{2})}\left(- \frac{2}{\epsilon}+\ln\frac{M^{2}}{\bar{\mu}^{2}}-\frac{3}{2}+\mathcal{O}( \epsilon)\right), \tag{12}\] where \(\bar{\mu}=\sqrt{4\pi e^{-\gamma_{E}}}\mu\simeq 2.66\mu\) with \(\gamma_{E}\) being the Euler constant. 
In our renormalization scheme, we remove the divergences including the temperature dependent pieces by the one-loop CTs, resulting in \[\delta^{(1)}\Omega =\frac{1}{\epsilon}\frac{(\nu^{2}-\Sigma)^{2}}{32\pi^{2}},\quad \delta^{(1)}\nu^{2}=\frac{1}{\epsilon}\frac{\lambda}{16\pi^{2}}(\nu^{2}-\Sigma),\] \[\delta^{(1)}\lambda =\frac{1}{\epsilon}\frac{3\lambda^{2}}{16\pi^{2}}. \tag{13}\] The bare \(\nu_{B}\) is expressed as \[\nu_{B}^{2}=Z_{\Phi}^{-2}(\nu^{2}+\delta^{(1)}\nu^{2}), \tag{14}\] where \(Z_{\Phi}\) denotes the wavefunction renormalization constant for \(\Phi\), and \(Z_{\Phi}=1\) at the one-loop level. From Eq. (14), the coefficient of the single \(\epsilon\) pole is found to be \(b_{1}(\lambda)=-\tilde{b}_{1}(\lambda)=\lambda/16\pi^{2}\). Plugging them into the formula (7), one obtains \[\nu^{2}\beta^{(1)}_{\nu^{2}}=\frac{\lambda(\nu^{2}-\Sigma)}{16\pi^{2}}. \tag{15}\] By doing the same step, one can find the \(\beta\)-functions of the remaining parameters and \(\gamma\)-function as \[\beta^{(1)}_{\Omega}=\frac{(\nu^{2}-\Sigma)^{2}}{32\pi^{2}},\quad\beta^{(1)}_ {\lambda}=\frac{3\lambda^{2}}{16\pi^{2}},\quad\gamma^{(1)}_{\Phi}=0. \tag{16}\] Note that the \(\beta\)-functions in our scheme are reduced to those in the \(\overline{\text{MS}}\) scheme by taking \(\Sigma=0\), which implies that differences between our scheme and \(\overline{\text{MS}}\) scheme could be sizable when \(\Sigma\) dominates over \(\nu^{2}\). We also note that in the \(\overline{\text{MS}}\) scheme, the \(T\)-dependent divergences appearing in Eq. (12) remain at this order, and higher-loop contributions are needed to cancel them [14] (See also Ref. [15]).2 Footnote 2: If \(\mathcal{L}_{R}\) and \(\mathcal{L}_{\text{CT}}\) are defined as in Ref. [16; 17] instead of the way they are defined in Eq. (5), the order-by-order renormalization with the \(\overline{\text{MS}}\) scheme also holds by regarding the thermal mass term as one-order higher. After the renormalization, the resummed one-loop effective potential at the one-loop level is given by \[V_{\text{eff}}(\varphi)=V_{0}(\varphi)+V_{1}(\varphi), \tag{17}\] where \[V_{0}(\varphi) =\Omega+\frac{1}{2}\left(-\nu^{2}+\Sigma(T)\right)\varphi^{2}+ \frac{\lambda}{4!}\varphi^{4}, \tag{18}\] \[V_{1}(\varphi) =\frac{M^{4}}{4(16\pi^{2})}\left(\ln\frac{M^{2}}{\bar{\mu}^{2}}- \frac{3}{2}\right)+\frac{T^{4}}{2\pi^{2}}I_{B}(A^{2})-\frac{1}{2}\Sigma(T) \varphi^{2}, \tag{19}\] with \(A^{2}=M^{2}/T^{2}\) and the thermal function of the boson (\(I_{B}\)) is defined as \[I_{B}(A^{2})=\int_{0}^{\infty}dx\ x^{2}\ln\left(1-e^{-\sqrt{x^{2}+A^{2}}} \right). \tag{20}\] The last term in \(V_{1}(\varphi;T)\) is nothing but the thermal CT. In the high-\(T\) expansion, the \(+\Sigma(T)\varphi^{2}/2\) term arises from \(T^{4}I_{B}/(2\pi^{2})\), which is cancelled by the thermal CT, avoiding the double counting of \(\Sigma(T)\varphi^{2}/2\). As is the one-loop level, we regularize the two-loop effective potential by requiring that all the divergences be absorbed by the CTs, As a result, the two-loop contributions to the \(\beta\)-functions of the model parameters in our scheme are, respectively, given by \[\gamma^{(2)}_{\Phi} =\frac{\lambda^{2}}{12(16\pi^{2})^{2}}, \tag{21}\] \[\beta^{(2)}_{\Omega} =\frac{(\nu^{2}-\Sigma)\Sigma}{16\pi^{2}},\] (22) \[\nu^{2}\beta^{(2)}_{\nu^{2}} =\frac{\lambda^{2}(-\nu^{2}+\Sigma)}{(16\pi^{2})^{2}}+\frac{ \lambda\Sigma}{16\pi^{2}}+2\nu^{2}\gamma^{(2)}_{\Phi},\] (23) \[\beta^{(2)}_{\lambda} =-\frac{6\lambda^{3}}{(16\pi^{2})^{2}}+4\lambda\gamma^{(2)}_{ \Phi}. 
\tag{24}\] We should note that \(\beta^{(2)}_{\nu^{2}}\) contains the \(\lambda\Sigma/16\pi^{2}\) term, which is only one-loop suppressed. This is exactly the same form as the thermal correction term in Eq. (15) with an opposite sign. Therefore, they seemingly cancel each other in the sum of the one- and two-loop \(\beta\)-functions \(\beta_{\nu^{2}}=\beta^{(1)}_{\nu^{2}}+\beta^{(2)}_{\nu^{2}}\). However, when we evaluate \(\beta_{\nu^{2}}\) perturbatively, \(\lambda\Sigma/16\pi^{2}\) in \(\beta^{(2)}_{\nu^{2}}\) should be treated as a term one order higher than that in \(\beta^{(1)}_{\nu^{2}}\). In contrast to the \(\overline{\text{MS}}\) scheme, \(\beta^{(2)}_{\Omega}\) is nonzero in our scheme. After the renormalization, the two-loop correction to the resummed effective potential is cast into the form \[V_{2}(\varphi)=\frac{\lambda}{8}\bar{I}^{2}(M)-\frac{\lambda^{2}\varphi^{2}}{12}\tilde{H}(M)-\frac{1}{2}\Sigma(T)\bar{I}(M), \tag{25}\] where the thermal functions \(\tilde{H}(M)\) and \(\bar{I}(M)\) are defined as \[\tilde{H}(M)=3\bigg{[} -\frac{\bar{I}^{2}(M)}{2M^{2}}+\frac{\bar{I}(M)}{16\pi^{2}}-\frac{M^{2}}{(16\pi^{2})^{2}}\left(1+\frac{2}{3}f_{2}\right)\] \[\quad-\frac{1}{2M^{2}}\frac{T^{2}}{\pi^{2}}\big{(}I^{\prime}_{B}(A^{2})\big{)}^{2}-\frac{T^{2}}{16\sqrt{3}\pi^{3}}I^{\prime}_{B}(A^{2})\] \[\quad+\frac{4T^{2}}{(16\pi^{2})^{2}}K(A)\bigg{]}, \tag{26}\] \[\bar{I}(M)=\frac{M^{2}}{16\pi^{2}}\left(\ln\frac{M^{2}}{\bar{\mu}^{2}}-1\right)+\frac{T^{2}}{\pi^{2}}I^{\prime}_{B}(A^{2}), \tag{27}\] with \(I^{\prime}_{B}(A^{2})=\partial I_{B}(A^{2})/\partial A^{2}\) and \(f_{2}\simeq-1.76\). \(K(A)\) is a genuine thermal function arising from a sunset-type diagram. Its explicit form is given in Ref. [11]. The last term comes from the thermal CT, which plays a role in eliminating the double counting and linear-like terms in \(\varphi\) such as \(\mathcal{O}((M^{2})^{1/2}T^{3})\) [18]. Now we scrutinize the RG invariance of the resummed effective potentials obtained above. The effective potential satisfies the RGE \[0 =\mu\frac{dV_{\text{eff}}}{d\mu}\equiv\mathcal{D}V_{\text{eff}}\] \[=\left[\mu\frac{\partial}{\partial\mu}+\nu^{2}\beta_{\nu^{2}}\frac{\partial}{\partial\nu^{2}}+\beta_{\lambda}\frac{\partial}{\partial\lambda}-\gamma_{\Phi}\varphi\frac{\partial}{\partial\varphi}+\beta_{\Omega}\frac{\partial}{\partial\Omega}\right]V_{\text{eff}}. \tag{28}\] Let us check the RG invariance of the resummed effective potential at the one-loop level. Applying (28) to \(V_{0}\) and \(V_{1}\) respectively, one finds \[\mathcal{D}V_{0}|_{\text{one-loop}} =\beta_{\Omega}^{(1)}-\frac{\nu^{2}}{2}\beta_{\nu^{2}}^{(1)}\varphi^{2}+\frac{\beta_{\lambda}^{(1)}}{4!}\varphi^{4}=\frac{M^{4}}{32\pi^{2}}, \tag{29}\] \[\mathcal{D}V_{1}|_{\text{one-loop}} =\mu\frac{\partial V_{1}}{\partial\mu}=-\frac{M^{4}}{32\pi^{2}}+\mathcal{O}\left(\frac{1}{(16\pi^{2})^{2}}\right), \tag{30}\] where the consistency condition \(\mathcal{D}\Sigma=0\) is used. Therefore, one gets \(\mathcal{D}(V_{0}+V_{1})=0+\mathcal{O}(1/(16\pi^{2})^{2})\), and the error is of two-loop order. On the other hand, if one uses the \(\overline{\text{MS}}\) scheme, the error is estimated as \(\mathcal{D}(V_{0}+V_{1})_{\overline{\text{MS}}}=(-2m^{2}+\Sigma)\Sigma/(32\pi^{2})+\mathcal{O}(1/(16\pi^{2})^{2})\rightarrow-\lambda\varphi^{2}\Sigma/(32\pi^{2})+\mathcal{O}(1/(16\pi^{2})^{2})\), where the \(\varphi\)-independent terms are suppressed after the right arrow.
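As a quick numerical illustration of the running behind these checks, the following minimal Python sketch integrates the one-loop flow generated by the temperature-dependent \(\beta\)-functions of Eqs. (15)-(16), holding \(\Sigma(T)\) fixed in \(t\) as required by the consistency condition \(\mathcal{D}\Sigma=0\). The explicit choice \(\Sigma=\lambda T^{2}/24\) (the standard leading-order thermal mass of the \(\phi^{4}\) theory) and all numerical inputs are illustrative assumptions, not values taken from this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative inputs only (not from the paper).
T = 100.0                       # temperature
lam0, nu2_0, Omega0 = 0.5, 50.0**2, 0.0
Sigma = lam0 * T**2 / 24.0      # assumed leading-order thermal mass, held fixed
                                # in t (consistency condition: D Sigma = 0)
loop = 1.0 / (16.0 * np.pi**2)

def beta(t, y):
    """One-loop running of (nu^2, lambda, Omega) following Eqs. (15)-(16)."""
    nu2, lam, Omega = y
    d_nu2 = lam * (nu2 - Sigma) * loop        # mu d(nu^2)/d(mu)
    d_lam = 3.0 * lam**2 * loop               # mu d(lambda)/d(mu)
    d_Omega = 0.5 * (nu2 - Sigma)**2 * loop   # mu d(Omega)/d(mu)
    return [d_nu2, d_lam, d_Omega]

# Run from t = 0 (scale mu0) up to t = ln(mu/mu0) = 2.
sol = solve_ivp(beta, (0.0, 2.0), [nu2_0, lam0, Omega0])
nu2_t, lam_t, Omega_t = sol.y[:, -1]
print(f"nu^2(t=2)={nu2_t:.2f}, lambda(t=2)={lam_t:.4f}, Omega(t=2)={Omega_t:.2f}")
```

Setting `Sigma = 0.0` in the sketch recovers the ordinary \(\overline{\text{MS}}\) running, which makes the size of the thermal modification of the \(\beta\)-functions easy to probe numerically.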
Note that despite the lack of the RG invariance in the \(\overline{\text{MS}}\) scheme, the scale dependence could be unexpectedly smaller than that in our scheme due to an accidental cancellation between the two different errors. An example is given in Ref. [11]. However, such a less scale dependence has no robust footing. The proof of the RG invariance at the two-loop level is also straightforward. Applying the derivative operator \(\mathcal{D}\) to the resummed effective potentials (18), (19), and (25), respectively, we can verify that \(\mathcal{D}(V_{0}+V_{1}+V_{2})|_{\text{two-loop}}=0+\mathcal{O}(1/(16\pi^{2} )^{3})\). We here emphasize again that the order-by-order RG invariance holds by virtue of the modified \(\beta\)-functions in our scheme. Now we consider a further refinement that fully exploits the RG invariance to incorporate a series of temperature-dependent higher-order terms. The explicit form of the resummed one-loop effective potential that satisfies RGE (28) is \[\bar{V}_{\text{eff}}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi}; t)+\bar{V}_{1}(\bar{\varphi};t)\] \[=\bar{\Omega}+\frac{1}{2}\left(-\bar{\nu}^{2}+\Sigma\right)\bar{ \varphi}^{2}+\frac{\bar{\lambda}}{4!}\bar{\varphi}^{4}\] \[\quad+\frac{\bar{M}^{4}}{4(16\pi^{2})}\left(\ln\frac{\bar{M}^{2} }{e^{2t}\bar{\mu}_{0}^{2}}-\frac{3}{2}\right)+\frac{T^{4}}{2\pi^{2}}I_{B}(\bar {A}^{2})-\frac{1}{2}\Sigma\bar{\varphi}^{2}, \tag{31}\] with \(\bar{A}=\bar{M}/T\), \(\bar{M}^{2}=-\bar{\nu}^{2}+\Sigma(T)+\bar{\lambda}\bar{\varphi}^{2}/2\). The barred parameters \(\bar{\Omega}\), \(\bar{\nu}^{2}\), \(\bar{\lambda}\), and \(\bar{\varphi}\) are the running parameters as functions of \(t=\ln(\bar{\mu}/\bar{\mu}_{0})\) with \(\bar{\mu}_{0}\) being an initial scale. Hereafter, the unbarred parameters are defined at \(t=0\). Because \(t\) is arbitrary, it would be preferable to determine it in such a way that dominant higher-order terms are incorporated into the potential (31). At zero temperature, we could choose \(t(\varphi)=\ln(\bar{m}^{2}/\bar{\mu}_{0}^{2})/2\) to absorb logarithmic terms that could ruin the validity of perturbativity in some domain [4; 5]. At finite temperature, however, this choice is not able to tame dominant temperature-dependent terms arising from \[\bar{I}(\bar{M})\underset{T\gg\bar{M}}{\simeq}\frac{T^{2}}{12}, \tag{32}\] For this reason and because the truncation error of RGE at this order is given by \[\frac{d\bar{V}_{\text{eff}}(\bar{\varphi};t)}{dt}=\frac{\partial\bar{V}_{\text {eff}}(\bar{\varphi};t)}{\partial t}=0+\frac{1}{2}\frac{\partial\bar{M}^{2}}{ \partial t}\bar{I}(\bar{M}), \tag{33}\] we choose \(t\) to eliminate this error at each \(\varphi\), yielding \[t(\varphi)=\frac{8\pi^{2}}{\bar{M}^{2}}\bar{I}(\bar{M})_{t=0}. \tag{34}\] In this scheme, the higher-order terms in \(\bar{I}(\bar{M})\) appearing beyond the one-loop order can be taken into (31) through the \(t\)-\(\varphi\) relation in Eq. (34). In the zero temperature limit, Eq. (34) is reduced to \(t(\varphi)=\ln(\bar{m}^{2}/e\bar{\mu}_{0}^{2})/2\). Therefore, our scheme in this limit is related to the aforementioned scheme \(t(\varphi)=\ln(\bar{m}^{2}/\bar{\mu}_{0}^{2})/2\) by changing the input scale \(\bar{\mu}_{0}\) to \(\bar{\mu}_{0}/\sqrt{e}\). 
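As a numerical companion to Eqs. (20), (27), and (34), the sketch below evaluates the thermal function \(I_{B}\) and its derivative \(I^{\prime}_{B}\) by direct quadrature, reproduces the high-temperature limits \(I_{B}(0)=-\pi^{4}/45\) and \(I^{\prime}_{B}(0)=\pi^{2}/12\) (the latter underlying \(\bar{I}(\bar{M})\simeq T^{2}/12\) in Eq. (32)), and then evaluates the scale choice \(t(\varphi)\). The thermal mass \(\Sigma=\lambda T^{2}/24\) and the sample parameter values are our own illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def I_B(A2):
    """Bosonic thermal function of Eq. (20)."""
    f = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + A2)))
    return quad(f, 0.0, 60.0)[0]

def I_B_prime(A2):
    """dI_B/dA^2, obtained by differentiating the integrand of Eq. (20)."""
    f = lambda x: x**2 / (2.0 * np.sqrt(x**2 + A2)
                          * (np.exp(np.sqrt(x**2 + A2)) - 1.0))
    return quad(f, 0.0, 60.0)[0]

# High-temperature checks: I_B(0) = -pi^4/45 and I'_B(0) = pi^2/12.
print(I_B(0.0), -np.pi**4 / 45.0)
print(I_B_prime(0.0), np.pi**2 / 12.0)

def I_bar(M2, T, mu_bar):
    """Ibar(M) of Eq. (27) at t = 0."""
    return (M2 / (16.0 * np.pi**2) * (np.log(M2 / mu_bar**2) - 1.0)
            + T**2 / np.pi**2 * I_B_prime(M2 / T**2))

def t_of_phi(phi, T, mu_bar0, nu2, lam):
    """Scale choice of Eq. (34); Sigma = lam*T^2/24 is an assumed input."""
    Sigma = lam * T**2 / 24.0
    M2 = -nu2 + Sigma + lam * phi**2 / 2.0   # field-dependent mass, Eq. (11)
    return 8.0 * np.pi**2 / M2 * I_bar(M2, T, mu_bar0)

# Illustrative numbers only.
print(t_of_phi(phi=300.0, T=100.0, mu_bar0=100.0, nu2=50.0**2, lam=0.5))
```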
Let us denote the RG-improved potential (31) with \(\ell\)-loop order \(\beta\)-functions as \(\bar{V}_{\text{eff}}^{(\ell)}(\varphi;t(\varphi))\), which contains a part of the higher order terms beyond the \(\ell\)-loop order, arising from the running parameters including the vacuum energy \(\bar{\Omega}(t)\). It is easy to check that \(\bar{V}_{\text{eff}}^{(1)}(\varphi;t(\varphi))\) includes the \(\bar{I}(\bar{M})\) terms in the two-loop effective potential (25) using the \(t\) expansion of (31) \[\bar{V}_{\text{eff}}(\varphi;t) =\bar{V}_{\text{eff}}(\varphi;0)+\frac{\partial\bar{V}_{\text{eff}}(\varphi;t)}{\partial t}\bigg{|}_{t=0}t\] \[\quad+\frac{1}{2}\frac{\partial^{2}\bar{V}_{\text{eff}}(\varphi;t)}{\partial t^{2}}\bigg{|}_{t=0}t^{2}+\cdots \tag{35}\] and the \(t\)-\(\varphi\) relation (34). From those expressions, for example, it follows that \[\bar{V}_{\text{eff}}^{(1)}(\varphi;t(\varphi))=\bar{V}_{\text{eff}}^{(1)}(\varphi;0)+\frac{\lambda(M^{2}+\lambda\varphi^{2})}{8M^{2}}\bar{I}^{2}(M)_{t=0}. \tag{36}\] The second term is exactly the same as the \(\mathcal{O}(\bar{I}^{2}(M))\) terms in \(V_{2}(\varphi)\) given in Eq. (25). On the other hand, \(\bar{V}_{\text{eff}}^{(2)}(\varphi;t(\varphi))\) contains even the \(\mathcal{O}(\bar{I}(M))\) terms including the thermal CT in \(V_{2}(\varphi)\). This appears analogous to the leading and next-to-leading logarithmic resummations at zero temperature [4; 5]. An important difference is that the \(t\)-expanded \(\bar{V}_{\text{eff}}^{(2)}(\varphi;t(\varphi))\) includes more terms that are not present in the fixed-order (\(t=0\)) \(V_{2}(\varphi)\) in Eq. (25) [11]. Since the \(\phi^{4}\) theory does not accommodate the first-order phase transition, we will consider a multi-scalar theory in the next section. ## IV \(\phi^{4}\) theory with additional scalar As the simplest extension, another real scalar field is added to the \(\phi^{4}\) theory in order to compare quantities related to the first-order phase transition in both the \(\overline{\text{MS}}\) and our schemes. For illustration, we consider a simplified potential by imposing two \(\mathbb{Z}_{2}\) symmetries. The bare potential of the extended model has the form \[V_{0}(\Phi_{B1},\Phi_{B2}) =\Omega_{B}+\frac{\nu_{B1}^{2}}{2}\Phi_{B1}^{2}+\frac{\nu_{B2}^{2}}{2}\Phi_{B2}^{2}\] \[\quad+\frac{\lambda_{B1}}{4!}\Phi_{B1}^{4}+\frac{\lambda_{B2}}{4!}\Phi_{B2}^{4}+\frac{\lambda_{B3}}{4}\Phi_{B1}^{2}\Phi_{B2}^{2}, \tag{37}\] which is invariant under the \(\mathbb{Z}_{2}\) symmetries \(\Phi_{B1}\rightarrow-\Phi_{B1}\) and \(\Phi_{B2}\rightarrow-\Phi_{B2}\). As in the \(\phi^{4}\) theory, we subtract and add the thermal masses of \(\Phi_{1}\) and \(\Phi_{2}\) (denoted as \(\Sigma_{1}\) and \(\Sigma_{2}\)) in the renormalized Lagrangian and CTs, respectively. In this study, we assume that only \(\Phi_{1}\) develops the vacuum expectation value while \(\Phi_{2}\) does not. For later use, the classical background field of \(\Phi_{1}\) is denoted as \(\varphi\). It is straightforward to show that the finite-temperature effective potentials up to the two-loop level satisfy the RGE by virtue of the temperature-dependent \(\beta\)-functions in our scheme [11]. To improve the potentials further, we choose \(t\) in order to incorporate a series of temperature-dependent higher-order terms.
For instance, at the one-loop order, we impose \[\frac{\partial\bar{V}_{\text{eff}}(\bar{\varphi};t)}{\partial t}=0+\frac{1}{2} \sum_{i}\frac{\partial\bar{M}_{i}^{2}}{\partial t}\bar{I}(\bar{M}_{i})=0, \tag{38}\] where \(\bar{M}_{1}^{2}=\bar{\nu}_{1}^{2}+\Sigma_{1}(T)+\bar{\lambda}_{1}\bar{\varphi} ^{2}/2\) and \(\bar{M}_{2}^{2}=\bar{\nu}_{2}^{2}+\Sigma_{2}(T)+\bar{\lambda}_{3}\bar{\varphi} ^{2}/2\) with \(\Sigma_{1}(T)=(\lambda_{1}+\lambda_{3})T^{2}/24\) and \(\Sigma_{2}(T)=(\lambda_{2}+\lambda_{3})T^{2}/24\). With this condition, the RG-improved effective potential is given by \[\bar{V}_{\text{eff}}(\bar{\varphi};t(\varphi))=\bar{V}_{0}(\bar {\varphi};t(\varphi))+\bar{V}_{1}(\bar{\varphi};t(\varphi))\] \[=\bar{\Omega}+\frac{1}{2}\left(\bar{\nu}_{1}^{2}+\Sigma_{1}(T) \right)\bar{\varphi}^{2}+\frac{\bar{\lambda}_{1}}{4!}\bar{\varphi}^{4}\] \[\quad+\sum_{i=1,2}\left[\frac{\bar{M}_{i}^{4}}{4(16\pi^{2})} \left(\ln\frac{\bar{M}_{i}^{2}}{e^{2}\bar{\mu}_{0}^{2}}-\frac{3}{2}\right)+ \frac{T^{4}}{2\pi^{2}}I_{B}(\bar{A}_{i}^{2})\right]\] \[\quad-\frac{1}{2}\Sigma_{1}(T)\bar{\varphi}^{2}, \tag{39}\] where \(\bar{A}_{i}=\bar{M}_{i}/T\), and the explicit form of \(t(\varphi)\) is \[t(\varphi)=\frac{8\pi^{2}\sum_{i}\frac{\partial\bar{M}_{i}^{2}}{\partial t} \bar{I}(\bar{M}_{i})_{t=0}}{\sum_{i}\bar{M}_{i}^{2}\frac{\partial\bar{M}_{i}^ {2}}{\partial t}}. \tag{40}\] Expanding (39) in powers of \(t\), \(\bar{V}_{\text{eff}}^{(1)}(\varphi;t)\) is cast into the form \[\bar{V}_{\text{eff}}^{(1)}(\varphi;t(\varphi))=\bar{V}_{\text{eff}}^{(1)}( \varphi;0)+\frac{\big{(}\sum_{i}\alpha_{i}\bar{I}(M_{i})_{t=0}\big{)}^{2}}{8 \sum_{i}\alpha_{i}M_{i}^{2}}, \tag{41}\] where \(\alpha_{i}=16\pi^{2}\partial\bar{M}_{i}^{2}/\partial t|_{t=0}\). Unlike the \(\phi^{4}\) theory, the form of the second term does not coincides with that in the fixed-order two-loop effective potential \(V_{2}\). Such a mismatch between the RG-improved and fixed-order effective potentials is peculiar to the multi-field case, which is attributed to the fact that the single parameter \(t\) alone cannot incorporate two different \(\bar{I}(M_{i})\) terms correctly in principle. We investigate to what extent our scheme can capture the higher-order effects by comparing with the two-loop order result. In this model, there are 5 parameter in the scalar potential, i.e., \((\nu_{1}^{2},\nu_{2}^{2},\lambda_{1},\lambda_{2},\lambda_{3})\). Using vacuum and mass conditions, we convert them into \((v,\nu_{2}^{2},m_{1},\lambda_{2},m_{2})\). As an example of the first-order phase transition, we take \(v(\bar{\mu}_{0})=200\), \(m_{1}(\bar{\mu}_{0})=5.0\), \(m_{2}(\bar{\mu}_{0})=125\), \(\nu_{2}^{2}(\bar{\mu}_{0})=85.0^{2}\), \(\lambda_{2}(\bar{\mu}_{0})=5.0\), where \(\nu_{1}^{2}(\bar{\mu}_{0})\) and \(\lambda_{1}(\bar{\mu}_{0})\) are determined by the first and second derivatives of the effective potentials at a given order while \(\lambda_{3}(\bar{\mu}_{0})\) at the tree-level. \(\bar{\mu}_{0}\) is fixed by the condition \(t(\varphi=v)=0\). At the both one- and two-loop levels, \(\bar{\mu}_{0}\simeq 75.81\). The dimensionful parameters are given in units of any mass scale. Because of the smallness of \(m_{1}\), the appearance of the imaginary parts of the effective potentials is only limited to low temperature, and the effective potentials are all real and well-defined near critical temperatures \(T_{C}\), where the potentials have two degenerate minima. In Fig. 
1, \(v(T)/T\) is shown as a function of the temperature \(T\) in the \(\overline{\text{MS}}\) (left) and our (right) schemes, respectively. The dotted and dashed curves in blue represent the results obtained by using the one-loop effective potential (39) in the cases of \(t=0\) and 5, respectively. As clearly seen, the renormalization scale dependence of \(T_{C}\) in the \(\overline{\text{MS}}\) case is much larger than that in our scheme. This is due to the large violation of RG invariance in the former. On the other hand, the dot-dashed and two-dot-dashed lines in red correspond to the results using (39) and the two-loop effective potential \(\bar{V}_{2}(\bar{\varphi},t)\) [the RG-improved version of Eq. (25) but with the two scalars] with \(t=0\) and 5, respectively. In those cases, the renormalization scale dependence is even milder than that in the one-loop result with our scheme. Note that the improvement in the \(\overline{\text{MS}}\) scheme is because of the _partial_ restoration of the RG invariance. One can explicitly check that the effective potential follows the RG invariance up to the \(\mathcal{O}(\lambda_{i}^{2}T^{2})\) order in the high temperature limit [18]. In this parameter set, the residual RG-noninvariant terms are numerically small and the truncation errors become dominant, which explains the two-loop results. We also overlay the results obtained by use of the effective potentials \(\bar{V}_{\text{eff}}^{(1)}(\bar{\varphi},t(\varphi))\) and \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi},t(\varphi))\) with the \(t\)-\(\varphi\) relation (40). The former is denoted by the solid line in grey while the latter by the thick solid line in black. One can see that \(v(T_{C})/T_{C}\) using \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi},t(\varphi))\) in both schemes lies within the two-loop level scale uncertainties, whereas this is not the case when using \(\bar{V}_{\text{eff}}^{(1)}(\bar{\varphi},t(\varphi))\). This demonstration suggests that \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi},t(\varphi))\) can give results closer to those at the two-loop order. ## V Conclusion We have proposed a novel method for renormalization group improvement of thermally resummed effective potential. In our method, the RG invariance of the resummed finite-temperature effective potential holds order by order since the \(\beta\)-functions are correctly defined in resummed perturbation theory. Taking the extended \(\phi^{4}\) theory as an example, we showed that the renormalization scale dependence of the first-order phase transition quantities, especially \(T_{C}\), in our scheme is much smaller than that in the \(\overline{\text{MS}}\) scheme even at the one-loop level. At the two-loop level, however, no significant differences are observed in both schemes. This is because the RG invariance in the \(\overline{\text{MS}}\) scheme is restored up to \(\mathcal{O}(\lambda_{i}^{2}T^{2})\) order in the high temperature limit and the residual RG-noninvariant terms are numerically unimportant. We also devised a tractable method that enables one to incorporate a series of temperature-dependent higher-order corrections by utilizing the RG invariance in our scheme. Applying this method to the RG-improved one-loop effective potential, \(v(T_{C})/T_{C}\) in the case of \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi},t(\varphi))\) falls within the uncertainties of the two-loop order renormalization scale dependence, suggesting that our refined method could be a practical choice when the two-loop effective potential is not available.
2306.15706
Approximated Prompt Tuning for Vision-Language Pre-trained Models
Prompt tuning is a parameter-efficient way to deploy large-scale pre-trained models to downstream tasks by adding task-specific tokens. In terms of vision-language pre-trained (VLP) models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks, which greatly exacerbates the already high computational overhead. In this paper, we revisit the principle of prompt tuning for Transformer-based VLP models, and reveal that the impact of soft prompt tokens can be actually approximated via independent information diffusion steps, thereby avoiding the expensive global attention modeling and reducing the computational complexity to a large extent. Based on this finding, we propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning. To validate APT, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a bunch of downstream tasks. Meanwhile, the generalization of APT is also validated on CLIP for image classification and StableDiffusion for text-to-image generation. The experimental results not only show the superior performance gains and computation efficiency of APT against the conventional prompt tuning methods, e.g., +7.01% accuracy and -82.30% additional computation overhead on METER, but also confirm its merits over other parameter-efficient transfer learning approaches.
Qiong Wu, Shubin Huang, Yiyi Zhou, Pingyang Dai, Annan Shu, Guannan Jiang, Rongrong Ji
2023-06-27T05:43:47Z
http://arxiv.org/abs/2306.15706v2
# Approximated Prompt Tuning for Vision-Language Pre-trained Models ###### Abstract Prompt tuning is a parameter-efficient way to deploy large-scale pre-trained models to downstream tasks by adding task-specific tokens. In terms of vision-language pre-trained (VLP) models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks, which greatly exacerbates the already high computational overhead. In this paper, we revisit the principle of prompt tuning for Transformer-based VLP models, and reveal that the impact of soft prompt tokens can be actually approximated via independent information diffusion steps, thereby avoiding the expensive global attention modeling and reducing the computational complexity to a large extent. Based on this finding, we propose a novel _Approximated Prompt Tuning_ (APT) approach towards efficient VL transfer learning. To validate APT, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a bunch of downstream tasks. Meanwhile, the generalization of APT is also validated on CLIP for image classification and StableDiffusion for text-to-image generation. The experimental results not only show the superior performance gains and computation efficiency of APT against the conventional prompt tuning methods, \(e.g.,+7.01\%\) accuracy and \(-82.30\%\) additional computation overhead on METER, but also confirm its merits over other parameter-efficient transfer learning approaches1. Footnote 1: Our code is given in supplementary materials and will be publicly released after acceptance. \({}^{1}\) Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, Xiamen University, 361005, P.R. China. \({}^{2}\) Institute of Artificial Intelligence, Xiamen University, 361005, P.R. China. \({}^{3}\) Intelligent Manufacturing Department, Contemporary Amperex Technology Co. 
Limited {qiong, shubinhuang}@stu.xmu.edu.cn, {zhouyiyi, pydai}@xmu.edu.cn, {shuan01, jianggn}@catl.com, rrji@xmu.edu.cn ## Introduction Prompt tuning is a parameter-efficient way to adapt large-scale pre-trained models to downstream tasks by inserting task-specific tokens. In terms of vision-language pre-trained (VLP) models, it often requires a large number of learnable tokens to bridge the gap between pre-training and downstream tasks, as shown in Fig. 1-a. In addition, we also notice that even with a bunch of tokens, soft prompt tuning still has limited impacts on the input sequence, _i.e._, attention weights, leading to a sub-optimal adaption, as shown in Fig. 1-b. Considering that these tokens are often involved in self-attention, of which the computation is quadratic to the input length [21], this inefficient adaption will significantly increase the already high computational overhead of VLP models. By revisiting the principle of prompt tuning, we find that there exists a potential solution for efficient VL adaption.
Particularly, prompt tuning aims to use additional tokens to influence the input sequence, so as to minimize the gap between pre-training and downstream tasks [23, 24]. In terms of soft prompt tuning, the tokens are usually inserted into the self-attention layers of VLP models [11, 12, 13]. Via analyzing self-attention, we observe that the obtained attention weight matrix can actually be divided into four sub-parts, as shown in Fig. 2-a. Here, we call them the _input-only_, _input2prompt_, _prompt2input_ and _prompt-only_ attention matrices, respectively. Under the setting of deep prompt tuning [12], _i.e._, the prompts are layer-wise, the computations of _prompt2input_ and _prompt-only_ can indeed be skipped and will not affect the prompt tuning of the next layer. Meanwhile, the _input-only_ part is the default operation of the pre-trained model and cannot be changed. In this case, the key to improving prompt tuning lies in the _input2prompt_ part, which is essentially an information diffusion step from the prompt tokens to the input sequence from the perspective of graph theory [13]. However, we find that its functionality can actually be approximated via a more effective process independent of global attention, thereby improving the efficiency of VL adaption. Motivated by this observation, we propose a novel _approximated prompt tuning_ (APT) approach for VLP models in this paper. Similar to deep prompt tuning [13, 12], APT inserts a set of learnable tokens into each self-attention layer of the VLP model. As shown in Fig. 2-b, a key difference is that we separate these tokens from the expensive global self-attention and approximate their effects independently by aggregating the prompt tokens with low-rank transformations. In this way, the proposed APT can effectively diffuse more information from prompt tokens to the input sequence while avoiding the expensive global self-attention, as shown in Fig. 1-b. To validate APT, we apply it to two deep-fusion based VLP models, namely ViLT [21] and METER [14], on three VL benchmarks including VQA [11], NLVR\({}^{2}\) [23] and Flickr30K [15]. In addition, we also examine its generalization on CLIP [12] for the _base-to-new_ classification task [13] and on StableDiffusion [14, 15] for text-to-image generation. The experimental results well confirm the obvious merits of APT over the conventional prompt tuning methods [11, 12, 13], _e.g._, \(+8.30\%\) accuracy on VQA2.0 for METER while reducing up to \(17.07\%\) additional computation overhead. Our APT also yields better performance than most parameter-efficient transfer learning (PETL) approaches [12, 13, 14, 15], _e.g._, \(70.94\%\) on VQA2.0 for ViLT and \(80.97\%\) on NLVR\({}^{2}\) for METER. Overall, our contributions are three-fold: * We identify the key challenges of prompt tuning on common VLP models, _e.g._, ViLT [21] and METER [14], which are excessive computation overhead and low prompt tuning efficiency. * We propose a novel _approximated prompt tuning_ (APT) method for both parameter- and computation-efficient prompt tuning, which approximates the influence of prompt tokens via independent aggregation steps. * The proposed APT not only outperforms existing prompt tuning methods but also achieves better performance than other PETL approaches on \(2\) VLP models and \(4\) VL tasks. Its generalization is also validated on CLIP and StableDiffusion.
## Related Work ### Vision-Language Pre-training Similar to NLP pre-training paradigms [14, 15, 16, 17, 18, 19, 20], vision-language (VL) pre-training also applies generative prediction tasks, _e.g._, _masked language modeling_ (MLM) and _masked image modeling_ (MIM), to achieve self-supervised learning on massive cross-modal data. A key difference is that VLP models usually require two encoders to process image and language information, _e.g._, BERT [14] and Faster-RCNN [15], and their Transformer-based backbone networks are not only used for high-level representation learning, but also for cross-modal deep fusion and interaction [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]. METER [14], the base model of this paper, is a typical model using this paradigm. Meanwhile, we also validate APT on the other representative model called ViLT [14], which processes the image and text information with only one end-to-end Transformer network. Figure 2: The illustrations of global self-attention matrices with and without APT. (a) shows the attention matrices of common prompt tuning. In (b), the prompt-input and prompt-only parts are removed, and the input-prompt attention is approximated by APT. ### Prompt Tuning Prompt tuning [1, 15, 16, 17, 18, 19, 20, 21] is a parameter-efficient way to adapt pre-trained models to downstream tasks. Concretely, for hand-crafted prompts [2, 19], it often inserts a pre-defined prompt phrase into the input sequences, thus reminding the model of pre-trained knowledge, _e.g._, "_This is a picture of_ [X]". However, hard prompt tuning heavily relies on manual design. To overcome this issue, soft prompt tuning [15, 16, 17, 18] is proposed to automatically learn trainable prompts via downstream task adaption. In terms of the prompt placement, soft prompt tuning can be further divided into two patterns, _i.e._, the shallow [15] and the deep [16] ones. Shallow prompt tuning methods [11, 15] only expand the input sequence with trainable vectors at the first layer, while deep prompt tuning methods [16] expand the input sequence between any two layers with trainable tokens. ### Parameter Efficient Transfer Learning Parameter Efficient Transfer Learning (PETL) [19, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28] aims to update a small number of parameters to approach the fully-tuned performance on downstream tasks. In addition to prompt tuning, a common paradigm is the adapter-based methods [19, 18, 20, 21, 22, 23, 24], called _Adapter_ for short, which insert lightweight networks into the pre-trained model to project hidden features onto the downstream data space. To avoid the additional computation overhead during inference, Hu _et al._ propose a low-rank adaption (LoRA) [20] method based on weight re-parameterization. In the field of vision-language learning, VL-Adapter [23] inserts low-dimensional networks into a pre-trained language model to adapt to common VL tasks [11, 12, 24]. Its key difference from this paper is that the language model is not VL pre-trained, lacking enough generalization for common VLP models. ## Preliminary Before introducing our approach, we first recap the principle of prompt tuning for VLP models.
Concretely, given a pre-trained vision-language (VLP) model, denoted as \(G(\cdot)\), and the image-text example of the downstream task, denoted as \((I,T)\), the target of prompt tuning is to minimize the adaption loss with a set of prompt tokens \(\mathbf{P}\in\mathbb{R}^{p\times d}\): \[\operatorname*{argmin}_{\mathbf{P}}\mathcal{L}\big{(}G(I,T,P|\theta^{+})\big{)}, \tag{1}\] where \(\theta^{+}\) is the pre-trained weights of \(G\) and will be fixed during prompt tuning3. \(\mathcal{L}\) is the objective function of the downstream task. Footnote 3: In most case, the classifier will be trained for a specific task Considering that the parameters are fixed during adaption, the features of the input sequence are hard to update for the downstream task. In this case, prompt tokens \(\mathbf{P}\) are often used in the self-attention of VLP models for diffusing task-related information to the input sequence \(\mathbf{X}\in\mathbb{R}^{n\times d}\): \[[\mathbf{X}^{\prime}||\mathbf{P}^{\prime}]=SA(\mathbf{X}||\mathbf{P}), \tag{2}\] where \(SA(\cdot)\) represents the self-attention module. \(\mathbf{X}^{\prime}\) and \(\mathbf{P}^{\prime}\) are the corresponding outputs of \(\mathbf{X}\) and \(\mathbf{P}\), respectively. In particular, \(\mathbf{X}^{\prime}\) and \(\mathbf{P}^{\prime}\) are obtained by \[\begin{split}\mathbf{X}^{\prime}&=\mathbf{A}_{I} \mathbf{X}\mathbf{W}_{v}+\mathbf{A}_{IP}\mathbf{P}\mathbf{W}_{v},\\ \mathbf{P}^{\prime}&=\mathbf{A}_{PI}\mathbf{X} \mathbf{W}_{v}+\mathbf{A}_{P}\mathbf{P}\mathbf{W}_{v},\end{split} \tag{3}\] where \(\mathbf{A}_{I}\), \(\mathbf{A}_{IP}\), \(\mathbf{A}_{PI}\) and \(\mathbf{A}_{P}\) are the sub-attention matrices, corresponding to the _input-only_, _input2prompt_, _prompt2input_ and _prompt-only_ parts described in introduction and shown in Fig. 2. \(\mathbf{W}_{q}\), \(\mathbf{W}_{k}\) and \(\mathbf{W}_{v}\) are the weight matrices of \(Q\), \(K\), \(V\) projections in \(SA\). Under the layer-wise setting [16], the prompt tokens are initialized for each layer and will not be used in the next \(SA\). In this case, the computation of \(\mathbf{P}^{\prime}\) can be indeed removed, which can reduce the complexity by \(O(2pd^{2}+4npd+2p^{2}d)\), where \(p\) is often a large value on VL tasks. Eventually, the feature update of VLP models with prompt tokens can be relaxed to \[\begin{split}\mathbf{X}^{\prime}&=\mathbf{A}_{I} \mathbf{X}\mathbf{W}_{v}+\mathbf{A}_{IP}\mathbf{P}\mathbf{W}_{v}\\ &=\frac{\gamma_{I}}{\gamma_{I}+\gamma_{IP}}\sigma(\frac{\mathbf{ X}\mathbf{W}_{q}(\mathbf{X}\mathbf{W}_{k})^{T}}{\sqrt{d}})\mathbf{X}\mathbf{W}_{v}\\ &+\frac{\gamma_{IP}}{\gamma_{I}+\gamma_{IP}}\sigma(\frac{\mathbf{ X}\mathbf{W}_{q}(\mathbf{P}\mathbf{W}_{k})^{T}}{\sqrt{d}})\mathbf{P}\mathbf{W}_{v}, \end{split} \tag{4}\] \[\text{where}\ \ \gamma_{I}=\sum e^{\mathbf{Q}\mathbf{K}_{i}^{T}}, \gamma_{IP}=\sum e^{\mathbf{Q}\mathbf{P}_{k}^{T}}\] Here, \(\sigma(\cdot)\) is the Softmax function, and \(\gamma_{I}\) and \(\gamma_{IP}\) are the attention proportion for the input sequence and prompt tokens, respectively. In Eq. 4, the first term is the self-attention update of input features, which is the compulsory operation of VLP models. To this end, the effectiveness of prompt tuning lies in the second term, which is essentially a weighted information diffusion step from \(\mathbf{P}\) to \(\mathbf{X}\). Since the scale-dot product is still required, this diffusion step is also expensive. 
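Before moving on, it may help to see Eqs. (2)-(4) in code. The minimal single-head PyTorch sketch below concatenates trainable prompt tokens to the keys and values of a frozen self-attention layer and keeps only the output rows corresponding to \(\mathbf{X}\), which is exactly why the attention matrix grows with the prompt length \(p\). The class name, tensor shapes, and the single-head simplification are our own illustrative choices, not the released implementation of any specific VLP model.

```python
import torch
import torch.nn as nn

class PromptedSelfAttention(nn.Module):
    """Single-head sketch of Eqs. (2)-(4): prompts are appended to K and V."""
    def __init__(self, d, p):
        super().__init__()
        # Projections come from the frozen VLP model (freezing omitted for brevity).
        self.Wq = nn.Linear(d, d, bias=False)
        self.Wk = nn.Linear(d, d, bias=False)
        self.Wv = nn.Linear(d, d, bias=False)
        self.prompt = nn.Parameter(0.02 * torch.randn(p, d))  # trainable P

    def forward(self, x):                        # x: (batch, n, d)
        b, n, d = x.shape
        P = self.prompt.unsqueeze(0).expand(b, -1, -1)
        q = self.Wq(x)                           # (b, n, d)
        k = self.Wk(torch.cat([x, P], dim=1))    # (b, n + p, d)
        v = self.Wv(torch.cat([x, P], dim=1))    # (b, n + p, d)
        attn = torch.softmax(q @ k.transpose(1, 2) / d**0.5, dim=-1)
        # Only X' is needed under layer-wise prompts; P' is never reused.
        return attn @ v                          # (b, n, d)

x = torch.randn(2, 196, 768)
print(PromptedSelfAttention(d=768, p=200)(x).shape)  # torch.Size([2, 196, 768])
```

The `(n + p)`-wide attention matrix in this sketch is precisely the second, diffusion term of Eq. (4) whose cost APT sets out to remove.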
## Approximated Prompt Tuning Based on the above observation, we propose _approximated prompt tuning_ (APT) to model the attention impacts of prompt tokens. In particular, we can get the prompt tuning process of APT with the following formula: \[\mathbf{X}^{\prime}=SA(\mathbf{X})+APT(\mathbf{X},\mathbf{P}). \tag{5}\] For simplicity, we regard the information diffusion from P to X as \(\Delta X\): \[\begin{split}\Delta\mathbf{X}&=APT(\mathbf{X}, \mathbf{P})\\ &=\frac{\gamma_{IP}}{\gamma_{I}+\gamma_{IP}}\sigma(\mathbf{X} \mathbf{W}_{q}(\mathbf{P}\mathbf{W}_{k})^{T})\mathbf{P}\mathbf{W}_{v}.\end{split} \tag{6}\] To approximate \(\Delta X\), we first focus on the information aggregation of prompt tokens, denoted as \(\Delta X^{\prime}\): \[\Delta\mathbf{X}^{\prime}=\sigma(\mathbf{X}\mathbf{W}_{q}\mathbf{W}_{k}^{T} \mathbf{P}^{T})\mathbf{P}\mathbf{W}_{v}. \tag{7}\] Note that, \(\mathbf{W}_{v}\) is fixed in SA, and \(\mathbf{P}\) is a trainable matrix. To this end, we can directly update the projection of prompt tokens onto the \(V\) subspace, _i.e._, put \(\mathbf{P}\mathbf{W}_{v}\) as \(\mathbf{P}^{\prime}\). Similarly, we can simplify \(\mathbf{P}\mathbf{W}_{k}\mathbf{W}_{q}^{T}\) as \(K\). Then, the **X** can be directly taken as the \(Q\) without projection, and the computation from transforming **X** and \(\mathbf{P}\) into \(Q\), \(K\) and \(V\) of \(SA\) can be saved. Next, we show that \(V\) can be linearly transformed to \(K\): \[\begin{split}\Delta\mathbf{P}&=\mathbf{P}\mathbf{W} _{k}\mathbf{W}_{q}^{T}-\mathbf{P}\mathbf{W}_{v},\\ &=\mathbf{P}(\mathbf{W}_{k}\mathbf{W}_{q}^{T}-\mathbf{W}_{v}). \end{split} \tag{8}\] Here, \(\Delta\mathbf{P}\) denotes the difference between \(V\) and \(K\).Because \(V\) can be transformed to \(K\) by a linear transformation, we approximate Eq. 7 as \[\Delta\mathbf{X}^{\prime}=\sigma\big{(}\mathbf{X}(\mathbf{P}^{\prime}\mathbf{ W}_{p}+\mathbf{P}^{\prime})^{T}\big{)}\mathbf{P}^{\prime}, \tag{9}\] where \(\mathbf{W}_{p}\in\mathbb{R}^{d\times d}\) aims at transforming prompt tokens from \(V\) to \(K\). However, calculating \(\mathbf{P}\mathbf{W}_{p}\) is still not cheap due to the high feature dimension. As the low intrinsic dimension component Li et al. (2018); Aghajanyan et al. (2021) plays a dominant role in model optimization, the rank for \(\mathbf{W}_{p}\) is finite according to the theorem of the rank of matrices: \[rank(\mathbf{W}_{p})\leq rank(\mathbf{W}_{k}\mathbf{W}_{q}^{T})+rank( \mathbf{W}_{v}), \tag{10}\] where \(rank(\cdot)\) is the rank of the matrix. We can approximate the aggregation of prompt tokens in a low-rank way: \[\Delta\mathbf{X}^{\prime}=\sigma\big{(}\mathbf{X}(\mathbf{P}^{\prime}\mathbf{ W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime})^{T}\big{)}\mathbf{P}^{\prime}. \tag{11}\] Here, \(\mathbf{W}_{1}\in\mathbb{R}^{d\times r}\) and \(\mathbf{W}_{2}\in\mathbf{R}^{r\times d}\) are two low-dimensional matrix, where \(r\ll d\). The rank of projection matrix \(\mathbf{W}_{1}\mathbf{W}_{2}\) is limited by \(r\). The way we obtain \(Q\), \(K\) and \(V\) matrices for attention modeling is cheaper than the original global attention. Afterwards, we consider the way to merge the original output of the self-attention module \(SA(\mathbf{X})\) and the information of prompt tokens \(\Delta\mathbf{X}\). Because the calculation of attention is still related to the input sequence, it is difficult to reduce the complexity of the approximation via independent computation. 
In this case, to adaptively adjust the impact of each prompt token, a simple solution is activating the attention matrix with \(ReLU\) instead of \(Softmax\) and omitting the weight item. Then, Eq. 6 can be represented as \[\Delta\mathbf{X}=\psi\big{(}\mathbf{X}(\mathbf{P}^{\prime}\mathbf{W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime})^{T}\big{)}\mathbf{P}^{\prime}, \tag{12}\] where \(\psi(\cdot)\) represents the ReLU activation. In this manner, the weights for prompts depend on the norm of the prompt tokens and their relation to the input sequence. Furthermore, from the weight calculation in Eq. 6, we observe that the effect of prompt tokens is not only influenced by their dependency on the input sequence, but also by the sum of attentions to the input sequence. Given the intrinsic property of the Softmax function that the maximum value has the most impact, we re-define Eq. 6 as \[\begin{split}\Delta\mathbf{X}&=\alpha\cdot\psi(\mathbf{X}(\mathbf{P}^{\prime}\mathbf{W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime})^{T})\mathbf{P}^{\prime},\\ \text{where}&\alpha=max\{\mathbf{P}^{\prime}\mathbf{W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime}\},\end{split} \tag{13}\] where \(max\{\cdot\}\) is the maximum function for the weight of each token. Thus, APT can globally adjust the information diffusion from the prompt tokens. Since the activation function of Eq. 13 no longer relies on the original attention matrix, APT is easier to deploy for VLP models. Up to now, we have fully considered the effect of prompt tokens in diffusing task-related information to the input sequence. Then, we also take into account the effect of prompt tokens on the original attention matrix. As shown in Eq. 4, the information diffusion also influences the original attention matrix by increasing the denominator of the weight for the item from the VLP module. To this end, we add a learnable scale \(s\) for the entire output, and the proposed APT can be summarised as follows: \[\begin{split}\mathbf{X}^{\prime}&=\mathbf{A}_{I}\mathbf{X}\mathbf{W}_{v}+\mathbf{A}_{IP}\mathbf{P}\mathbf{W}_{v}\\ &\approx e^{s}\cdot\big{(}SA(\mathbf{X})+\alpha\cdot\psi(\mathbf{X}(\mathbf{P}^{\prime}\mathbf{W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime})^{T})\mathbf{P}^{\prime}\big{)},\\ &\text{where}\ \alpha=max\{\mathbf{P}^{\prime}\mathbf{W}_{1}\mathbf{W}_{2}+\mathbf{P}^{\prime}\}.\end{split} \tag{14}\] Here, the learnable value \(s\) controls the total amount of information diffused by APT and also makes the output of the attention modules fit the following layers. Eventually, the proposed APT method separates the effect of prompt tokens from the original attention module. The independence of APT brings two main benefits: (1) Information diffusion can break the limitation of patterns from the VLP model, _i.e._, it is not constrained by Softmax-based normalization. (2) The computational overhead is significantly reduced by about \(O(2pd^{2})\). In practice, it can save about \(82.30\%\) and \(62.62\%\) of the computations for ViLT Kim et al. (2021) and METER Dou et al. (2022) compared to conventional prompt tuning methods. ## Experiments ### Datasets and Experimental Setup **Dataset and Metric.** VQA2.0 Goyal et al. (2017) is one of the most popular datasets for the visual question answering (VQA) task. It uses images from MS-COCO Ren et al. (2015) and has about \(443,757\), \(214,254\) and \(447,793\) VQA examples for training, validation and testing, respectively. NLVR\({}^{2}\) Suhr et al. (2019) is built for visual reasoning.
It contains \(107,292\) examples of human-written English sentences for pairs of photographs. Flickr30k Plummer et al. (2017) is a widely-used benchmark dataset in this image-text matching task. The dataset consists of \(31,783\) images, and each has five corresponding captions. For CLIP, we validate APT on \(11\) popular image classification datasets, including ImageNet (Deng et al., 2009), Caltech101 (Fei-Fei, Fergus, and Perona 2007), OxfordPets (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), Flowers102 (Nilsback and Zisserman, 2008), Food101 (Bossard, Guillaumin, and Gool, 2014), FGVCAircraft (Maji et al., 2013), SUN397 (Xiao et al., 2010), DTD (Cimpoi et al., 2014), EuroSAT (Helber et al., 2019), UCF101 (Soomro, Zamir, and Shah, 2012) 4. This comprehensive benchmark comprises datasets that cover a diverse set of vision tasks, including classification on generic objects, scenes, actions, and fine-grained categories. It also includes specialized tasks like recognizing textures and satellite imagery. Footnote 4: The details of these datasets are given in Appendix. **Implementation details.** We validate APT on two deep-fusion based VLP models, namely ViLT (Kim et al., 2021) and METER (Dou et al., 2022), and one shallow-fusion based VLP network called CLIP (Radford et al., 2021). In terms of ViLT, we add APT to each of its SA layers. We set the rank value \(r\) in Eq. 11 to \(4\) and the number of prompt tokens \(p=200\) as the default setting. The prompt tokens are initialized by a normal distribution with a mean of \(0.0\) and a variance of \(0.02\).
And we only apply a single attention rather than the multi-head one (Devlin et al., 2019) for the proposed APT method. During the training, we update the classifier, class tokens and modal-type embeddings, while the rest parameters of ViLT are kept fixed. For each task, we follow its default settings and increase the learning rate by five times. In terms of METER, APT is inserted into its self-attention and cross-attention layers. The rest settings are the same as ViLT. For CLIP (Radford et al., 2021), we insert APT into the self-attention layers of its text encoder, and we set the rank \(r=2\) and the number of prompts \(p=4\). APT is optimized by SGD with a learning rate of \(2\times 10^{-4}\) and weight decay of \(0.3\) for \(10\) epochs. Following (Radford et al., 2021), we also use a hard prompt phrase of "_a photo of_ [X]", which is fed to the text encoder of CLIP. ### Experimental results **Comparison with prompt tuning methods.** We first compare APT with two common soft prompt tuning methods, _i.e._, deep prompt (Jia et al., 2022) and shallow prompt (Li and Liang, 2021), in Tab. 1. For all methods, the number of prompts is set to \(200\) for a fair comparison. From Tab. 1, the performance of existing prompt tuning methods is far behind the full tuning one, _i.e._\(-7.90\%\) to \(-3.62\%\) on ViLT and \(-11.26\%\) to \(-10.00\%\) on METER. These results are also worse than their performance on NLP (Li and Liang, 2021) and vision tasks (Jia et al., 2022), showing the challenge of prompt tuning on VLP models. Among these compared methods, Deep Prompt shows better results than shallow prompts at most cases, while its parameter size is larger and similar to APT. Notably, the average improvements of APT to these prompt methods are \(+2.73\%\) to \(+7.01\%\) on ViLT and \(+8.30\%\) to \(+9.56\%\) on METER, respectively, while the saved additional computations can be up to \(82.30\%\) on ViLT and \(91.95\%\) on METER. Meanwhile, the performance of APT almost approaches full tuning, _e.g._, \(-0.89\%\) and \(-1.70\%\) on average for ViLT and METER, respectively. Considering the small number of parameters updated, these results are indeed significant. To obtain more intuitive comparisons, we also visualize the attentions of these prompt tuning methods in Fig. 3. In \begin{table} \begin{tabular}{c|l|c c|c c c|c} \hline \multirow{2}{*}{**Backbone**} & \multirow{2}{*}{**Method**} & **Updated** & **Additional** & **VQA** & **NLVR\({}^{2}\)** & **Flickr30K** & \multirow{2}{*}{**Avg.**} \\ & & **Parameter** & **FLOPs** & test-dev & test-P & IR R@1 & TR R@1 \\ \hline \multirow{4}{*}{ViLT} & Full Tuning & 115.43M & 0.0 & 71.26 & 76.13 & 64.40 & 83.50 & 73.82 \\ & Shallow Prompt & 0.15M & 19.53G & 66.47 & 66.47 & 55.92 & 74.80 & 65.92 \\ & Deep Prompt & 1.84M & 5.14G & 69.30 & 73.34 & 58.64 & 79.50 & 70.20 \\ \cline{2-8} & **APT** & 1.92M & 0.91G & **70.94** & **75.92** & **63.26** & **81.60** & **72.93** \\ \hline \multirow{4}{*}{METER} & Full Tuning & 323.31M & 0.0 & 77.43 & 83.05 & 82.22 & 94.30 & 84.25 \\ & Deep Prompt & 3.68M & 13.05G & 67.57 & 65.79 & 70.90 & 87.70 & 72.99 \\ \cline{1-1} & Shallow Prompt & 0.30M & 28.71G & 68.51 & 65.69 & 74.20 & 88.60 & 74.25 \\ \cline{1-1} \cline{2-8} & **APT** & 3.83M & 2.31G & **75.45** & **80.97** & **80.88** & **92.90** & **82.55** \\ \hline \end{tabular} \end{table} Table 1: Comparisons of APT and the conventional prompt tuning methods for ViLT and METER on VQA, NLVR\({}^{2}\) and Flickr30K. 
The best performance is **bold** while the second one is underlined. Figure 3: The visualizations of the attention results of shallow prompt, deep prompt and our APT with ViLT on the VQA2.0 dataset. The color denotes the degree of attention, where redder is higher and _vice versa_. Compared with shallow prompt and deep prompt, APT can more effectively diffuse prompt information to the input sequence from the low layers of ViLT, see the red arrows. this figure, we select the 15 most active tokens of the visual and text inputs, and the top 30 prompt tokens for visualization, 60 tokens in total. The global attention matrices can be divided into six sub-parts, _i.e._, _Text-Text_, _Text-Image_, _Text-Prompt_, _Image-Text_, _Image-Image_ and _Image-Prompt_. From these examples, we can first observe that in the lower layers of the VLP model, the information exchange mainly happens among the tokens of the same modality, and the prompts barely affect the input sequence. As the inference progresses, the impact of common prompts becomes slightly more pronounced. In terms of shallow prompt, the impact of its tokens is still marginal, while deep prompt performs better at the last few layers of the model. The above results are also consistent with their performance on VL tasks. In stark contrast, APT can effectively diffuse prompt information to the input sequence of the VLP models, see the arrows. Moreover, its attention weights become more intensive in the higher layers, suggesting its effectiveness towards task adaption. **Comparison with existing PETL methods.** Next, we compare APT with a set of PETL methods, including LoRA Hu et al. (2022), VL-Adapter (Adapter) Sung et al. (2022) and Scaled Parallel Adapter (Scaled PA) He et al. (2022), of which the results are given in Tab. 2 5. From this table, we can first see that LoRA is the most efficient in both parameters and computation due to its low-rank re-parameterization scheme. However, compared to its performance on pre-trained language models Liu et al. (2019); Brown et al. (2020), its performance on VLP models is much inferior, especially on the tasks that are greatly different from pre-training, _e.g._, VQA and NLVR2, suggesting the challenge of VL adaption. We can also find that although the adapter-based methods show better adaptabilities than LoRA, they still perform worse than our APT. Compared with VL-Adapter, APT can achieve obvious gains on ViLT and METER, while saving about \(46.07\%\) and \(28.28\%\) parameters, respectively. In terms of the most advanced Scaled PA, APT is slightly inferior in parameter and computation costs, but its adaption performance is consistently better than Scaled PA on the two VLP models. Overall, these results suggest that our APT is a competitive method in PETL with great potential. Footnote 5: These results are reproduced by us because there is no ready-made literature to refer to. Details are given in Appendix. In Fig. 4, we also present the performance comparison of APT to other PETL methods with different parameter costs. It can be seen that Deep Prompt is much inferior to other methods in terms of parameter efficiency and performance, suggesting its difficulty in VL adaption. Adapter Sung et al. (2022) and Scaled PA He et al. (2022), as the advanced PETL methods, are all parameter efficient, and their adaptions are also plausible on VQA. However, the overall performance of these two methods is close, which is beyond our expectation.
Compared to these adapter-based methods, the performance of APT can achieve obvious gains at a scale of about 2M parameters, which becomes stable as the parameter size grows. **Ablation Study.** We first examine the impact of prompt \begin{table} \begin{tabular}{c|l|c c|c c c c|c} \hline \multirow{2}{*}{**Backbone**} & \multirow{2}{*}{**Method**} & **Updated** & **Additional** & **VQA** & **NLVR2** & \multicolumn{2}{c|}{**Flickr30K**} & \multirow{2}{*}{**Avg.**} \\ & & **Parameter** & **FLOPs** & test-dev & test-P & IR R@1 & TR R@1 \\ \hline \multirow{8}{*}{ViLT} & Full Tuning & 115.43M & 0.0 & 71.26 & 76.13 & 64.40 & 83.50 & 73.82 \\ & Classifier Only & - & 0.0 & 65.75 & 66.08 & 57.42 & 78.00 & 66.81 \\ \cline{2-9} & Deep Prompt & 1.84M & 5.14G & 69.30 & 73.34 & 58.64 & 79.50 & 70.20 \\ & LoRA & 0.15M & 0.0 & 68.44 & 72.77 & 57.44 & 77.70 & 69.09 \\ & Scaled PA & 1.80M & 0.44G & 70.40 & 75.13 & 61.88 & 79.00 & 71.60 \\ & Adapter & 3.56M & 0.86G & 70.85 & 75.51 & 62.68 & 81.40 & 72.61 \\ \cline{2-9} & **APT** & 1.92M & 0.91G & **70.94** & **75.92** & **63.26** & **81.60** & **72.93** \\ \hline \multirow{8}{*}{METER} & Full Tuning & 323.31M & 0.0 & 77.43 & 83.05 & 82.22 & 94.30 & 84.25 \\ & Classifier Only & - & 0.0 & 69.93 & 73.23 & 78.80 & 89.00 & 77.74 \\ \cline{1-1} \cline{2-9} & Deep Prompt & 3.68M & 13.05G & 67.57 & 65.79 & 70.90 & 87.70 & 72.99 \\ \cline{1-1} & LoRA & 0.29M & 0.0 & 74.00 & 78.82 & 79.86 & 92.60 & 81.32 \\ \cline{1-1} & Adapter & 5.34M & 1.64G & 74.70 & 79.93 & 80.38 & 91.90 & 81.73 \\ \cline{1-1} & Scaled PA & 3.82M & 1.12G & 75.36 & 79.86 & 80.30 & 91.80 & 81.83 \\ \cline{1-1} \cline{2-9} & **APT** & 3.83M & 2.31G & **75.45** & **80.97** & **80.88** & **92.90** & **82.55** \\ \hline \end{tabular} \end{table} Table 2: Comparisons of APT and the state-of-the-art PETL methods for ViLT and METER on VQA, NLVR2 and Flickr30K. The best performance is **bold** and the second best is underlined. Figure 4: The comparison between APT and other PETL methods in terms of performance and parameter size. APT has a better trade-off between performance and parameter costs. number and the rank value in Eq. 11, of which results are given in Tab. 3. Here, "_identify_" denotes that directly using prompt tokens as \(K\) and \(V\) in Eq. 9, while "_dense_" means that low-rank transformation is not used in Eq. 11. The first observation from Tab. 3 is that the increase of prompt tokens is beneficial to VLP models, which can obtain improvements on both tasks, _e.g._, 100 to 200. However, when exceeding 200, its gains are marginal in contrast to other prompt methods as shown in Fig. 4, which also suggests the effectiveness of APT for VLP models. In terms of the rank value, the performance of "identity" suggests that directly using prompt tokens for attention is suboptimal. And the low-rank approximation can better trade off performance and parameter cost, _e.g._ rank value \(r=4\), which is even superior than the dense transformation on NLVR\({}^{2}\). Overall, these results well confirm the effectiveness of APT towards efficient VL adaption. More experiments can refer to our **Appendix**. **Generalization on CLIP.** We further examine the generalization ability of APT on the shallow-fusion based VLP model, _i.e._, CLIP [12], under the base-to-new classification task [12], of which results are given in Tab. 4. In this task, the model needs to adapt to the base dataset, and will be further evaluated on unseen data (new dataset). 
The compared methods include zero-shot CLIP, CoOp and CoCoOp. The detailed settings are given in **Appendix**. From this table, we first observe that zero-shot CLIP has a strong transfer learning ability. Due to its large-scale pre-training, it can obtain superior performance under the new-task evaluations. However, without tuning on the base datasets, its performance is much inferior to that of PETL methods. In terms of CoOp, it can achieve satisfactory performance for base task adaption. However, its generalization to new tasks is limited, only \(63.22\%\) on average, suggesting an over-fitting problem. In stark contrast, APT can obtain a good performance in adapting to base tasks while generalizing well to new ones. Compared to the latest PETL method for CLIP, _i.e._, CoCoOp, its performance is also consistently better under both settings. These results confirm the generalization of APT. **Generalization on StableDiffusion.** We also examine the generalization ability of APT on StableDiffusion following the setting of DreamBooth, and the compared method is LoRA, of which results are given in Fig. 5. The detailed setting is given in Appendix. From the images, we can see that the dogs generated with different prompts keep the same attributes. Similar to LoRA, APT binds the attributes of the dog in the training set to the specific vocabulary "sks". These results suggest that APT is also capable of text-to-image generation. ## Conclusion In this paper, we focus on the issues of high computation overhead and inefficient adaption of prompt tuning on vision-language pre-trained (VLP) models. By revisiting the principle of prompt tuning, we figure out that the key to improving prompt tuning lies in its information diffusion to the input sequence, which can indeed be made independent of the expensive global self-attention via effective approximations. Motivated by this observation, we propose a novel _approximation prompt tuning_ (APT) approach towards effective VL adaption. APT approximates the impact of prompt tokens on the input sequence via a low-rank token aggregation design, reducing the computation cost to a large extent. We validate APT on 2 VLP models and 3 VL benchmarks, and also generalize APT to CLIP for image classification and StableDiffusion for subject-driven image generation. The quantitative and qualitative results not only show the obvious merits of APT over existing prompt tuning methods in both computation efficiency and performance, but also demonstrate its advantages over the compared PETL methods on these VL tasks.
\begin{table} \begin{tabular}{l|c c c|c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multicolumn{6}{c}{Method} \\ \cline{2-9} & \multicolumn{4}{c|}{Base} & \multicolumn{4}{c}{New} \\ \cline{2-9} & CLIP & CoOp & CoCoOp & APT & CLIP & CoOp & CoCoOp & APT \\ \hline ImSet & 72.43 & **76.47** & 75.98 & 75.97 & 68.14 & 67.88 & 70.43 & **71.23** \\ Cal101 & 96.84 & **98.00** & 97.96 & 97.93 & 94.00 & 89.81 & 93.81 & **94.13** \\ Pets & 71.97 & 93.67 & **95.20** & 94.97 & 97.76 & 95.29 & **97.69** & 97.60 \\ Cars & 63.77 & **78.12** & 70.49 & 76.10 & **94.89** & 60.40 & 73.59 & 75.13 \\ Flowers & 72.08 & **97.60** & 94.87 & 94.97 & **77.80** & 59.67 & 71.75 & 70.13 \\ Food & 90.10 & 88.33 & **90.70** & 90.17 & 91.22 & 82.26 & **91.29** & 90.70 \\ Aircraft & 27.19 & **40.44** & 33.41 & 38.63 & **36.29** & 22.30 & 23.71 & 33.97 \\ SUN & 69.36 & 80.60 & 79.74 & **81.50** & 75.35 & 65.89 & 76.86 & **78.20** \\ DTD & 53.24 & 79.44 & 77.01 & **81.00** & **59.90** & 41.18 & 56.00 & 48.53 \\ SAT & 56.48 & **92.19** & 87.49 & 91.10 & **64.05** & 54.74 & 60.04 & 62.13 \\ UCF & 70.53 & 84.69 & 82.33 & **85.13** & 77.50 & 56.05 & **77.64** & 76.93 \\ \hline Average & 69.34 & 82.69 & 80.47 & **82.72** & 74.22 & 63.22 & 71.69 & 72.64 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of zero-shot CLIP (_CLIP_), CoOp, CoCoOp and APT on the _base to new_ classification task. Figure 5: The comparison between APT and LoRA on StableDiffusion under the setting of subject-driven image generation. The red boxes denote the failure cases. Compared with LoRA, APT can better customize the generation based on the reference images. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Prompt & Rank value & Additional & VQA & NLVR\({}^{2}\) \\ & Parameter & test-dev & test-P \\ \hline 200 & Identity & 1.84M & 70.42 & 75.27 \\ 200 & dense & 8.93M & 71.07 & 75.67 \\ \hline 100 & 4 & 0.99M & 70.11 & 74.81 \\ 150 & 4 & 1.46M & 70.49 & 74.87 \\ \hline 400 & 4 & 3.76M & 70.64 & 75.68 \\ 400 & 16 & 3.98M & 71.01 & 75.95 \\ \hline 200* & 4* & 1.92M & 70.94 & 75.92 \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study on different constructions and the number of prompt tokens. *the default setting.
2310.19802
Stochastic Thermodynamics of Learning Parametric Probabilistic Models
We have formulated a family of machine learning problems as the time evolution of Parametric Probabilistic Models (PPMs), inherently rendering a thermodynamic process. Our primary motivation is to leverage the rich toolbox of thermodynamics of information to assess the information-theoretic content of learning a probabilistic model. We first introduce two information-theoretic metrics: Memorized-information (M-info) and Learned-information (L-info), which trace the flow of information during the learning process of PPMs. Then, we demonstrate that the accumulation of L-info during the learning process is associated with entropy production, and parameters serve as a heat reservoir in this process, capturing learned information in the form of M-info.
Shervin Sadat Parsi
2023-10-04T01:32:55Z
http://arxiv.org/abs/2310.19802v5
# Stochastic Thermodynamics of Learning Parametric Probabilistic Models ###### Abstract We have formulated generative machine learning problems as the time evolution of Parametric Probabilistic Models (PPMs), inherently rendering a thermodynamic process. Then, we have studied the thermodynamic exchange between the model's parameters, denoted as \(\Theta\), and the model's generated samples, denoted as \(X\). We demonstrate that the training dataset and the action of the Stochastic Gradient Descent (SGD) optimizer serve as a work source that governs the time evolution of these two subsystems. Our findings reveal that the model learns through the dissipation of heat during the generation of samples \(X\), leading to an increase in the entropy of the model's parameters, \(\Theta\). Thus, the parameter subsystem acts as a heat reservoir, effectively storing the learned information. Furthermore, the role of the model's parameters as a heat reservoir provides valuable thermodynamic insights into the generalization power of over-parameterized models. This approach offers an unambiguous framework for computing information-theoretic quantities within deterministic neural networks by establishing connections with thermodynamic variables. To illustrate the utility of this framework, we introduce two information-theoretic metrics: Memorized-information (M-info) and Learned-information (L-info), which trace the dynamic flow of information during the learning process of PPMs. Generative models, Machine Learning, Thermodynamics of Information, Entropy Production, Information Theory ###### Contents * 1 Introduction * 2 Preliminary: learning problem with PPMs * 3 Elements of learning PPMs in thermodynamic context * 3.1 The Learning Trajectory
## 1 Introduction Starting from nearly half a century ago, physicists began to learn that information is a physical entity [1; 2; 3]. Today, the information-theoretic perspective has significantly impacted various fields of physics, including quantum computing [4], cosmology [5], and thermodynamics [6]. Simultaneously, recent years have witnessed the remarkable success of an algorithmic approach known as machine learning, which is adept at learning information from data. This paper is propelled by a straightforward proposition: if "information is physical", then the process of learning information must inherently be a physical process. The concepts of memory, prediction, and information exchange between subsystems have undergone extensive exploration within the realms of Thermodynamics of Information [6] and Stochastic Thermodynamics [7]. For instance, Still et al. [8] delved into the thermodynamics of prediction. And, the role of information exchange between thermodynamic subsystems has been studied by Sagawa and Ueda [9], and Esposito et al. [10].
In the realm of machine learning, Goldt and Seifert [11] explored the stochastic thermodynamics of learning using neural networks and its efficiency [12]. Additionally, Salazar-Gatzimas [13] investigated the application of non-equilibrium thermodynamics in the context of self-supervised learning, while Zohar et al. [14] proposed a thermodynamic framework for describing feature learning. In this study, our objective is to explore the synergy between generic machine learning problems, which we define as the evolution of a parametric probabilistic model, and the realms of information theory and thermodynamics. We pursue this investigation within the context of _generative_ machine learning problem. Examples of generative learning encompass various well-known techniques such as deep Energy-Based Models (EBMs) [15], Generative Adversarial Networks (GANs) [16], Variational Autoencoders (VAEs) [17], and others. These approaches collectively constitute a significant body of literature within the field of machine learning. The organization of this paper is as follows: In Section 2, we establish our notation and definitions by reviewing the fundamental elements of learning generative models. Then, in Section 3, we investigate the thermodynamic interpretation of these elements of learning, examining the time evolution of the model, the loss function, the model's parameters, and the model's generated samples in the thermodynamic context. Moving on to Section 4, we introduce and tailor two information-theoretic measurements, referred to as M-info and L-info, designed to capture information gain and generalization error of the model throughout the learning process. In Section 5, we shift our focus to the stochastic dynamics of the model's parameters and the model's generated samples, emphasizing the role of the model's parameters as a heat reservoir and the model's generated samples as a subsystem evolving during sampling while in contact with this reservoir. Lastly, in Section 6, we put the thermodynamic framework into action. Here, we apply the fluctuation theorem to the time evolution of the model, allowing us to compute the heat dissipation of the model as a source of learned information and the work performed by the optimizer as the thermodynamic cost of encoding this information into the model's parameters. Our aim in this study is to ensure that it is accessible to machine learning researchers who may not have a background in the thermodynamics of information. To achieve this, we have included introductory explanations for the information-theoretic aspects of the computed thermodynamic quantities throughout the paper. This approach may make the content appear repetitive to experts in the field, but it is intended to enhance clarity and understanding for those less familiar with the subject. ## 2 Preliminary: learning problem with PPMs The learning problem with Parametric Probabilistic Models (PPMs) consists of three main ingredients: 1) the parametric model to learn, 2) the training dataset to be learned, and 3) the learning protocol. We now review these ingredients for learning a _generative model_, where the goal is to generate new data with the same statistics as the training dataset. Large Language Models (LLMs), Energy-based models (EBMs), and Variational Autoencoders (VAEs) are among examples of generative models. **The Model:** A PPM is defined by a family of distributions \(\mathcal{P}=\{p_{\theta}(X)|\ \theta\in\Omega_{\Theta}\}\). 
Both \(x\) and \(\theta\) represent multidimensional vectors. We often refer to this PPM simply as the model, which is fully specified by a finite set of parameters \(\theta\) drawn from the parameter space \(\Omega_{\Theta}\). We choose this space to be continuous, denoted as \(\Omega_{\Theta}=\mathbb{R}^{M}\), where \(M=\text{dim}(\theta)\). Each PPM selected from \(\mathcal{P}\) can be written as: \[p_{\theta}(X=x)=e^{-\phi_{\theta}(x)} \tag{1}\] where \(\phi_{\theta}(x):=-\ln(p_{\theta}(X=x))\) is a generic function. Readers familiar with information theory [18] recognize \(\phi_{\theta}(x)\) as the information content, or surprisal, of observing \(x\). Moreover, we interpret the model as a conditional distribution, denoted as \(p(X=x|\ \theta)\equiv p_{\theta}(X=x)\), which is the standard interpretation in Bayesian inference. This interpretation is crucial for the formulation of the learning process we are about to present in this study. Lastly, we note that in the context of machine learning, the PPM is realized with the use of a parametric deep neural network. **The Training Dataset:** The model learns from a finite set of samples known as "the training dataset", which is drawn from an unknown distribution \(p^{*}\). It's important to note that this unknown distribution may or may not belong to \(\mathcal{P}\). The training dataset is denoted as \(B:=\{\eta_{1},\eta_{2},\ldots,\eta_{K}\}\), which consists of \(K\) data points. The learning objective is to develop a model such that samples drawn from it share the same statistical properties as the training dataset \(B\). **The Learning Protocol:** In its most generic form, the loss function can be defined as the negative log-likelihood of the model based on a set of random samples drawn from the training dataset, the so-called mini-batch, denoted as \(b\sim B\): \[\ell(b,\theta):=-\frac{1}{|b|}\sum_{x\in b}\log(p_{\theta}(X=x))=\phi_{\theta}(b), \tag{2}\] where the second equality arises from the definition of the model, as given in Eq. 1, and \(\phi_{\theta}(b):=\frac{1}{|b|}\sum_{x\in b}\phi_{\theta}(x)\). Consequently, the objective of learning can be achieved through log-likelihood maximization, which is equivalent to minimizing the negative log-likelihood or the Kullback-Leibler (KL) divergence between the empirical distribution of the training dataset and the model. The standard approach for carrying out this optimization is the Stochastic Gradient Descent (SGD) algorithm and its various variants [19]. As presented in pseudocode 1, SGD iteratively takes gradient descent steps, with a learning rate of \(r\), to minimize the loss function, based on i.i.d. samples drawn from the training dataset. We refer to this algorithmic approach for training the model as the learning protocol. ``` Input: \(\theta_{0}\) and \(B\) for \(t=1\) to \(n\) do \(b_{t}\stackrel{i.i.d.}{\sim}B\) \(\theta_{t+1}=\theta_{t}-r\nabla_{\theta}\phi_{\theta}(b_{t})|_{\theta=\theta_{t}}\) end for return \(\theta_{n}\) ``` **Algorithm 1** SGD (The generic learning protocol for PPMs) This concludes our preliminary review of learning PPMs. It's important to emphasize that the process illustrated here is the backbone of many popular machine learning algorithms. Any machine learning problem that features a loss function based on an information-theoretic measure can be categorized as a learning problem within this framework.
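As a concrete illustration of this recipe (Eqs. 1-2 and Algorithm 1), the minimal sketch below trains a toy categorical PPM by minimizing the mini-batch surprisal \(\phi_{\theta}(b)\) with vanilla SGD; the toy target distribution, the tensor names, and the hyperparameters are illustrative choices and not part of the original formulation.

```python
import torch

# Toy PPM over a discrete space: p_theta(x) = softmax(theta)[x], so that
# phi_theta(x) = -log p_theta(x) is the surprisal of Eq. 1.
torch.manual_seed(0)
num_states, K, n_steps, r, batch = 10, 5000, 2000, 0.1, 32

# Unknown target p* and a finite training dataset B of K samples drawn from it.
p_star = torch.distributions.Dirichlet(torch.ones(num_states)).sample()
B = torch.multinomial(p_star, K, replacement=True)

theta = torch.zeros(num_states, requires_grad=True)    # model parameters (logits)

for t in range(n_steps):
    b = B[torch.randint(0, K, (batch,))]                # i.i.d. mini-batch b_t drawn from B
    loss = -torch.log_softmax(theta, dim=0)[b].mean()   # phi_theta(b_t), Eq. 2
    loss.backward()
    with torch.no_grad():                               # theta_{t+1} = theta_t - r * gradient
        theta -= r * theta.grad
        theta.grad.zero_()

# The trained p_theta should match the empirical distribution of B.
print(torch.softmax(theta, dim=0))
print(torch.bincount(B, minlength=num_states) / K)
```

The same loop structure underlies the deep generative settings discussed next, with the softmax toy model replaced by a parametric neural network.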
Examples include generative problems like Energy-Based Models (EBM) [15], GANs [16], Variational autoencoder [17], Large Language Models [20], reinforcement learning [21], and Supervised Classification problems [22]. Even regression problems can be explored within this context, given their well-established connection to probabilistic models [23]. The main takeaway from this section is that the learning process in machine learning algorithms can be fundamentally understood as the minimization of the model surprisal \(\phi_{\theta}(b)\) according to the training dataset. This observation motivates us to approach the learning problem from an information-theoretic perspective. While this perspective is prevalent in machine learning [24, 25, 26, 27, 26], it is not without its critics [28]. ## 3 Elements of learning PPMs in thermodynamic context The learning problem, as discussed in the last section, revolves around the selection of a model from a family of distributions denoted as \(\mathcal{P}\). To bridge this algorithmic process to a thermodynamic process, it's essential to clarify a few fundamental concepts. In this section, we furnish generic definitions of these concepts, while reserving more in-depth exploration of certain aspects for subsequent sections. ### The Learning Trajectory Consider a discretized time interval \([0,t_{n}]\), which represents the time needed for \(n\) optimization steps of the parameters. During this time, the SGD algorithm draws a sequence of i.i.d samples from the training dataset. We denote this sequence by \(\mathbf{b}_{n}:=\{b_{t_{1}},b_{t_{2}},\ldots,b_{t_{n}}\}\), and refer to it as the "input trajectory". Then, the outcome of the optimization defines a sequence of parameters, call it the "parameters' trajectory": \(\mathbf{\theta}_{n}:=\{\theta_{0},\theta_{t_{1}},\theta_{t_{2}},\ldots,\theta_{t_ {n}}\}\). Each realization of parameters defines a specific PPM. For example, \(p(X=x|\theta_{t_{i}})=e^{-\phi_{t_{i}}(x)}\) at \(t=t_{i}\). Consequently, the parameters' trajectory produces a sequence of PPMs, depicted in figure 3.1 : \[\mathcal{T}:=\{p(X|\theta_{0}),\;p(X|\theta_{t_{1}}),\;p(X|\theta_{t_{2}}), \ldots,\;p(X|\theta_{t_{n}})\} \tag{3}\] We refer to this sequence as the _learning trajectory_. The learning trajectory encapsulates the time evolution of a distribution, that can be characterized by a master equation. Moreover, it has been demonstrated that the thermodynamic framework can be reconstructed from the ground up, starting by a master equation that portrays the time evolution of a distribution [29]. This observation equates the learning process with a thermodynamic process, as learning is merely time evolution of a distribution. Additionally, we can sample the model along the learning trajectory to achieve a sequence of model generated samples, \(\mathbf{x}_{n}:=\{x_{t_{1}},x_{t_{2}},\ldots,x_{t_{n}}\}\). We refer to this as the "samples' trajectory". To avoid confusion with our notation, consider the probability functions \(p(x_{t_{i}}|\theta_{t_{i}})\) and \(p(x_{t_{i-1}}|\theta_{t_{i}})\), which respectively represent the probability of observing \(x_{t_{i}}\in\mathbf{x}_{n}\) and \(x_{t_{i-1}}\in\mathbf{x}_{n}\) at time \(t=t_{i}\). Here, the time index of \(\theta\) aligns with the time index of the PPM, i.e., \(p_{t_{i}}(X|\theta_{t_{i}})\equiv p(X|\theta_{t_{i}})\), because the PPM is fully defined upon observing the parameters. In contrast, the time index on \(x\) denotes a specific observation within \(\mathbf{x}_{n}\). 
To simplify our notation, the absence of a time index on \(x\) denotes a generic realization of the random variable \(X\), and we write \(p(x|\theta_{t_{i}})\) instead of \(p(X|\theta_{t_{i}})\). Figure 3.1: The learning trajectory \(\mathcal{T}\) depicts the thermodynamic process that take the initial model state to final state. The green area shows the space of family of distribution accessible to the PPM. The red area considers the possibility that the target distribution, \(p^{*}\), is not in this family. ### Thermodynamic interpretation of the loss function We have mentioned how the time evolution of the model, represented by the learning trajectory 3, can be interpreted as a thermodynamic process. However, the question remains: what is the thermodynamic protocol that governs this process? In other words, can we provide a thermodynamic interpretation for log-likelihood maximization? The key to answering this question relies on the concept of non-equilibrium free energy. Let's consider a gas system, described by an energy function \(H(x)\). When in contact with a heat reservoir at temperature \(\beta^{-1}\), this system eventually reaches its equilibrium state, known as the canonical state: \[p^{eq}(x)=\frac{e^{-\beta H(x)}}{Z} \tag{4}\] Here, \(Z=\int_{\Omega_{X}}e^{-H(x)}\ dx\) is the normalization factor, and \(\Omega_{X}\) represents the configuration space of the \(X\) degrees of freedom. The characteristic of the equilibrium state is the minimum free energy, defined as: \[F[p^{eq}(x)]=<H(x)>_{p^{eq}(x)}-\beta^{-1}S[p^{eq}(x)]=-\beta^{-1}\log(Z) \tag{5}\] Here, we have used the notation in table 3.1, and the second equality is due to the definition of Shannon entropy. Now, any state that is not the equilibrium state, and consequently holds more free energy than the minimum free energy \(F[p^{eq}(x)]\), is considered a non-equilibrium state. For example, by injecting heat into the gas, we can rapidly change its state from \(p^{eq}(x)\) to a non-equilibrium state \(p^{neq}(x)\). The non-equilibrium free energy of this state is defined as follows [30]: \[F[p^{neq}(x)]=F[p^{eq}(x)]+\beta^{-1}D(p^{neq}(x)||p^{eq}(x)) \tag{6}\] This definition has a profound consequence, as it connects the change of free energy due to being away from the equilibrium state to an information-theoretic measure, i.e., the KL divergence. The above KL divergence measures the extra bits of surprise [18], one would face by observing the microscopic state of the system \(x\) far from equilibrium, having in hand the equilibrium distribution, rather than the true non-equilibrium distribution. Having briefly explored the concept of non-equilibrium free energy and its profound connection to information theory, we now refocus our attention on the learning problem. The learning process is governed by minimization of the loss function 2. This loss function can be expressed in the form of KL-divergence as follows: \[\ell(b_{t_{i}},\theta_{t_{i}})=D(\hat{p}_{b_{t_{i}}}||p_{\theta_{t_{i}}}) \tag{7}\] Here, \(\hat{p}_{b_{t_{i}}}\) represents the empirical distribution based on the mini-batch \(b_{t_{i}}\). 
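For completeness, the decomposition in Eq. 6 can be obtained in two lines from the definitions in Eqs. 4 and 5; it is also what licenses reading the loss as an excess free energy in Eq. 8 below: \[\begin{split}F[p^{neq}]&=<H(x)>_{p^{neq}}-\beta^{-1}S[p^{neq}]=\beta^{-1}\big{<}\ln p^{neq}(x)-\ln e^{-\beta H(x)}\big{>}_{p^{neq}}\\ &=\beta^{-1}\Big{<}\ln\frac{p^{neq}(x)}{p^{eq}(x)}\Big{>}_{p^{neq}}-\beta^{-1}\ln Z=F[p^{eq}]+\beta^{-1}D(p^{neq}(x)||p^{eq}(x)),\end{split}\] where we used \(e^{-\beta H(x)}=Z\,p^{eq}(x)\) and \(F[p^{eq}]=-\beta^{-1}\ln Z\).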
\begin{table} \begin{tabular}{|l l|} \hline \(\Delta_{t_{n}}f(t):=f(t_{n})-f(0)\) & Change over the interval \([0,t_{n}]\) \\ \(<f(x)>_{p(x)}:=\int dx\ p(x)\ f(x)\) & Average over \(p(x)\) \\ \(s_{X}(t):=s[p_{t}(x)]:=-\ln p_{t}(x)\) & Surprisal of \(p_{t}(x)\) \\ \(S_{X}(t):=S[p_{t}(x)]:=<-\ln p_{t}(x)>_{p_{t}(x)}\) & Shannon entropy of \(p_{t}(x)\) \\ \(I_{X;\Theta}(t):=I[X_{t};\Theta_{t}]:=S_{X}(t)-S_{X|\Theta}(t)\) & Mutual information between \(X\) and \(\Theta\) at time \(t\) \\ \hline \end{tabular} \end{table} Table 3.1: A list of notations used in this paper Assuming that the mini-batch serves as a good proxy for the target distribution \(p^{*}\), we can express the loss function as \(\ell(\theta_{t_{i}})=D(p^{*}||p_{\theta_{t_{i}}})\). Then, utilizing the definition of non-equilibrium free energy, the loss function can be interpreted as the additional free energy of the model state for being far from the equilibrium state defined by the target distribution: \[\ell(\theta_{t_{i}})=F[p_{\theta_{t_{i}}}]-F[p^{*}] \tag{8}\] The optimization process takes the model from its initial state \(p_{\theta_{0}}\), with the non-equilibrium free energy \(F[p_{\theta_{0}}]=F[p^{*}]+D(p^{*}||p_{\theta_{0}})\), to the final state with the non-equilibrium free energy \(F[p_{\theta_{t_{n}}}]=F[p^{*}]+D(p^{*}||p_{\theta_{t_{n}}})\), as depicted in Fig. 3.1. Thus, the change in free energy of the model reads: \[\Delta_{t_{n}}F[\theta]:=F[p_{\theta_{t_{n}}}]-F[p_{\theta_{0}}]=-\Big{(}D(p^{*}||p_{\theta_{0}})-D(p^{*}||p_{\theta_{t_{n}}})\Big{)} \tag{9}\] Given that the loss function of a well-defined machine learning problem should converge to a constant, without loss of generality, we set the value \(D(p^{*}||p_{\theta_{t_{n}}})=0\). Therefore, the minimization of the loss function (log-likelihood maximization) is equivalent to a thermodynamic protocol that demands maximization of free energy: \[\underset{\theta}{min}D(p^{*}||p_{\theta})=\underset{\theta}{min}(-\Delta F(\theta))\Rightarrow\underset{\theta}{max}\Delta F(\theta), \tag{10}\] such that the non-equilibrium free energy of the model converges with respect to the target distribution. Note that we have set the temperature equal to one in writing the free energy of the model. We postpone discussing the concept of temperature and heat bath in the context of learning PPMs to section 5. ### Ensemble view and conditional view In the final step of establishing the foundation for the thermodynamic approach, we introduce the concept of the ensemble view in the learning problem. To understand the necessity of this ensemble view, let's first review it in the context of thermodynamics. Let us consider a physical system, such as a gas confined in a container, with \(X\) degrees of freedom, such as the position of gas molecules. We refer to the configuration of these degrees of freedom as the microscopic state of the system. It's important to note that while we're discussing a single system, the configuration of its degrees of freedom is deterministic, even though it may be challenging to measure. The concept of an ensemble facilitates a statistical approach to this problem by considering multiple instances of the same system, in this case, an ensemble of gases. In this context, the \(X\) degrees of freedom become a random variable, characterized by a statistical state \(p(x)\), from which we can draw samples. This approach has been used for over 200 years, since the groundbreaking work of Boltzmann and Maxwell in thermal physics.
We use the same idea to form the bedrock of our methodology to study the thermodynamics of learning with PPMs. Consider an ensemble of computers, each solving the same learning problem. That is, they are processing the same training dataset, utilizing the same neural network structure, following the same optimization algorithm, and operating with the same set of hyperparameters. Due to the stochastic nature of the input trajectory in the SGD algorithm, the model parameters exhibit stochastic behavior across this ensemble. We denote the random variable \(\Theta_{t_{i}}\) as representing the statistics of the parameters sampled from the ensemble at time \(t_{i}\). Similarly, \(X_{t_{i}}\) is the random variable representing the model's generated samples at time \(t_{i}\). From a physics perspective, we can view these as two physical subsystems (akin to two containers of gas), with their degrees of freedom symbolized by \(X\) and \(\Theta\). These two subsystems constitute a joint system referred to as the _learning machine_: \[\mathcal{M}:=(X,\Theta)\] The learning machine embodies the physical system that undergoes the learning process. This simple, yet powerful, ensemble perspective effectively bridges the learning problem from the realm of algorithms to the world of thermal physics. To simplify our terminology, moving forward, we will refer to the model's parameters subsystem as the "parameter subsystem" and the model's generated samples as the "model subsystem." Complementing the ensemble view, we introduce the conditional view. In this perspective, we select one computer from the ensemble to measure the microscopic state of the parameters' subsystem. In the conditional view, the sample generated by the model \(X\) remains stochastic, conditioned on the known parameters. This state aligns with the definition of the PPM \(p(x|\theta)\). This is the common practice of machine learning practice, as we typically train a model once. In this study, we oscillate between these two perspectives to investigate different aspects of the learning problem. ## 4 Information content of PPMs In the preceding sections, we initially defined the learning problem with PPMs as an information-theoretic process and subsequently as a thermodynamic process. The profound link between thermodynamics and information theory, rooted in the age-old problem of Maxwell's demon, and as mentioned in the introduction, has given rise to the thriving field of thermodynamics of information. Consequently, our primary motivation for framing the learning problem as a thermodynamic one is to leverage this connection in order to evaluate the information flow during the learning process. To accomplish this, we begin by establishing definitions for two information-theoretic quantities that comprehensively describe the learning process. Localizing information learned by the model is a fundamental question in machine learning [31]. The learned information, often referred to as learned 'features' or'representations' encompasses relevant patterns or representations in the training data that contribute to the learning task, such as classification or generating new samples. Here, we make the assumption that all learned information by the model has been stored in parameters. While this seems a trivial or intuitive assumption, directly measuring the information content of parameters is highly non-trivial and challenging. 
As a result, a significant portion of the literature has shifted its focus away from regarding parameters as the sole repository of learned data. Instead, it explores "latent spaces," typically referring to the activations of hidden neurons [32; 33]. It is important to reiterate that we view parameters as the primary carriers of learned information (features or representations), and it is in this direction that we construct our information-theoretic measurements. We begin by reformulating the learning protocol 1 as a feature map for the parameters. To facilitate our discussion, let's abuse our notation by representing \(B\) as the random variable responsible for generating samples in the training dataset. Now, recall the ensemble of computers, each train on an i.i.d trajectory of inputs, and each output a parameters' trajectory. Then, for \(n\) steps optimization, the actions of the SGD optimizer across this ensemble of computers can be reinterpreted as a mapping: \[\Theta_{t_{n}}=\Lambda_{t_{n}}(B) \tag{11}\] The random variable \(\Theta_{t_{n}}\) is referred to as the _statistic_ of the random variable \(B\). In this definition, we have made the assumption that the random variable \(\Theta_{t_{n}}\) quickly loses its memory of the initial parameter value as \(n\gg 1\).Therefore, we have omitted \(\Theta_{0}\) from the argument of the feature map. The action of the optimizer manifests as a feature map, mapping the features or representations of the training dataset, \(B\), onto the parameters, \(\Theta_{t_{n}}\). We also note that the model's generated samples, denoted by \(X_{t_{n}}\), is independent of \(B\), given \(\Theta_{t_{n}}\). Thus, the learning process is governed by following Markov chain: \[B\rightarrow\Theta_{t_{n}}\xrightarrow{}X_{t_{n}}. \tag{12}\] Consequently, the Data Processing Inequality (DPI) governing above Markov chain states: \[I_{\Theta;B}(t_{n})\geq I_{X;B}(t_{n}) \tag{13}\] We have used notations presented in table 1. The left-hand side of this inequality quantifies the accumulation of mutual information between the parameters and the training dataset, while the right-hand side characterizes the performance of the generative model, it gauges the accumulation of mutual information between the model's generated samples and the training dataset. We refer to the former as Memorized Information (M-info) and the latter as Learned-information (L-info). We also note that both of these quantities start at zero before the training begins. In the context of the learning problem, the DPI means what is _Memorized_ is always greater or equal to what is _Learned_. The L-info metric is task-oriented; for instance, it pertains to the generation of samples in the current context. If the task were classification, then the L-info would only include the information necessary for predicting labels. Conversely, M-info can house information that isn't directly relevant to the task at hand. The above DPI neatly illustrate this concept, that a model can learn more than what is strictly necessary to execute a specific task. The irrelevant information with respect to a particular task can be calculated as \(I_{\Theta;B}(t_{n})-I_{X;B}(t_{n})\), as a measurement of over-fitting or generalization error. The importance of constraining the amount of information in the model's parameters has been explored in Ref. [34], supported by the Minimum Description Length Principle [35]. 
Furthermore, research has indicated that the SGD optimizer exhibits a bias towards learning models with minimal information content in their parameters [36]. More recently, Ref. [37] has established an upper bound for minimizing parameter information content to improve generalization power. These insights imply that the learning process aims to minimize the left-hand side of the DPI inequality while maximizing the right-hand side to enhance the model's performance. This leads us to an ideal scenario where \(I_{\Theta;B}(t_{n})=I_{X;B}(t_{n})\), signifying that all memorized information is relevant to the learning task. Before ending this section, we note that the existence of the feature map 11 allow us to express M-info and L-info in more useful form. First, given that the random variable \(\Theta_{t_{n}}\) is a function of \(B\), it allows us to write: \[\text{M-info}:=I_{\Theta;B}(t_{n})=S(\Theta_{t_{n}}) \tag{14}\] Thus, the parameters naturally emerge as the model's _memory_, where its Shannon entropy measures the stored information during the learning process. Second, we swap \(B\) for \(\Theta_{t_{n}}\), in the definition of L-info in cost of losing some information: \[\text{L-info}:=I_{X;B}(t_{n}) =I_{X;\Lambda_{t_{n}}(B)}+\epsilon \tag{15}\] \[=I_{X;\Theta}(t_{n})+\epsilon\] where \(\epsilon\) is a non-negative number that equals zero only when the \(\Lambda\) outcome is a sufficient statistic for \(B\). For the above expression, the condition of sufficient statistic can be eased as \(\Theta\) to be sufficient with respect to \(X\). This means the map \(\Lambda\) preserve all information in \(B\) that is also mutual in \(X\). We refer to such map as preservative map. In the machine learning problems, we are interested in preservative maps that their action on training dataset preserve task-related information. Therefore, we consider \(I_{X;\Theta}\) as a reasonable proxy to L-info, and we use the two interchangeably: \[\text{L-info}:=I_{X;\Theta}(t_{n}) \tag{16}\] ## 5 Stochastic dynamics of learning PPMs In the previous chapter, we introduced the ensemble view, which allows us to consider the model's parameters \(\Theta\), and the model's generated samples \(X\), as two distinct subsystems. Then, the joint degrees of freedom \((X,\Theta)\) constitute the learning machine (system) that studying its stochastic dynamics is the main subject of this section. In order to begin, we first need to clarify the concept of time in the context of machine learning problems. At first glance, establishing a consistent notion of time may appear challenging, given that the duration of a learning process can vary depending on the computer and its processor speed. To tackle this issue, we utilize a computational _complexity parameter_ associated with complexity theory [38], and the resources necessary to execute the learning process. Consider a computer with a processor speed of \(1/\delta t\), where \(\delta t\) represents the time interval for executing the most basic computational step on this processor. We then measure the time interval between two parameter updates, denoted by \(\alpha\). Next, we define the complexity parameter as the number of basic computational steps required to execute one optimization step: \[\tau:=\frac{\alpha}{\delta t} \tag{17}\] The complexity parameter is, in fact, independent of the computer's processor speed and serves as a measure of the computational cost of the specific learning problem at hand. 
It quantifies the inherent difficulty of minimizing the loss function, which encompasses tasks such as computing the loss function value, sampling the model in the case of generative tasks, and executing the backpropagation algorithm to update the parameters. We will elaborate more on the role of complexity parameter when discussing the dynamics of the model's subsystem. For now, we focus on the dynamics of the parameter subsystem, whose time evolution is determined by the timescale \(\alpha=\tau\times\delta t\). ### The parameter subsystem The stochastic dynamic of the subsystem \(\Theta\) is dictated by the learning protocol (the SGD optimizer)1. To render this dynamic in the form of a conventional overdamped Langevin dynamic, we introduce the following conservative potential, defined by the entire training dataset \(B\): \[U_{B}(\theta):=\frac{1}{|B|}\sum_{x\in B}\phi_{\theta}(x). \tag{18}\] The negative gradient of this potential gives rise to a deterministic vector force. Additionally, we define the fluctuation term, that represents the source of random forces due to each mini-batch: \[\eta(t_{n}):=-\nabla_{\theta}\;\phi_{\theta_{t}}(b_{t_{n}})+\nabla_{\theta}\; U_{B}(\theta_{t_{n}}).\] We now reformulate the SGD optimizer, presented earlier in the pseudocode 1, in the guise of overdamped Langevin dynamics, dividing it by the parameters' update timescale \(\alpha\) to convert the learning protocol into a dynamic over time: \[\frac{\theta_{t_{n+1}}-\theta_{t_{n}}}{\alpha}=-\mu\nabla_{\theta}U_{B}(\theta _{t_{n}})\;+\mu\;\eta(t_{n}), \tag{19}\] where \(\mu:=r/\alpha\) is known as the mobility constant, in the context of Brownian motion. We note that Eq. 19 is simply a rearrangement of the standard SGD. For us to interpret Eq. 19 as a Langevin equation, the term \(\eta(t_{n})\) must represent a stationary stochastic process to serve as the _noise_ term in the Langevin equation. To demonstrate this property of \(\eta(t_{n})\), we must examine the characteristic of its Time Correlation Function (TCF)[39]: \(C_{i,j}(t,t-t^{\prime}):=\delta_{i,j}<\eta_{i}(t)\eta_{j}(t^{\prime})>\), where indices \(i,j\) represent different components of the vector \(\theta\), and \(\delta_{i,j}\) is the Kronecker delta. If the fluctuation term, \(\eta\), satisfies the condition of the white noise (uncorrelated stationary random process), and assuming that Eq. 19 describes a motion akin to Brownian motion, we can apply the fluctuation-dissipation theorem to write: \[<\eta_{i}(t)\eta_{j}(t^{\prime})>=\frac{2k_{B}T}{\mu}\delta(t-t^{\prime})\delta _{i,j} \tag{20}\] Here, \(\delta(t-t^{\prime})\) is a delta Dirac, and the constant \(T\) symbolizes the temperature. The constant \(k_{B}\) stands for the Boltzmann constant. To render our framework unitless, we treat the product of the Boltzmann factor and temperature as dimensionless. Moreover, regardless of the noise width we set \(T=1\), and henceforth it will not appear in our formulation. This is possible by adjusting the Boltzmann factor according to the noise width, i.e., \(k_{B}=\mu<\eta_{i}(t)\eta_{i}(t)>/2\). We still need to investigate if the fluctuation term indeed describes an uncorrelated stationary random process, as presented in Eq. 20. To this end, we conducted an experiment by training an ensemble of 50 models for the classification of the MNIST dataset. To induce different level of stochastic behavior, i.e., different "temperatures", we consider three different mini-batch sizes. 
A smaller mini-batch size leads to a bigger deviation in the fluctuation term, consequently amplifying the influence of random forces. Results are presented in Fig. 5.1. The plot 5.1c represents the TCF function at no time lag \(t=t^{\prime}\), i.e., variance of \(\eta(t)\), as a function of time. The constant value of variance suggests the stationary property of \(\eta(t)\). Moreover, Fig. 5.1d illustrates the autocorrelation of \(\eta(t)\) at different time lags, indicating white noise characteristic for this term. However, it would be naive to draw a generic conclusion regarding the nature of the fluctuation term as an uncorrelated stationary random process solely based on a simple experiment. Indeed, research has demonstrated that the noise term can be influenced by the Hessian matrix of the loss function [40]. This observation aligns with our definition of the fluctuation term presented in Eq. 19, where \(\eta\) is defined in relation to the gradient of the loss itself. Consequently, as Figure 5.1: This experiment contrasts the parameter dynamics with three different mini-batch sizes: \(|b_{t}|=1\),\(|b_{t}|=10\), and \(|b_{t}|=100\). The model under consideration is a four-layer feedforward neural network with a uniform width of 200 neurons. It was trained on the MNIST classification task using a vanilla SGD optimizer. The experiment was replicated over 50 trials to generate an ensemble of parameters. a) One random parameter from the model’s last layer is chosen for each batch size scenario, and four of its dynamic realizations are depicted. b) Illustrates both the average accuracy (solid line) and the variance of accuracy within the ensemble (shaded area), emphasizing the low-variance condition, which asserts that macroscopic quantities such as accuracy have low variance statistics across the ensemble. c) Displays the noise variance averaged over all parameters, i.e., \(\frac{1}{\mathrm{dim}(\theta)}\sum_{i=0}^{\mathrm{dim}(\theta)}C_{i,i}(t,0)\), for each mini-batch size scenario, underscoring the stationary nature of \(\eta\). This part also highlights the role of mini-batch size in determining the noise width, i.e., the temperature of the environment. The horizontal dashed line indicates the maximum absolute value observed from \(\nabla_{\theta}\,U_{B}(\theta_{t_{n}})\), serving as a reference point for the magnitude of the noise. d) Exhibits the autocorrelation of the term \(\eta\) averaged over all parameters. For instance, computing this quantity at step 1000 reads: \(\frac{1}{\mathrm{dim}(\theta)}\sum_{i=0}^{\mathrm{dim}(\theta)}C_{i,i}(t=1000,t^{\prime}-t)\). The rapid decline in autocorrelation with time lag indicating the white noise characteristic of \(\eta\). the optimizer explorers the landscape of the loss function, the characteristics of the fluctuation term \(\eta\) can vary. We can grasp this concept in the context of Brownian motion by envisioning a Brownian particle transitioning from one medium to another, each with distinct characteristics. This implies that there could be intervals during training where \(\eta\) stays independent of the loss function and exhibits a stationary behavior. Moreover, we overlooked the fact that \(\eta(t)\) is also a function of \(\theta\) itself. This could potentially jeopardize its stationary property. To address this issue, we refer to the slow dynamic (lazy dynamic) [41, 42] of over-parameterized models under SGD optimization. 
This slow dynamic allows us to write the Taylor expansion1 of the loss function around a microscopic state \(\theta^{*}\), sampled from its current state \(p_{t}(\theta)\): Footnote 1: Similar to what has been done in Neural tangent kernel theory [43], but with a different purpose. \[\phi_{\theta_{t}}(b_{t})=\phi_{\theta^{*}}(b_{t})+(\theta_{t}-\theta^{*}) \nabla_{\theta}\phi_{\theta^{*}}(b_{t}) \tag{21}\] As a result, the gradient of the loss \(\nabla_{\theta}\phi_{\theta_{t}}(b_{t})=\nabla_{\theta}\phi_{\theta^{*}}(b_{t})\), signifying an independent behavior from the specific value of the parameters \(\theta_{t}\) at a given time \(t\). We can extend this concept to the deterministic force \(-\nabla_{\theta}U_{B}(\theta_{t})=-\nabla_{\theta}U_{B}(\theta^{*})=F(\theta^ {*})\), which indicates a conservative force in lazy dynamics regime, denoted as \(F(\theta^{*})\). The key point here is that the value of this force is not dependent on the microscopic state of \(\theta_{t}\), but rather on any typical sample, \(\theta^{*}\), from \(\Theta_{t}\). In Appendix 7, we illustrate how the condition of lazy dynamics leads to a thermodynamically reversible dynamic of the subsystem \(\Theta\). #### 5.1.1 Naive parametric reservoir The stationary state of subsystem \(\Theta\), under the dynamic of Eq. 19, satisfying the fluctuation-dissipation relation in Eq. 20, corresponds to the thermal equilibrium state (the canonical state): \[p^{eq}=e^{-U_{B}(\theta)+F_{\Theta}} \tag{22}\] where \(F_{\Theta}:=-\log(\int d\theta e^{-U_{B}(\theta)})\) is the free energy of the subsystem \(\theta\). Recall that, the temperature has been set to one. This state, also, satisfies the detailed balance condition, that define the log ratio between forward and backward transition probability as follows: \[\log\frac{p(\theta_{t_{i}}|\theta_{t_{i-1}})}{p(\theta_{t_{i-1}|\theta_{t_{i}}} )}=-\Big{(}U_{B}(\theta_{t_{i}})-U_{B}(\theta_{t_{i-1}})\Big{)} \tag{23}\] The standard plot of the loss function versus optimization steps in machine learning practice can help us to visualize the dynamics of the subsystem \(\Theta\). A rapid decline in the loss function signals a swift relaxation of the subsystem \(\Theta\) to its equilibrium state. It is important to note that this _self-equilibrating property_ is determined by the training dataset \(B\) through the definition of the potential function \(U_{B}(\theta)\). These swift and self-equilibrating properties mirror the characteristics of a heat reservoir in thermodynamics [44]. Hence, we refer to the subsystem \(\Theta\) as the _parametric reservoir_. After a swift decline, a gradual reduction of the loss function, can be sign of a quasi-statistic process, when subsystem \(\Theta\) evolve from one equilibrium state to another. This can be due to the lazy dynamic condition, as discussed in Appendix 7. Additionally, the requirement of a high heat capacity for the reservoir, represented as \(dim(\Theta)>>dim(X)\), offers a thermodynamic justification for the use of over-parameterized models in machine learning. #### 5.1.2 Realistic parametric reservoir We refer to the assumption of the parametric reservoir with an equilibrium state expressed in Eq. 22 as the "naive assumption" due to several issues that were previously sidestepped. The first issue stems from the assumption that all components of the parameter vector \(\theta\) are subject to the same temperature, i.e., \(<\eta_{i}(t)\eta_{i}(t)>=\frac{2k_{B}T}{\mu}\) for all index \(i\). 
In practice, we might find different values of noise width, particularly with respect to different layers of a deep neural network. Furthermore, the weights or biases within a specific layer might experience different amounts of fluctuation. This scenario is entirely acceptable, if we consider each group of parameters as a subsystem that contributes to the formation of the parametric reservoir \(\Theta\). Consequently, each subsystem possesses different environmental temperatures and distinct stationary states. This observation may explain, in thermodynamic terms, why a deep neural network can offer a richer model. As it encompasses multiple heat reservoirs at varying temperatures, it presents a perfect paradigm for the emergence of non-equilibrium thermodynamic properties. Second, the fluctuation term \(\eta\) may exhibit an autocorrelation property that characterizes colored noise, as presented in Ref [45]. While this introduces additional richness to the problem, potentially displaying non-Markovian properties, it does not impede us from deriving the equilibrium state of the subsystem \(\Theta\), as demonstrated in [46]. We also overlooked the irregular behavior of the loss function, such as spikes or step-like patterns. These irregularities are considered abnormal as we typically expect the loss function to exhibit a monotonous decline, but in practice, such behaviors are quite common. These anomalies may be associated with a more intricate process, such as a phase transition or a shock, experienced by the reservoir. Nevertheless, we can still uphold the basic parametric reservoir assumption during the time intervals between these irregular behaviors. The mentioned issues are attributed to a richer and more complex dynamic of subsystem \(\Theta\), and do not fundamentally contradict the potential role of subsystem \(\Theta\) as a reservoir. Examples of these richer dynamics can be fined in a recent study [47], that shows the limitation of Langevin formulation of SGD, and Ref. [48] that investigates exotic non-equilibrium characteristic of parameters' dynamics under SGD optimization. #### 5.1.3 The low-variance learning condition The experimental result, presented in figure 5.1, suggests a low-variance stochastic dynamic for the subsystem \(\Theta\). For instance, panel (a) shows that even in the high noise regime (\(|b_{t}|=1\)), the dynamics of parameters remain confined to a small region across the ensemble. Furthermore, panel (b) demonstrates the low-variance characteristics of the model's performance accuracy. Finally, the large magnitude of deterministic force (dashed line in panel (c)) to random force, is an evidence of low-variance dynamics. In machine learning practice, this is indeed a favorable property, as it asserts that a well-defined machine learning algorithm has a robust learning outcome, regardless of who is running the code. In thermodynamic language, this means that although the microscopic trajectory of parameters is stochastic, but their macroscopic quantities like model's accuracy, L-info, and M-info must be low variance across the ensemble. We refer to this condition as the _low-variance_ condition. The low-variance condition becomes extremely helpful in computing information-theoretic measurements introduced in section 4. In practice, we always work in the conditional view, defined in 3.3, where we do not have access to an ensemble of computers, and we train the learning problem only once. 
Computing the M-info \(I_{B;\Theta}\) and L-info \(I_{X;\Theta}\), on the other hand, requires averaging over the ensemble. However, we can overcome this challenge by virtue of the low-variance condition, which allows us to approximate any function of parameters, \(f(\theta)\), averaged over the ensemble, with statistical state \(p(\theta)\), with its non-averaged value according to a single sample: \[<f(\theta)>_{p_{t}(\theta)}\approx f(\theta^{*}),\;\;\forall\theta^{*}\sim p_{ t}(\theta) \tag{24}\] According to this property, we introduce the Conditional L-info \[I_{X;\Theta}(\theta_{t})=\int dx\;p(x|\theta_{t})\ln\frac{p(x|\theta_{t})}{p_{ t}(x)} \tag{25}\] that serve as a proxy to the L-info: \(I_{X:\Theta}(t)=<I_{X:\Theta}(\theta)>_{p_{t}(\theta)}\approx I_{X:\Theta}( \theta_{t})\). To conclude, the low-variance condition of \(\Theta\), allows us to compute macroscopic quantities (averaged of \(X\) degrees of freedom) conditioned on microscopic state of \(\Theta\), as a good proxy to the same macroscopic quantities fully averaged over both \(X\) and \(\Theta\) degrees of freedom. ### The model subsytem The SGD learning protocol, represented in the pseudocode 1, consists of two sets of dynamic rules: one explicitly governing the model's parameters \(\Theta\) and the other implicitly governing the model's generated samples \(X\). We discussed the dynamics of parameters in the previous subsection, we now turn our attention to the dynamics of samples, which we believe have typically been overlooked in the statistical mechanics formulation of machine learning problems. In the context of machine learning, the implicit dynamics of the subsystem \(X\) is related to the temporal evolution of the model's generated samples along the learning trajectory \(\mathcal{T}\). For instance, the transition from the initial noise to the emergence of patterns, such as a face in the case of image generation. It is critically important to note that the process of sampling is an integral part of training generative models in various machine learning problems, as it is a necessary step for computing the loss function after each optimization step. For example, in Energy-Based Models (EBMs) [15], the loss function is computed using both samples drawn from the training dataset (referred to as positive samples) and samples generated by the model itself (referred to as negative samples). As a result, each optimization step in training EBMs is followed by the sampling of subsystem \(X\). Furthermore, in the case of Generative Adversarial Networks (GANs) [16], the computation of their loss function requires sampling from the adversarial model, while Variational Autoencoders (VAEs) [17] necessitate sampling from the prior distribution of latent space during training. We formulate the stochastic dynamic of the subsystem \(X\), by following overdamped Langevin equation, that represent the dynamic of \(X\) under potential function \(\phi_{\theta}(x)\), and a Gaussian noise \(\zeta(t)\): \[\frac{x_{t_{n}+\delta t}-x_{t_{n}}}{\delta t}=\ -\mu_{x}\partial_{x}\phi_{ \theta}(x)+\mu_{x}\zeta(t) \tag{26}\] This dynamical equation aligns with the conditional view presented in 3.3. In this view, the stochastic trajectory of parameters, serves as the control parameters that drive subsystem \(X\) through the time-dependent (parameter-dependent) potential \(\phi_{\theta_{t}}(x)\). 
Within this framework, the actions of the optimizer, updating parameters \(\theta_{t}\rightarrow\theta_{t+1}\), manifest as stochastic work, denoted as \(\delta W_{X}:=\phi_{\theta_{t+1}}(x)-\phi_{\theta_{t}}(x)\), applied to the model's subsystem \(X\). This work is powered by the information derived from the input trajectory \(\mathbf{b}_{n}\). To reframe this in the thermodynamic context, we can introduce a third thermodynamic subsystem with \(B\) degrees of freedom. The microscopic realizations of this subsystem represent the ongoing samples drawn from the training dataset, during the learning process. Then, subsystem \(B\) functions as an ideal work reservoir[44], as it does not lead to entropy production and remains unaffected by the subsystem \(X\). Similarly, the parameter trajectory \(\mathbf{\theta}_{n}\) serves as the work parameters as long as the time evolution of subsystem \(\Theta\) is quasi-static, meaning it does not itself result in entropy production [49]. This interpretation equates the action of the SGD optimizer with that of a Maxwell demon, which effectively transfers information into useful work. This is a classic example of an information engine [50]. #### 5.2.1 Dual timescale dynamics We want to reiterate that the dynamics of subsystem \(X\), presented in Eq. 26, is not a conjecture in this study but rather an integral part of training generative models. For instance, Eq. 26 represents the operation of the Langevin Monte Carlo sampler utilized in Ref. [51], in the case of continuous \(X\). In practice, the optimizer's actions are interspersed with pauses, during which fresh samples are generated from the new model using the sampler. This interplay between the optimizer and the sampler introduces a bipartite dynamic rule for the joint degrees of freedom \((X,\Theta)\). This implies that simultaneous transitions in the states of \(X\) and \(\Theta\) are not permitted. Moreover, we utilize the complexity parameter introduced in Eq. 17 to establish a relationship between the dynamical timescale of subsystem \(X\) and subsystem \(\Theta\). To simplify matters, let's assume that subsystem \(X\) evolves with the timescale of the processor \(\delta t\), which represents the time needed to execute the most basic operation. Now, if we require \(\tau\) Monte Carlo steps to sample fresh data points from the new model, the optimizer's action will be delayed by an amount \(\alpha=\tau\times\delta t\). Consequently, the subsystem \(\Theta\) operates on a dynamical timescale of \(\alpha\). In the thermodynamic context, \(\alpha\) corresponds to the relaxation time for subsystem \(X\) under fixed parameter values. It signifies the mixing time required for the sampler to obtain samples. This mixing time serves as a measurement of the computational cost involved in training the model. Therefore, this revised definition of \(\tau\) is indeed consistent with the earlier definition we provided for this parameter as the complexity parameter. ## 6 Stochastic thermodynamics of learning PPMs The Fluctuation Theorem (FT) [52] is a fundamental tool in non-equilibrium thermodynamics. It exists in various versions, but in essence, it establishes a connection between the Entropy Production (EP) of a process and the logarithm of the ratio of probabilities of observing the forward and time-reversed trajectories. If the forward and backward (time-reversal) processes occur with equal probability, then the EP is zero, indicating a thermodynamically reversible process. 
Conversely, if the probabilities are different, the process is irreversible and leads to a non-zero EP. Before setting up the scene for applying the FT to PPMs, let's review [10] the concept of EP and its close relation to information theory. Consider a physical system, such as a gas, with X degrees of freedom in contact with a heat reservoir, with \(\Theta\) degrees of freedom, whose primary role is to allow the subsystem \(X\) (gas) to exchange heat with the environment (reservoir). The very first version of the second law of thermodynamics, formulated by Clausius in 1862, can be expressed in modern terms as follows: _In any thermodynamic process (including heat transfer or work), the total change in the entropy of a closed system, which includes both the subsystem and the reservoir, is always greater than or equal to zero.._ Mathematically, this can be expressed as: \[\Sigma:=\Delta S_{X}+\Delta S_{\Theta}\geq 0 \tag{27}\] We refer to \(\Sigma\) as the Entropy Production (EP) during the process, and as mentioned, it is zero only in reversible processes. However, modern readers (familiar with information theory [18]), can immediately spot a flaw in the above formulation: _This is not the correct way to compute the joint entropy._ The correct change in the joint entropy reads: \[\Delta S_{X,\Theta}=\Delta S_{X}+\Delta S_{\Theta}-\Delta I_{X:\Theta} \tag{28}\] Moreover, for a closed system, one would expect the conservation of entropy 2. This leads to \(\Delta S_{X,\Theta}=0\), for the closed joint system \((X,\Theta)\). Combining this assumption with Eq. 27 and Eq. 28, we recast the second law as follows: Footnote 2: For systems obeying Hamiltonian mechanics, this conservation is known as the Liouville theorem. \[\Sigma=I_{X:\Theta}(t)-I_{X:\Theta}(0)\geq 0 \tag{29}\] Now, we can observe the profound connection between the second law, the concept of entropy production, and information theory. If the subsystem \(X\) is set in contact with the reservoir \(\Theta\), at \(t=0\), one would expect \(I_{X:\Theta}(0)=0\). Then, the second law states that if the subsystem \(X\) learns "some bits" about the reservoir (i.e., \(I_{X:\Theta}(t)>0\)), the process is irreversible, and that "some bits" appears as EP. Moreover, the non-negativity of EP is granted, by non-negativity of the mutual information. On the other hand, if the exchange of information between the subsystem \(X\) and the reservoir \(\Theta\) takes only in the form \(\Delta S_{X}=-\Delta S_{\Theta}\), meaning they do not share mutual information but only exchange bits, then the process is reversible with zero EP. We recall the concept of L-info, \(\Delta I_{X:\Theta}\), defined in section 4, and the role of parameters as the heat reservoir from section 5. Thus, the above understanding of the second law suggests that: _What is learned by the model must be measurable as the EP of the learning process._ ### Forward and backward trajectories In section 3.3, we introduced the ensemble view as studying the learning problem on an ensemble of computers, and the conditional view as studying the learning process conditioned on one selected computer. 
In the ensemble view, the trajectory probability is defined as the probability of observing the joint model's generated samples and parameters trajectories: \[P[\mathbf{x}_{n},\mathbf{\theta}_{n}]:=p(x_{0},x_{t_{1}},\dots,x_{t_{n}},\theta_{0}, \theta_{t_{1}},\dots,\theta_{t_{n}}) \tag{30}\] Additionally, we can consider the time reversal of samples' trajectory and parameters' trajectory, respectively, as \(\mathbf{\tilde{x}}_{n}:=\{x_{t_{n}},x_{t_{n-1}},\dots,x_{t_{1}}\}\) and \(\mathbf{\tilde{\theta}}_{n}:=\{\theta_{t_{n}},\theta_{t_{n-1}},\dots,\theta_{t_{ 1}}\}\). Then, the probability of observing the backward trajectory is denoted by \(P[\mathbf{\tilde{x}}_{n},\mathbf{\tilde{\theta}}_{n}]\). In practice, however, we train our model on only one computer. This means we are conditioning on the observation of one specific parameters' trajectory \(\mathbf{\theta}_{n}\). As a result, the trajectory probability in the conditional view is defined as: \[P[\mathbf{x}_{n}|\mathbf{\theta}_{n}]:=\frac{P[\mathbf{x}_{n},\mathbf{\theta}_{n}]}{P[\mathbf{ \theta}_{n}]} \tag{31}\] where, \[P[\mathbf{\theta}_{n}]=p(\theta_{0},\theta_{t_{1}},\dots,\theta_{t_{n}}). \tag{32}\] Similarly, the backward conditional trajectory probability is the probability of observing the time-reversal samples' trajectory, conditioned on observation of the time-reversal parameters' trajectory: \(P[\mathbf{\tilde{x}}_{n}|\mathbf{\tilde{\theta}}_{n}]=\frac{P[\mathbf{\tilde{x}}_{n},\mathbf{ \tilde{\theta}}_{n}]}{P[\mathbf{\tilde{\theta}}_{n}]}\). #### 6.1.1 Markov chain for dual timescale bipartite dynamics We revisit the dual timescale bipartite dynamics of the joint \((X,\Theta)\), introduced in subsection 5.2.1, this time in the context of a Markovian process. To begin, we write the following Markov chain for dynamics of the joint \((X,\Theta)\), with the time resolution of \(\delta t\), that is the timescale of sampling, within the interval of two parameters' updates \([t_{i},t_{i+1}]\): \[(x_{t_{i}},\theta_{t_{i}})\rightarrow(x_{t_{i}},\theta_{t_{i+1}})\rightarrow( x_{t_{i}+\delta t},\theta_{t_{i+1}})\;\dots\;\rightarrow(x_{t_{i}+\tau\delta t },\theta_{t_{i+1}})\equiv(x_{t_{i+1}},\theta_{t_{i+1}}). \tag{33}\] The above Markov chain illustrates the bipartite dynamics, as \(X\) evolve under fixed value of parameters. It also demonstrates the relaxation (thermalization) time \(t_{i+1}-t_{i}=\alpha=\tau\times\delta\) for subsystem \(X\) due to the sampler steps, represented in Eq. 26. We can also study the dynamics of the joint \((X,\Theta)\) with a lower time resolution of \(\alpha\), this time within the interval \([t_{0},t_{n}]\): \[(x_{0},\theta_{0})\dashrightarrow(x_{t_{1}},\theta_{t_{1}})\;\dots\;(x_{t_{ n-1}},\theta_{t_{n-1}})\dashrightarrow(x_{t_{n}},\theta_{t_{n}}). \tag{34}\] In the above Markov chain, the dashed arrows remained us the _ignorance_ of intermediate steps in the high resolution picture 33. Figure 6.1, illustrates the Bayesian Network of the joint microscopic states. An important observation is that the learning trajectory of the PPM (\(\mathcal{T}\) defined in 3), parameters' trajectory \(\mathbf{\theta}_{n}\) and samples' trajectory \(\mathbf{x}_{n}\), all are written in the low resolution picture. Therefore, studying the learning trajectory means studying the dynamics of the machine \((X,\Theta)\) in the low resolution picture. 
We now use the Markov property to decompose the conditional trajectory probability (in the low resolution picture), and the marginal parameters' trajectory probability as fallows: \[\begin{split} P[\mathbf{x}_{n}|\mathbf{\theta}_{n}]&=p(x_{t_{ n}}|x_{t_{n-1}},\theta_{t_{n}})\ldots p(x_{t_{1}}|x_{0},\theta_{t_{1}})p(x_{0}| \theta_{0}),\\ P[\mathbf{\theta}_{n}]&=p(\theta_{t_{n}}|\theta_{t_{n-1 }})\ldots p(\theta_{t_{1}}|\theta_{t_{0}})p(\theta_{0}),\end{split} \tag{35}\] where the expressions such as \(p(x_{t_{n}}|x_{t_{n-1}},\theta_{t_{n}})\) and \(p(\theta_{t_{n}}|\theta_{t_{n-1}})\) represent the transition probabilities that determine the probability of moving from one microscopic state to another. Additionally, we define two probability trajectories, conditioned on the initial conditions, which will be used in the formulation of FT: \[\begin{split} P[(\mathbf{x}_{n}|\mathbf{\theta}_{n})|(x_{0}|\theta_{0})]& :=P[(\mathbf{x}_{n}|\mathbf{\theta}_{n})]/p(x_{0}|\theta_{0}),\\ P[\mathbf{\theta}_{n}|\theta_{0}]&:=P[\mathbf{\theta}_{n}| \theta_{0}]/p(\theta_{0}).\end{split} \tag{36}\] #### Local Detailed Balance (LDB) for learning PPMs The Markov chain of joint degrees of freedom \((X,\Theta)\), as described in the low resolution picture 34, is governed by transition probabilities \(p(x_{t_{i}}|x_{t_{i-1}},\theta_{t_{i}})\) and \(p(\theta_{t_{i}}|\theta_{t_{n-1}})\), that represents the probability of transition from \((x_{t_{i-1}},\theta_{t_{i-1}})\) to \((x_{t_{i}},\theta_{t_{i}})\). Notably, if \(\tau>>1\) with the virtue of Markov property (i.e., memoryless process), we expect: \[p(x_{t_{i}}|x_{t_{i-1}},\theta_{t_{i}})=p(x_{t_{i}}|\theta_{t_{i}}). \tag{37}\] The above expression suggests that the transition rate between two microscopic states \(x_{t_{i-1}}\) and \(x_{t_{i}}\) under the fixed \(\theta_{t_{i}}\), to be equivalent to the PPM itself at \(t=t_{i}\). To reiterate, this is the Markov property that suggests the element inside \(\mathbf{x}_{n}\), are independently and freshly drawn from the PPM specified with given parameters along the learning trajectory \(\mathcal{T}\). We can generalize this observation for the backward transition probability \(p(x_{t_{i-1}}|x_{t_{i}},\theta_{t_{i}})\), that represent probability of the backward transition \((x_{t_{i}},\theta_{t_{i}})\dashrightarrow(x_{t_{i-1}},\theta_{t_{i}})\) under fixed \(\theta_{i}\), as follows: \[p(x_{t_{i-1}}|x_{t_{i}},\theta_{t_{i}})=p(x_{t_{i-1}}|\theta_{t_{i}}). \tag{38}\] The above expression tells us that the probability of backward transition is equivalent with the probability of observing the sample generated at \(t=t_{i-1}\) in \(\mathbf{x}_{n}\) with the PPM at time \(t=t_{i}\). Finally, we write the log ratio of forward and backward transitions: \[\ln\frac{p(x_{t_{i}}|x_{t_{i-1}},\theta_{t_{i}})}{p(x_{t_{i-1}}|x_{t_{i}}, \theta_{t_{i}})}=\ln\frac{p(x_{t_{i}}|\theta_{t_{i}})}{p(x_{t_{i-1}}|\theta_{ t_{i}})}=-\Big{(}\phi_{\theta_{t_{i}}}(x_{t_{i}})-\phi_{\theta_{t_{i}}}(x_{t_{i-1}}) \Big{)}, \tag{39}\] where the second equality is due to definition of the PPM 1. The above expression resembles the celebrated Local Detailed Balance (LDB) [53] that relates the log ratio of forward and backward transition probabilities to the difference Figure 6.1: This figure shows Bayesian network for joint trajectory probability \(P[\mathbf{x}_{n},\mathbf{\theta}_{n}]\), based on a dual timescale bipartite dynamics. in potential energy of initial and final state in the transition. 
The heat reservoir that supports the legitimacy of the above LBD expression for learning PPM is the parametric reservoir, whose temperature has been set to one, as discussed in section 5.1.1. We emphasize that the above LBD has emerged naturally under assumption of the Markov property and a relaxation time for learning a generic generative PPM. It is also important to note that the above LBD is only valid in the low resolution picture. Deriving the LBD relation for the PPM in Eq. 39, has a profound consequence. It allows us to write the forward conditional probability trajectory, \(P[\mathbf{x}_{n}|\mathbf{\theta}_{n}]\), and the backward conditional probability trajectory, \(P[\mathbf{\tilde{x}}_{n}|\mathbf{\tilde{\theta}}_{n}]\), solely based on the series of PPMs in the learning trajectory \(\mathcal{T}\): \[\begin{split} P[\mathbf{x}_{n}|\mathbf{\theta}_{n}]&=\prod_{ p\in\mathcal{T}}p=p(x_{t_{n}}|\theta_{t_{n}})\;\dots\;p(x_{t_{1}}|\theta_{t_{1}}) \;p(x_{0}|\theta_{0})\\ P[\mathbf{\tilde{x}}_{n}|\mathbf{\tilde{\theta}}_{n}]&= \prod_{p\in\mathcal{\tilde{T}}}p=p(x_{0}|\theta_{t_{1}})\;\dots\;p(x_{t_{n-1}}| \theta_{t_{n}})\;p(x_{t_{n}}|\theta_{t_{n}})\end{split} \tag{40}\] This is significant because it renders the application of the FT framework to the learning PPMs practical, as we have access to elements of the learning trajectory. ### L-info from fluctuation theorem The version of the fluctuation theorem we are about to apply to the learning PPMs is known as the Detailed Fluctuation Theorem (DFT)[54]. We also note the machinery we are about to present for measuring information flow in PPMs, has been developed to study information exchange between thermodynamic subsystems[9]. In this section, we extensively use notations presented in table 3.1. Also, note that the temperature of the parametric reservoir is set to one (by adjusting the Boltzmann factor), as discussed in section 5.1. Applying DFT in the conditional view, i.e., using the set of conditional forward and backward trajectories defined in Eq. 40, results in: \[\begin{split}\sigma_{\mathbf{x}_{n}|\mathbf{\theta}_{n}}&= \ln\frac{P[\mathbf{x}_{n}|\mathbf{\theta}_{n}]}{P[\tilde{\mathbf{x}}_{n}|\mathbf{\tilde{\theta }}_{n}]}\\ &=\ln\frac{P[(\mathbf{x}_{n}|\mathbf{\theta}_{n})|(x_{0}|\theta_{0})]}{P[ (\tilde{\mathbf{x}}_{n}|\mathbf{\tilde{\theta}}_{n})](x_{t_{n}}|\theta_{t_{n}})]}+\ln \frac{p(x_{0}|\theta_{0})}{p(x_{t_{n}}|\theta_{t_{n}})}\\ &=-q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n})+s[p(x_{t_{n}}|\theta_{t_{n}})] -s[p(x_{t_{0}}|\theta_{t_{0}})]\end{split} \tag{41}\] The first line is due to DFT, which defines the stochastic EP to be the logarithm of the ratio of the forward and backward trajectory probabilities. The second line is due to the decomposition presented in Eq. 36. Finally, the third line is the consequence of LDB relation 39, and the definition of the stochastic heat flow \(q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n})\), as the change in the energy of subsystem \(X\) due to alterations in its microscopic state configuration: \[q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n}):=-\ln\frac{P[(\mathbf{x}_{n}|\mathbf{\theta}_{n})|(x_{ 0}|\theta_{0})]}{P[(\tilde{\mathbf{x}}_{n}|\mathbf{\tilde{\theta}}_{n})|(x_{t_{n}}| \theta_{t_{n}})]}=\sum_{i=1}^{n}\;\phi_{\theta_{t_{i}}}(x_{i})-\phi_{\theta_{ t_{i}}}(x_{i-1}). \tag{42}\] Note that our sing convention defines \(q_{\mathbf{x}_{n}}>0\) as the heat observed by the subsystem \(X\). 
The second law arises from averaging Eq.41 over the forward trajectory distribution \(P_{F}[\mathbf{x}_{n}|\mathbf{\theta}_{n}]\), and recalling the non-negativity property of the Kl-divergence to establish non-negativity of averaged EP: \(\Sigma_{X|\Theta}(\mathbf{\theta}_{n}):=<\ln\frac{P_{F}[\mathbf{x}_{n}|\mathbf{\theta}_{n} ]}{P_{F}[\mathbf{\tilde{x}}_{n}|\mathbf{\tilde{\theta}}_{n}]}>_{P_{F}[\mathbf{x}_{n}|\mathbf{ \theta}_{n}]}\quad\geq 0\). We note that the averaged EP is still conditioned on the stochastic trajectory of parameters, thus we refer to this as the conditional EP. This is indeed the consequence of working in the conditional view. Motivated to compute L-info, in the next step, we rearrange Eq. 41 as follows: \[\mathcal{I}[x_{t_{n}}:\theta_{t_{n}}]-\mathcal{I}[x_{0}:\theta_{0}]=-q_{\mathbf{x }_{n}}(\mathbf{\theta}_{n})+s[p(x_{t_{n}})]-s[p(x_{t_{0}})]-\sigma_{\mathbf{x}_{n}| \mathbf{\theta}_{n}}, \tag{43}\] where \(\mathcal{I}[x_{t_{n}}:\theta_{t_{n}}]:=s[p(x_{t_{n}})]-s[p(x_{t_{n}}|\theta_{t_{n}})]\) is the mutual content (or stochastic mutual information) at \(t=t_{n}\). This is the fundamental connection between the second law, entropy production, and accumulation of mutual information between the subsystem and the reservoir, that we discussed at the beginning of this section. We now arrive at the conditional L-info 25 by averaging Eq. 43 over \(P_{F}[\mathbf{x}_{n}|\mathbf{\theta}_{n}]\): \[\begin{split} I_{X;\Theta}(\theta_{t_{n}})-I_{X;\Theta}(\theta_ {0})&=-Q_{X}(\mathbf{\theta}_{\mathbf{n}})+\left(S_{X}(\theta_{t_{n}})-S_ {X}(\theta_{0})\right)-\Sigma_{X|\Theta}(\mathbf{\theta}_{\mathbf{n}})\\ &=\Sigma_{X}(\mathbf{\theta}_{\mathbf{n}})-\Sigma_{X|\Theta}(\mathbf{\theta}_ {\mathbf{n}})\end{split} \tag{44}\] that defines the Partially Averaged (PA) quantities, \[\begin{split} Q_{X}(\mathbf{\theta}_{\mathbf{n}})&:=\sum_{i =1}^{n}\ <\phi_{\theta_{t_{i}}}(x)>_{p(x|\theta_{t_{i}})}-<\phi_{\theta_{t_{i}}}(x)>_{ p(x|\theta_{t_{i-1}})}\end{split}\] (PA Heat flow) \[\begin{split} S_{X|\Theta}(\theta_{t_{i}})&:=< -log(\,p(x|\theta_{t_{i}})^{\,\prime\prime\prime\prime})>_{p(x|\theta_{t_{i}})} \end{split}\] (PA Conditional Entropy) \[\begin{split} S_{X}(\theta_{t_{i}})&:=<-log(\,p(x )\,)>_{p(x|\theta_{t_{i}})}\end{split}\] (PA Marginal Entropy) \[\begin{split}\Sigma_{X}(\mathbf{\theta}_{\mathbf{n}})&:= \left(S_{X}(\theta_{t_{n}})-S_{X}(\theta_{0})\right)-Q_{X}(\mathbf{\theta}_{\mathbf{n} })\end{split}\] (PA Marginal EP) We note that all PA quantities are conditioned on the parameters' trajectory, i.e., the choice of computer from the ensemble. This is a direct consequence of working in the conditional view. However, this also signifies that all thermodynamic quantities mentioned above are computable in the practice of machine learning, as they only require access to the time evolution of one PPM. Fortunately, thanks to the low-variance condition 5.1.3, we can use the conditional L-info as proxy to the L-info, given that: \(I_{X;\Theta}(\theta_{t_{n}})\approx<I_{X;\Theta}(\theta)>_{p_{t}(\theta)},\ \forall \theta_{t_{n}}\sim p_{t}(\theta)\). #### 6.2.1 Origin of L-info Eq. 44, equates the (conditional) L-info to the difference between the Marginal EP, and the Conditional EP. 
We refer to this difference as the _ignorance_ EP: \[\Sigma_{ign}(\mathbf{\theta}_{\mathbf{n}}):=\Sigma_{X}(\mathbf{\theta}_{\mathbf{n}})-\Sigma_{ X|\Theta}(\mathbf{\theta}_{\mathbf{n}}) \tag{45}\] It is important to note that both the Marginal EP and the Conditional EP measure the EP of the same process, which is the time evolution of the subsystem \(X\), i.e., the model's generated samples. However, the conditional EP measures this quantity with a lower time resolution of \(\alpha\), and conditioned on the parametric reservoir. On the other hand, the marginal EP measures this quantity with a higher time resolution of \(\delta t\), while the parameters remain static within each \(\alpha\) time interval after optimization. Therefore, the term "ignorance" refers to ignorance of the full dynamic of \(X\), and the origin of L-info is the EP between each consecutive parameters' update, i.e., the EP of sampling steps represented in the Markov chain 33. ### M-info and the role of the parametric reservoir We can also apply the DFT to obtain the stochastic EP of parameters' trajectory: \[\begin{split}\sigma_{\mathbf{\theta}_{\mathbf{n}}}&=\log \frac{P[\theta_{n}]}{P[\hat{\theta_{n}}]}\\ &=-q_{\mathbf{\theta}_{\mathbf{n}}}+s[p(\theta_{t_{n}})]-s[p(\theta_{t_{ 0}})].\end{split} \tag{46}\] In the above expression, the second line is due to the decomposition in Eq. 36, and definition of the stochastic heat flow for parameter subsystem: \(q_{\mathbf{\theta}_{\mathbf{n}}}:=\log P[\theta_{n}|\theta_{0}]/P[\hat{\theta_{n}}| \theta_{t_{n}}]\). If we adopt the naive ideal parametric reservoir assumption, as presented in section 5.1.1, which assumes the subsystem \(\Theta\) to function as an ideal heat reservoir at its stationary thermal equilibrium state, satisfying the DBC outlined in Eq. 23, then it becomes evident that the above DFT would lead to: \[q_{\mathbf{\theta}_{n}}=\Delta_{t_{n}}s[p(\theta_{t})]=U_{B}(\theta_{t_{n}})-U_{B}( \theta_{0}) \tag{47}\] Furthermore, in the closed system of \((X,\Theta)\), the heat flow of the subsystem \(X\) must be provided with an inverse flow of the subsystem \(\Theta\), i.e., \(q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n})=-q_{\mathbf{\theta}_{n}}\). Thus, we arrive at the stochastic version of Clausius' relation for the heat reservoir: \[\Delta_{t_{n}}s[p(\theta_{t})]=-q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n}) \tag{48}\] This relation states that the heat dissipation in subsystem \(X\) (\(q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n})<0\)) is compensated with an increase of information in subsystem \(\Theta\). Since heat dissipation is a source of L-info accumulation (see Eq. 44), the above Clausius' relation states that this information is stored in the parameters by increasing the entropy of this subsystem, confirming the role of parameters as the memory space for learned information. We can also take the ensemble average of Eq. 48 (i.e., averaging over \(P[\mathbf{x}_{n},\mathbf{\theta}_{n}]\)): \[\Delta_{t_{n}}S[\Theta_{t}]=-Q_{X}(t_{n}), \tag{49}\] where \(Q_{X}(t_{n}):=\sum_{\mathbf{x}_{n},\mathbf{\theta}_{n}}P[\mathbf{x}_{n},\mathbf{\theta}_{n}] \;q_{\mathbf{x}_{n}}(\mathbf{\theta}_{n})=\sum_{\mathbf{\theta}_{n}}P[\mathbf{\theta}_{n}]\;Q _{X}(\mathbf{\theta}_{n})\) is the fully averaged dissipated heat from the subsystem \(X\). However, under the low-variance condition of learning 5.1.3, we expect \(Q_{X}(\mathbf{\theta}_{n})\) to be independent of choice of parameters' trajectory from the ensemble of computers. 
Thus, we can write \(Q_{X}(t_{n})\approx Q_{X}(\mathbf{\theta}_{n})\). ### The ideal learning process The learning objective necessitates an increase in L-info to enhance the model's performance while simultaneously reducing M-info to minimize generalization error and prevent overfitting. As previously mentioned in Section 4, the ideal scenario is achieved when all the stored information in the parameters (M-info) matches the task-relevant information learned by the model (L-info). Now that we have studied the machinery for computing these two information-theoretic quantities through the computation of entropy production, we can formally examine this optimal learning condition. Maximizing L-info, as described in Eq. 44, is equivalent to maximizing the marginal EP while minimizing the conditional EP. Given that the conditional EP is always non-negative, the "ideal" scenario would involve achieving a conditional EP of zero, i.e., \(\Sigma_{X|\Theta}(t_{n})=0\). This condition can be realized through a quasi-static time evolution of the PPM occurring on the lower-resolution timescale \(\alpha\), presented in the Markov chain 34. In the context of generative models, this condition is akin to achieving perfect sampling. Under these circumstances, all EP of the subsystem \(X\) transforms into L-info, resulting in \(\Delta_{t_{n}}I_{X;\Theta}(\theta_{t})=\Sigma_{X}(t_{n})\). Furthermore, in the case that all EP is due to heat dissipation \(\Sigma_{X}(t_{n})=-Q_{X}(\mathbf{\theta}_{n})\), we can write: \[\Delta_{t_{n}}I_{X;\Theta}(\theta_{t})=-Q_{X}(\mathbf{\theta}_{n})=\Delta_{t_{n}} S[\Theta_{t}] \tag{50}\] where the last equality is due to the Clausius' relation for the parameter reservoir, and the low-variance learning condition. Eq. 50 represents the ideal learning condition, where the L-info \(\Delta_{t_{n}}I_{X;\Theta}(\theta_{t})\) becomes equal to the M-info. This equation summarizes the learning process in thermodynamic terms: _The model learns by dissipating heat from \(X\) degrees of freedom, and the dissipated heat increases the entropy of parameters, which act as a memory space for learned information. The thermodynamic cost of dissipation is provided from work of the optimizer._ Thermodynamically, the condition of quasi-static time evolution of the PPM (and consequently zero conditional EP) can be realized by having a large relaxation parameter \(\tau=\frac{\alpha}{\delta t}\gg 1\), which allows the model to reach equilibrium after each optimization step. However, a high relaxation parameter comes at the cost of requiring more computational resources and longer computation times. This introduces a fundamental trade-off between the time required to run a learning process and its efficiency - a concept central to thermodynamics and reminiscent of the Carnot cycle, representing an ideal engine that requires an infinite operation time. ## 7 Discussion In this study, we delved into the thermodynamic aspects of training a generative model on a training dataset in machine learning practice. Our approach involved first formulating the learning problem as the time evolution of a PPM. As a result, the learning process naturally emerged as a thermodynamic process driven by a work source provided by the optimizer and fueled by the training dataset. This process entailed a thermodynamic exchange between two subsystems: \(X\), representing the model's generated samples, and \(\Theta\), representing the model parameters. 
Finally, we demonstrated how this thermodynamic framework can be used to study the information-theoretic aspects of the learning process, leveraging the toolbox of stochastic thermodynamics. One of the main challenges in adopting an information-theoretic approach to machine learning problems is the ambiguity in defining and practically measuring information-theoretic quantities. For instance, the mutual information defined by Shwartz-Ziv and Tishby [33] between the model's degrees of freedom and hidden layer activities is ill-defined, as the activities of hidden neurons are deterministic functions of inputs [28], and the measurement scheme is dependent on arbitrary choices. In contrast, we have defined the learned information (L-info) as \(I_{X;\Theta}\). Here, the random variable of the parameters, \(\Theta\), acts as a stochastic mapping of the training dataset, as presented by the feature map 11. Similarly, the model's generated samples, \(X\), are established as a stochastic map of parameters, defined by the model's distribution: \(p(x|\theta)\). The Markov chain relationship between the training dataset, the model's parameters, and the model's generated samples is expounded in the Markov chain 12. Consequently, L-info serves as a well-defined information metric for evaluating the model's performance. Moreover, we have proposed a thermodynamic framework to address the challenge of measuring information-theoretic quantities, including L-info, in standard machine learning practices. It is important to note that despite the wealth of literature highlighting the significance of information content in parameters [34; 36; 37], calculating these quantities remains difficult due to the lack of access to the parameter distribution. In contrast, the thermodynamic approach provides an indirect method for computing these information-theoretic quantities by associating them with measurable thermodynamic variables such as heat and work. It is worth noting that work and heat can be practically computed along the learning trajectory \(\mathcal{T}\), as outlined below: \[W_{X}(\mathbf{\theta}_{n}) =\sum_{t=1}^{n}<\phi_{\theta_{t}}(x)>_{p(x|\theta_{t-1})}-<\phi_{ \theta_{t-1}}(x)>_{p(x|\theta_{t-1})} \tag{51}\] \[Q_{X}(\mathbf{\theta}_{n}) =\sum_{t=1}^{n}<\phi_{\theta_{t}}(x)>_{p(x|\theta_{t})}-<\phi_{ \theta_{t}}(x)>_{p(x|\theta_{t-1})} \tag{52}\] At the same time, we are aware of the strong assumptions made during this study. Addressing each of these assumptions or providing justifications for them represents a direction for future research. For instance, we assumed slow dynamics of parameters for the over-parameterized regime under the SGD optimizer. This formed the basis for treating the parameters' degrees of freedom as an ideal heat reservoir, evolving in a thermodynamically reversible manner. Breaking this assumption due to rapid changes in parameter values would violate this assumption. However, it should be noted that exploring more complex scenarios would only serve to enrich the thermodynamics of the problem. We have also sidestepped the role of changes in the marginal entropy of the model's subsystem, \(\Delta_{t_{n}}S_{X}(t)\). This term can be estimated by computing the entropy of the empirical distribution of generated samples. For a model initialized randomly, this term is always negative, as the initial model produces uncorrelated patterns with maximum entropy. 
Then, the negative value of this term must converge when the entropy of the generated patterns reaches the entropy of the training dataset. However, if we look at Eq. 44 as an optimization objective to maximize L-info, then an increase in the model's generated samples, \(S_{X}(t)\), is favorable. This might act as a regularization term to improve the generalization power of the model by forcing it to avoid easy replication of the dataset. ## Appendix A: Reversibility under lazy dynamic regime In this appendix, we establish the thermodynamic reversibility of parameter evaluation as a consequence of training an over-parameterized model with lazy dynamics. The forward action of the optimizer can be summarized as follows: \(\mathbf{b_{n}}\rightarrow\mathbf{\theta_{n}}\), where the optimizer samples an i.i.d trajectory of inputs from the training dataset \(\mathbf{b_{n}}=\{b_{t_{1}},b_{t_{2}},\ldots,b_{t_{n}}\}\) to generate a trajectory of updated parameters \(\mathbf{b_{n}}=\{\theta_{0},\theta_{t_{1}},\ldots,\theta_{t_{n}}\}\). The backward (time-reversal) action of the optimizer is defined as: \(\tilde{\mathbf{b}_{n}}\rightarrow\mathbf{\theta_{n}^{\dagger}}\), where \(\tilde{\mathbf{b}_{n}}=\{b_{t_{n}},b_{t_{n-1}},\ldots,b_{t_{1}}\}\) represents the time-reversal of the input trajectory, and gradient descent is reversed to gradient ascent, resulting in a new parameters' trajectory \(\mathbf{\theta_{n}^{\dagger}}\). In general, the backward action of SGD does not yield the time-reversal of forward parameters' trajectory: \[\mathbf{\theta_{n}^{\dagger}}\neq\tilde{\mathbf{\theta}_{n}}=\{\theta_{t_{n}},\theta_ {t_{n-1}},\ldots,\theta_{t_{0}}\}\] To illustrate this, let's examine a single forward and backward action of the optimizer: \[\theta_{t+1} =\theta_{t}-r\nabla_{\theta}\phi_{\theta_{t}}(b_{t})\] (Forward step) \[\theta_{t}^{\dagger} =\theta_{t+1}+r\nabla_{\theta}\phi_{\theta_{t+1}}(b_{t})\] (Backward step) This discrepancy arises due to the gradient step's dependence on the current value of parameters in both the forward and backward optimizations, i.e., \(\nabla_{\theta}\phi_{\theta_{t}}(b_{t})\neq\nabla_{\theta}\phi_{\theta_{t+1} }(b_{t})\). However, the key observation here is that under the lazy dynamic regime (as described in Eq. 21), this dependency vanishes, and we have \(\nabla_{\theta}\phi_{\theta^{*}}(b_{t})\neq\nabla_{\theta}\phi_{\theta^{*}}(b _{t})\), where \(\theta^{*}\) is a typical sample from the stationary state (or slowly varying state) of parameters. Under such conditions, the backward action of SGD (running the learning protocol backward) results in a time-reversal of the parameters' trajectory: \(\mathbf{\theta_{n}^{\dagger}}=\tilde{\mathbf{\theta}_{n}}\), signifying the thermodynamic reversibility of the parameters' subsystem under lazy dynamic conditions. See Ref. [55] on distinction between logical and thermodynamic reversibility. As discussed in the paper, the lazy dynamics lead to a quasi-static evolution of the parameter subsystem, meaning that the subsystem \(\Theta\) itself does not contribute to entropy production and acts as an ideal heat reservoir. Furthermore, the independence of the gradient step from the exact microscopic state of parameters aligns with path-independent forces in physics, which do not lead to dissipation and entropy production. This provides an alternative explanation for the reversibility of the parameter subsystem from a different perspective.
2303.13581
Fast and Not-so-Furious: Case Study of the Fast and Faint Type IIb SN 2021bxu
We present photometric and spectroscopic observations and analysis of SN 2021bxu (ATLAS21dov), a low-luminosity, fast-evolving Type IIb supernova (SN). SN 2021bxu is unique, showing a large initial decline in brightness followed by a short plateau phase. With $M_r = -15.93 \pm 0.16\, \mathrm{mag}$ during the plateau, it is at the lower end of the luminosity distribution of stripped-envelope supernovae (SE-SNe) and shows a distinct $\sim$10 day plateau not caused by H- or He-recombination. SN 2021bxu shows line velocities which are at least $\sim1500\,\mathrm{km\,s^{-1}}$ slower than typical SE-SNe. It is photometrically and spectroscopically similar to Type IIb SNe during the photospheric phases of evolution, with similarities to Ca-rich IIb SNe. We find that the bolometric light curve is best described by a composite model of shock interaction between the ejecta and an envelope of extended material, combined with a typical SN IIb powered by the radioactive decay of $^{56}$Ni. The best-fit parameters for SN 2021bxu include a $^{56}$Ni mass of $M_{\mathrm{Ni}} = 0.029^{+0.004}_{-0.005}\,\mathrm{M_{\odot}}$, an ejecta mass of $M_{\mathrm{ej}} = 0.61^{+0.06}_{-0.05}\,\mathrm{M_{\odot}}$, and an ejecta kinetic energy of $K_{\mathrm{ej}} = 8.8^{+1.1}_{-1.0} \times 10^{49}\, \mathrm{erg}$. From the fits to the properties of the extended material of Ca-rich IIb SNe we find a trend of decreasing envelope radius with increasing envelope mass. SN 2021bxu has $M_{\mathrm{Ni}}$ on the low end compared to SE-SNe and Ca-rich SNe in the literature, demonstrating that SN 2021bxu-like events are rare explosions in extreme areas of parameter space. The progenitor of SN 2021bxu is likely a low mass He star with an extended envelope.
Dhvanil D. Desai, Chris Ashall, Benjamin J. Shappee, Nidia Morrell, Lluís Galbany, Christopher R. Burns, James M. DerKacy, Jason T. Hinkle, Eric Hsiao, Sahana Kumar, Jing Lu, Mark M. Phillips, Melissa Shahbandeh, Maximilian D. Stritzinger, Eddie Baron, Melina C. Bersten, Peter J. Brown, Thomas de Jaeger, Nancy Elias-Rosa, Gastón Folatelli, Mark E. Huber, Paolo Mazzali, Tomás E. Müller-Bravo, Anthony L. Piro, Abigail Polin, Nicholas B. Suntzeff, Joseph P. Anderson, Kenneth C. Chambers, Ting-Wan Chen, Thomas de Boer, Michael D. Fulton, Hua Gao, Mariusz Gromadzki, Cosimo Inserra, Eugene A. Magnier, Matt Nicholl, Fabio Ragosta, Richard Wainscoat, David R. Young
2023-03-23T18:00:06Z
http://arxiv.org/abs/2303.13581v2
# Fast and Not-so-Furious: Case Study of the Fast and Faint Type IIb SN 2021bxu ###### Abstract We present photometric and spectroscopic observations and analysis of SN 2021bxu (ATLAS210dv), a low-luminosity, fast-evolving Type IIb supernova (SN). SN 2021bxu is unique, showing a large initial decline in brightness followed by a short plateau phase. With \(M_{r}=-15.93\pm 0.16\) mag during the plateau, it is at the lower end of the luminosity distribution of stripped-envelope supernovae (SE-SNe) and shows a distinct \(\sim\)10 day plateau not caused by H- or He-recombination. SN 2021bxu shows line velocities which are at least \(\sim 1500\) km s\({}^{-1}\) slower than typical SE-SNe. It is photometrically and spectroscopically similar to Type IIb SNe during the photospheric phases of evolution, with similarities to Ca-rich IIb SNe. We find that the bolometric light curve is best described by a composite model of shock interaction between the ejecta and an envelope of extended material, combined with a typical SN IIb powered by the radioactive decay of \({}^{56}\)Ni. The best-fit parameters for SN 2021bxu include a \({}^{56}\)Ni mass of \(M_{\rm Ni}=0.029^{+0.004}_{-0.005}\) M\({}_{\odot}\), an ejecta mass of \(M_{\rm ej}=0.57^{+0.04}_{-0.03}\) M\({}_{\odot}\), and an ejecta kinetic energy of \(K_{\rm ej}=9.3^{+0.7}_{-0.6}\times 10^{49}\) erg. From the fits to the properties of the extended material of Ca-rich IIb SNe we find a trend of decreasing envelope radius with increasing envelope mass. SN 2021bxu has \(M_{\rm Ni}\) on the low end compared to SE-SNe and Ca-rich SNe in the literature, demonstrating that SN 2021bxu-like events are rare explosions in extreme areas of parameter space. The progenitor of SN 2021bxu is likely a low mass He star with an extended envelope. keywords: supernovae:general - supernovae: individual: SN 2021bxu - stars: massive ## 1 Introduction Core-collapse (CC) supernovae (SNe) mark the explosive ends of the lives of massive stars (\(M\gtrsim 8\,\mathrm{M}_{\odot}\)) via gravitational collapse of their stellar cores. Some CC SNe occur from progenitors that have lost their outer envelopes and are classified as stripped-envelope supernovae (SE-SNe; Clocchiat et al., 1996; Matheson et al., 2001). Optical spectral signatures of H and He are mainly used to distinguish between the different types of SE-SNe (Filippenko, 1997). The lack of H and Si ii\(\lambda\)6150 features defines Type Ib/c SE-SNe. Furthermore, if there is He i\(\lambda\)5876 absorption present, the SN is a Type Ib and if there is weak or no He i\(\lambda\)5876 absorption, the SN is a Type Ic. In addition to these types, SNe that show transient lines of H, making their late-time spectra appear more like SNe Ib, are classified as Type IIb SNe. The exact nature of progenitor scenarios is unclear but different types of SE-SNe may be explained by various mass loss mechanisms in the progenitor star (Filippenko et al., 1994). SNe Ic likely result from stars that have lost both their H and He envelopes. SNe Ib from stars with less extreme stripping, that have lost H envelope. And SNe Ib result from stars that have lost most of their H envelope, showing weak H lines at early times (e.g., Filippenko, 1997; Shivvers et al., 2017; Prentice and Mazzali, 2017). It is unclear if is a continuum between each type of SE-SN or whether this points towards multiple mass-loss mechanisms. 
Some common proposed mass loss mechanisms which may be responsible for stripping the stellar envelopes) include: (a) outbursts in Luminous Blue Variables (LBVs; e.g., Smith and Owock, 2006), (b) radiation-driven winds (e.g., Heger et al., 2003; Pauldrach et al., 2012), (c) envelope stripping due to close binary interactions (e.g., Podsiadlowski et al., 1993; Woosley et al., 1995; Wellstein and Langer, 1999; Wellstein et al., 2001; Podsiadlowski et al., 2004; Benvenuto et al., 2013), (d) mass-loss in rapidly rotating Be stars (e.g., Massa, 1975; Kogure and Hirata, 1982; Owock, 2006), or a combination of these. The low-mass end of SE-SN progenitors (zero-age main-sequence mass \(\approx 8-12\,\mathrm{M}_{\odot}\)) is not well understood (e.g., Janka, 2012). Their end stages of evolution depend strongly on metallicity and mass-loss history. Some low-mass stars may experience core collapse induced by electron-capture reactions resulting in the production of low-luminosity electron-capture (EC) SNe (Miyaji et al., 1980; Nomoto, 1984, 1987; Hillebrandt et al., 1984; Miyaji and Nomoto, 1987). Given a stellar initial mass function (IDF) that drops steeply towards higher masses, we should expect a significant fraction of CC SNe to have lower-mass progenitors (Sukhbold et al., 2016). However, due to their expected lower luminosities and rapid photometric evolution, SNe resulting from low-mass stars are likely more difficult to observe and follow up, leading to an observational bias (Nomoto et al., 1982; Janka, 2012). Recently, Calcium-rich transients have emerged as a new class of SNe showing faster photometric evolution than normal SNe, lower luminosities, and a nebular phase dominated by calcium emission (Perets et al., 2011; Kasliwal et al., 2012; Shen et al., 2019; Das et al., 2022). Ca-rich transients consist of two main sub-types: I and II. The Type I Ca-rich transients may come from the explosion of white dwarfs (WDs) or highly-stripped CC events (Kawabata et al., 2010; Tauris et al., 2015). They do not show hydrogen in their spectra and are usually found in the outskirts of early-type galaxies, in old, metal-poor environments (Perets et al., 2011). The Type II Ca-rich transients, which possibly come from a CC event, show hydrogen features. In particular, Ca-rich Type IIb have spectra similar to SNe IIb near peak light, but rapidly evolve into nebular phase (\(\sim\)30 days after explosion) and show a [Ca ii]/[O i] ratio of \(\geq 2\)(Das et al., 2022). Contrary to the Ca-rich Type I transients, Ca-rich Type IIb SNe are found in star-forming regions and suggest a new class of strongly-stripped SNe (SS-SNe) which have ejecta masses less than \(\sim 1\,\mathrm{M}_{\odot}\) and stripped low-mass He stars as their progenitors. One of the most promising ways to study the progenitor and its outermost layers is through early-time observations of SN explosions (e.g., Bersten et al., 2012; Piro and Nakar, 2013; Vallely et al., 2021). Shock-breakout is an early-time phenomenon that occurs on a timescale of a few minutes to hours when the shock from a CC explosion of a massive star breaks through the stellar surface (Waxman and Katz, 2017). For CC events, shock-breakout is a promising tool for measuring the properties of the exploding star through early-time observations of the outer layers. This manifests as X-ray and ultraviolet (UV) brightening, followed by a post-shock breakout cooling phase where most of the radiation is emitted in UV and optical and the envelope expands while cooling down. 
The time-scale of this phenomenon depends on the type of progenitor and the presence or absence of extended material around the star. For example, for normal SNe Ib or Ic this lasts a few hours (e.g., Xiang et al., 2019) and for SNe IIb with extended envelopes this is on the order of a few days (e.g., Nomoto et al., 1993). Since the shock-breakout and cooling depend on properties of the outermost layers of the progenitor, observing those via early-time observations can provide measurements of the temperature, radius, and mass of the envelope or circumstellar material (e.g., Nakar and Piro, 2014; Piro et al., 2021). Few SE-SNe have been observed to exhibit so-called "double-peaked" light curves with an initial decline due to shock cooling and a second peak due to the radioactive decay of \({}^{56}\)Ni, which normally powers the light curve. Generally, these objects were classified as SNe IIb. The first of this class was the well-studied SN 1993J (Richmond et al., 1994), followed by other SE-SNe, e.g., SN 2011dh (Arcavi et al., 2011), SN 2011fu (Morales-Garoffolo et al., 2015), SN 2013df (Morales-Garoffolo et al., 2014), and some others as shown in Prentice et al. (2020). The initial decline in SN 1993J was explained as a result of a lengthened shock cooling due to the shock passing through an extended envelope around the star instead of "breaking out" at an abrupt surface, producing a longer initial decline. Similar to that observed in various SNe IIb, a subset of Ca-rich transients have also been shown to have double-peaked light curves with an initial decline of a few days that are well fit by an extended envelope model (Das et al., 2022). For example, Ertini et al. (2023) showed that SN 2021gno, a Ca-rich SN Ib with double-peaked light curves, can be well modelled by a CC explosion of a highly-stripped, massive star. Exotic classes of SNe are being discovered by untargeted surveys such as the All-Sky Automated Survey for SuperNovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017), the Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018; Smith et al., 2020) and the Zwicky Transient Facility (ZTF; Masci et al., 2019; Bellm et al., 2019), and studied with high-precision multi-band early follow-up by programs such as the Precision Observations of Infant Supernova Explosions (POISE; Burns et al., 2021) collaboration. SN 2021csp (Fraser et al., 2021), SN 2021fxy (DerKacy et al., 2022), SN 2021gno (Ertini et al., 2023) and SN 2021aefx (Ashall et al., 2022) are all SNe followed-up by POISE, and each of them demonstrates the power of obtaining high-precision early-time observations by providing insight into their sub-class of SNe. In this work we study SN 2021bxu1 (ATLAS21dov), a peculiar SE-SN discovered by ATLAS on UT 06.3 Feb 2021 (Tonry et al., 2021) and later classified as a Type IIb SN (DerKacy, 2021). Early-time observations from POISE show a fast declining light curve followed by a \(\sim\)10 day plateau and an unusually low peak luminosity. Here we present a detailed study of this unique event with high-precision photometry and spectroscopy, explore a new parameter space for SNe and their progenitors, and derive physical quantities and compare them with other known SE-SNe. In Section 2, we provide the properties of the host galaxy. In Section 3, we describe the photometric and spectroscopic observations of SN 2021bxu. In Section 4, we analyze and compare the multi-band light curves and colour curves of SN 2021bxu with SE-SNe and Ca-rich IIb SNe from literature. 
We also derive and compare the pseudo-bolometric and bolometric light curves of SN 2021bxu with a sample of SE-SNe. In Section 5, we identify the observed spectroscopic lines, compare the line velocities to a sample of SE-SNe and Ca-rich IIb SNe, estimate the time of explosion, and compare SN 2021bxu with similar SNe. In Section 6, we outline the models used to fit the bolometric and pseudo-bolometric light curves, describe the fitting method and present the best-fit results for the explosion parameters. In Section 7, we compare and contextualize the physical parameters of SN 2021bxu with a sample of SE-SNe and Ca-rich IIb SNe. Finally, in Section 8, we summarize our work and present the conclusions. ## 2 Host Galaxy Properties The host galaxy ESO 478- G 006 is classified as an Sbc galaxy with luminosity class II-III (de Vaucouleurs et al., 1991). SN 2021bxu was discovered in one of the spiral arms of its host galaxy. Using the Fitting and Assessment of Synthetic Templates (FAST; Kriek et al., 2009), we fit stellar population synthesis models to the archival photometry (see Table A3 in Appendix A) of the host galaxy ESO 478- G 006 to obtain the global age, the total stellar mass, the star formation rate (SFR), and the specific SFR as listed in Table 1. Our fit assumes a Cardelli et al. (1989) extinction law with \(R_{V}=3.1\), a Salpeter IMF (Salpeter, 1955), an exponentially declining star-formation rate, the Bruzual & Charlot (2003) stellar population models, and allows for a variable average host galaxy extinction higher than the Galactic foreground value of \(A_{V,\rm MW}=0.045\) mag. The value of host extinction from this fit indicates an average across the entire galaxy and is not necessarily the value at the site of the SN. To estimate the local host extinction and other properties, we used integral field spectroscopy (IFS) obtained with MUSE mounted to the 8.2m Very Large Telescope (VLT) in October 2016 as a part of the All-weather MUse Supernova Integral-field of Nearby Galaxies (AMUSING; Galbany et al., 2016; Lopez-CobA et al., 2020)) survey. Two pointings were observed covering the position of SN 2021bxu (see Figure 1). Following previous analysis of IFS data (e.g., Galbany et al., 2014, 2018)), we extracted a 2'' aperture spectrum from the IFS cube centered at the SN location, and performed spectral synthesis with STARLIGHT (Cid Fernandes et al., 2005). By subtracting the best STARLIGHT fit from the observed spectrum, we get the gas-phase spectrum, where we fit Gaussian profiles to the strongest emission lines to measure the main properties of the ionized gas. In particular, the dust extinction along the line of sight is estimated using the colour excess \((E(B-V))\) from the ratio of the Balmer lines, assuming an intrinsic ratio \((I({\rm H}_{\alpha})/I({\rm H}_{\beta})=2.86\), valid for case B recombination with \(T=10,000\) K and electron density \(10^{2}\,{\rm cm}^{-3}\)(Osterbrock & Ferland, 2006), and using a Cardelli et al. (1989) extinction law. Local properties are also listed in Table 1. For the purposes of magnitude corrections for SN 2021bxu, we use a null line-of-sight host extinction at the SN site, as inferred from the absence of strong narrow Na i D line in the SN spectra. We do not use \(A_{V,\rm host}=0.4\pm 0.2\) mag from local spectroscopic analysis of the host galaxy because this measurement is of the extinction from the total line-of-sight dust column. 
Since we do not know if the SN exploded in front of the dust column or behind it, using the galaxy measurement is not optimal. However, we note that there may be up to 0.4 mag of host extinction and including this in the analysis does not change the conclusions of this study. ## 3 Data ### Photometry We present high-precision multi-band photometry of SN 2021bxu obtained with the Henrietta Swope 1.0 m telescope at Las Campanas Observatory, ASAS-SN, ATLAS, and the Panoramic Survey Telescope & Rapid Response System (Pan-STARRS; Flewelling et al., 2020; Chambers et al., 2016, see Table A1 in Appendix A). The Swope photometry, in \(BVugri\) bands, was produced using custom reduction and calibration procedures as described in Krisciunas et al. (2017) and Phillips et al. (2019) via the POISE collaboration, which builds on the legacy of the Carnegie Supernova Project (CSP; Phillips et al., 2019). The science images were host-galaxy template subtracted and the nightly zero-points were obtained on photometric \begin{table} \begin{tabular}{l c c} \hline \hline & Global (phot) & Local (spec) \\ \hline Age [yr] & \(3.2^{+5.8}_{-1.3}\times 10^{8}\) & \(9^{+11}_{-8}\times 10^{8}\) \\ \(M_{*}\) [\(M_{\odot}\)] & \(2.5^{+0.8}_{-0.3}\times 10^{10}\) & \((3.2\pm 0.2)\times 10^{6}\) \\ SFR [\({\rm M_{\odot}\,yr^{-1}}\)] & \(30^{+4}_{-7}\) & \((7.38\pm 0.09)\times 10^{-5}\) \\ sSFR [yr\({}^{-1}\)] & \(1.1^{+0.3}_{-0.1}\times 10^{-9}\) & \(2.3^{+1.2}_{-0.4}\times 10^{-11}\) \\ \(A_{V,\rm host}\) [mag] & \(1.4^{+0.3}_{-0.3}\) & \(0.4\pm 0.2\) \\ \(12+\log_{10}({\rm O/H})\) & – & \(8.71\pm 0.14\) \\ \hline \end{tabular} \end{table} Table 1: Global and Local Properties of the Host Galaxy ESO 478- G 006 Figure 1: Finding chart of SN 2021bxu in its host galaxy ESO 478- G 006 obtained with MUSE. The background image is a stellar-continuum subtracted H\(\alpha\) emission map obtained from the two MUSE pointings. The different conditions of the observations are evident, the west half of the galaxy where SN 2021bxu exploded presents a better spatial resolution (0.72\({}^{\prime\prime}\) seeing) than the east half (2.01\({}^{\prime\prime}\) seeing). nights by observing photometric standards from the Landolt (1992) and Smith et al. (2002) catalogs. Using these zero-points, we computed natural magnitudes of the local sequence stars (listed in Table A2 in Appendix A) in the field, which is then used to calibrate the Swope photometry of SN 2021bxu. The Swope photometry is ultimately in the CSP natural system. The ATLAS photometry, in \(o\) and \(c\) bands, was obtained from the ATLAS Forced Photometry server2(Shingles et al., 2021), with photometry produced as outlined in Tonry et al. (2018) and Smith et al. (2020). The ASAS-SN \(g\)-band light curve was produced using subtracted aperture photometry from ASAS-SN Sky Partor3. Finally, the Pan-STARRS (PS) observations were taken with both 1.8 m telescope units located at the summit of Haleakala (Chambers et al., 2016), in an SDSS-like filter system, denoted as \(gri\)\({}_{\rm PS}\), and a broad \(w\)\({}_{\rm PS}\) filter, which is a composite of the \(gri\)\({}_{\rm PS}\) filters. Pan-STARRS data are processed in real-time as described in Magnier et al. (2020), Magnier et al. (2020) and Waters et al. (2020). 
The data are subject to difference imaging with the Pan-STARRS1 3\(\pi\) sky survey data (Chambers et al., 2016) used as references, and photometric zero-points on the target images were set with field stars from the Pan-STARRS1 3\(\pi\) catalogue (Flewelling et al., 2020). Footnote 2: [https://fallingstar-data.com/forcedphot/](https://fallingstar-data.com/forcedphot/) Footnote 3: [https://asas-sn.osu.edu/](https://asas-sn.osu.edu/) All light curves from Swope, ATLAS, ASAS-SN, and Pan-STARRS are shown in Figure 2. The Pan-STARRS \(w\) band has a non-detection with a 5\(\sigma\) upper limit of \(m_{\rm wps}>22.3\) mag 33 days before discovery showing no previous outbursts and an upper limit of \(m_{\rm wps}>21.7\) mag 216 days after the last measurement from Swope. The ATLAS \(o\) band has a non-detection with a 5\(\sigma\) upper limit of \(m_{o}>19.6\) mag 6.02 days before discovery and an upper limit of \(m_{o}>19.8\) mag 101 days after the last measurement from Swope. ASAS-SN also has a non-detection with a 5\(\sigma\) upper limit of \(m_{g}>17.6\) mag 6.16 days before discovery along with the first detection at a maximum of \(m_{g}=17.17\pm 0.09\) mag 0.2 days before discovery. The non-detection and the first detection can help constrain the time of explosion (see Section 5.3). We obtained UV observations from the Ultra-Violet Optical Telescope (UVOT; Roming et al., 2005) on the Neil Gehrels Swift Observatory (Gehrels et al., 2004) about 20 days after the estimated explosion but did not detect the SN. Pre-explosion imaging from 2018 and 2019 is available in the \(U\) and \(UVW1\) filters because of the Swift Gravitational Wave Galaxy Survey, the intent of which is to obtain galaxy template images before the detection of transients (Klingler et al., 2019). Using the pipeline from the Swift Optical Ultraviolet Supernova Archive (SOUSA; Brown et al., 2014), we measure upper limits on MJD 59266.3 of \(UVW1>19.1\) mag and \(U>19.1\) mag. These magnitude limits are in the UVOT/Vega system using zero points from Hereveld et al. (2011), the time-dependent sensitivity correction from September 20204, and an aperture correction updated in 2022. Subsequent observations over the next 10 days yield similar limits and are available from the SOUSA. Footnote 4: [https://heasarc.gsfc.nasa.gov/docs/heassarc/caldb/swift/docs/uvot/uvotcaldb_throughput_86.pdf](https://heasarc.gsfc.nasa.gov/docs/heassarc/caldb/swift/docs/uvot/uvotcaldb_throughput_86.pdf) There are no data from ZTF or the Transi \begin{table} \begin{tabular}{l r} \hline RA (J2000) & 02:09:16.47 \\ \(\delta\) (J2000) & \(-23\):24:45.15 \\ \(z\) 1 & 0.0178 \\ Host Galaxy & ESO 478- G 006 \\ Host offset 2 & 9.2 \(\pm\) 0.6 kpc \\ \(E\,(B-V)_{\rm MW}\)3 & 0.014 mag \\ \(E\,(B-V)_{\rm Host}\) & – 0 mag \\ \(\mu\)4 & 34.28 \(\pm\) 0.16 mag \\ \(D_{\rm L}\)5 & 72 \(\pm\) 5 Mpc \\ Last Non-Detection (MJD) & 59245.12 days \\ Discovery (MJD) & 59251.28 days \\ Estimated Explosion (MJD) 6 & 59246.3 \(\pm\) 0.4 days \\ \hline \end{tabular} \end{table} Table 2: Properties of SN 2021bxu Figure 2: Multi-band light curves of SN 2021bxu from Swope \(BVugri\), ATLAS \(oc\), ASAS-SN \(g\) and Pan-STARRS \(grizy\). Non-detections in ASAS-SN \(g\) and ATLAS \(o\) are shown as points with downward-facing arrows. Vertical red line-segments at the bottom of the plot mark the epochs of spectroscopic observations and the blue line-segment marks the discovery. The estimated time of explosion is further explained in Section 4.3. 
Satellite (TESS; Ricker et al., 2016; Fausnaugh et al., 2021; Fausnaugh et al., 2022) for this object during the time period of interest. To convert from apparent to absolute magnitudes we need the distance modulus and extinction corrections. The distance modulus for SN 2021bxu (\(\mu=34.28\pm 0.16\) mag) is measured using precise redshift-independent distance measurement using the Type Ia SN 2009le that exploded in the host galaxy (ESO 478- G 006; Scolnic et al., 2018). We infer no host-galaxy extinction at the site of the SN due to a lack of narrow Na i D lines in the SN spectra (Phillips et al., 2013). Therefore, using \(\mu\) and the Galactic extinction correction (\(E(B-V)_{\rm MW}=0.014\) mag; Schlafly & Finkbeiner, 2011), we obtain the absolute magnitudes listed in Table A1. Some of the properties of SN 2021bxu are listed in Table 2 and the photometric analysis is further discussed in Section 4. ### Spectroscopy Along with the photometry, we also have a total of ten spectroscopic observations for SN 2021bxu. These include optical spectra with spectral range of roughly \(4000-9000\) A obtained from the Dual Imaging Spectrograph (DIS) on the Apache Point Observatory (APO), the ESO Faint Object Spectrograph and Camera v.2 (EFOSC; Buzzoni et al., 1984) on the New Technology Telescope (NTT), the Alhambra Faint Object Spectrograph and Camera (ALFOSC) on the Nordic Optical Telescope (NOT), the Gemini Multi-Object Spectrograph (GMOS; Hook et al., 2004) on the Gemini North telescope, and the Inamori-Magellan Areal Camera and Spectrograph (IMACS; Dressler et al., 2011) on the Magellan Baade telescope. The spectra from APO and Baade were reduced using standard ira5 packages with the methods as described in Hamuy et al. (2006) and Palatelli et al. (2013). The NOT data were taken as a part of the NUTS2 collaboration6 and reduced using the FOSCGUI pipeline7. The NTT spectra were obtained through the Public European Southern Observatory Spectroscopic Survey of Transient Objects (PESSTO) program, and reduced using the data reduction pipeline described in Smartt et al. (2015). The Gemini spectra were reduced using a custom-made iraf routine. Dates and phases of all spectra are listed in Table 3. Footnote 5: The Image Reduction and Analysis Facility (iraf) is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation. Footnote 6: [https://muts.sine.ie/](https://muts.sine.ie/) Footnote 7: FOSCGUI is a Python-based graphic user interface (GUI) developed by E. Cappellaro and aimed at extracting SN spectroscopy and photometry obtained with FOSC-like instruments. A package description can be found at [https://sngroup.oapd.inaf.it/foscgui.html](https://sngroup.oapd.inaf.it/foscgui.html) Figure 3 shows the spectral sequence. The original unbinned spectra are in gray and the higher signal-to-noise ratio (SNR), binned spectra are in colours. The unbinned spectra are resampled at a resolution of 10 A using SpectRes (Carnall, 2017) to produce the binned spectra. The two NTT spectra on UT 22 Feb 2021 are taken only one half-hour apart and, since they are on the same telescope and instrument, we average them to produce a combined spectrum with a higher SNR. This combined spectrum is used for all following analysis. 
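As an illustration of the magnitude corrections described above, the following minimal Python sketch converts an apparent magnitude to an absolute magnitude using the adopted distance modulus and Galactic reddening. The \(r\)-band extinction coefficient and the example apparent magnitude are assumed, illustrative values and are not quantities taken from this work.

```python
# Apparent -> absolute magnitude with the adopted distance modulus and Galactic reddening.
# The r-band extinction coefficient R_r and the example apparent magnitude are assumed,
# illustrative values; they are not quantities taken from this work.
mu = 34.28              # distance modulus [mag] (Table 2)
ebv_mw = 0.014          # Galactic E(B-V) [mag] (Schlafly & Finkbeiner 2011)
R_r = 2.6               # assumed r-band total-to-selective extinction ratio

def absolute_mag(m_apparent, mu, ebv, R_band):
    """Correct an apparent magnitude for distance and foreground extinction."""
    return m_apparent - mu - R_band * ebv

print(absolute_mag(17.46, mu, ebv_mw, R_r))   # ~ -16.86, comparable to the peak M_r quoted in Section 4
```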
## 4 Photometric Analyses ### Multi-band Light Curves The multi-band light curves of SN 2021bxu are presented in Figure 2, ranging from the \(u\) band to the \(i\) band. There is a decrease of \(\sim\)1.7 mag in the \(u\) band from \(\sim\)17.9 to \(\sim\)19.5 mag in the first 3 days and thereafter it keeps decreasing almost linearly, but with a shallower slope. However, moving to the optical bands, a plateau starts appearing, which becomes more prominent in the redder bands. For example, in the \(g\) band, the brightness declines by \(\sim\)1.5 mag in the first \(\sim\)5 days after discovery and is followed by a plateau where the brightness stays roughly constant for the next \(\sim\)10 days before declining again. Looking at the \(i\) band, where it gets moderately bright again, this plateau may be interpreted as a second peak. Using the distance modulus \(\mu=34.28\pm 0.16\) mag from Table 2, we obtain a peak \(r\)-band magnitude of \(M_{r}=-16.86\pm 0.16\) mag at the first epoch. After the initial decline, the absolute magnitude during the plateau phase is \(M_{r}=-15.93\pm 0.16\) mag as estimated from the \({}^{56}\)Ni peak (see Section 6). In Figure 4, we show the light curves of SN 2021bxu in \(BVugri\) bands normalized to the bolometric \({}^{56}\)Ni peak (discussed in Section 6) in each band along with a sample of various types of SE-SNe (IIb, Ib, and Ic) from Stritzinger et al. (2018), a sample of SNe II from Anderson et al. (2014), and a sample of Ca-rich Type IIb SNe from Das et al. (2022). The most obvious distinction between SN 2021bxu and the rest of the sample is the presence of a strong initial decline in brightness. Although dissimilar to most SNe II having the typical long plateau phase, SN 2021bxu shows similarities to the light curves of Type II-L SN 2001fa and SN 2007fz (Faran et al., 2014) including an initial decline and rise to a second peak. However, SN 2021bxu has much less H in its spectra than the two SNe II-L. SN 2001fa and SN 2007fz have been proposed as a link between SNe II and SNe IIb because of the intermediate-strength H\(\alpha\) P-Cygni profile between those SN types. Nevertheless, Pessi et al. (2019) find no evidence of a continuum between SNe II and SNe IIb. On the other hand, the overall shape of SN 2021bxu's later decline matches well with the SE-SNe and especially well with the Ca-rich IIb SNe and with SNe IIb. The Ca-rich IIb SNe show a wide range of initial declines depending on the properties of the external layers of the progenitor (Das et al., 2022). However, the later decline in the \(g\) band for SN 2021bxu is almost identical to that of the Ca-rich IIb and SNe IIb samples. 
### Colour Curves With the available multi-band photometry, we produce the Milky Way reddening corrected \(B-V\), \(u-g\), \(g-r\), and \(r-i\) colour curves for SN 2021bxu, presented in Figure 5. \begin{table} \begin{tabular}{c c c c c} \hline \hline Date & MJD & Phase & Telescope & Instrument \\ (UT) & (days) & (days) & & \\ \hline 08 Feb 2021 & 59253.07 & 6.7 & APO & DIS \\ 12 Feb 2021 & 59257.02 & 10.7 & NTT & EFOSC2 \\ 14 Feb 2021 & 59259.84 & 13.5 & NOT & ALFOSC \\ 17 Feb 2021 & 59262.22 & 15.9 & Gemini & GMOS-N \\ 22 Feb 2021 & 59267.01 & 20.7 & NTT & EFOSC2 \\ 22 Feb 2021 & 59267.04 & 20.7 & NTT & EFOSC2 \\ 22 Feb 2021 & 59267.05 & 20.7 & Baade & IMACS \\ 02 Mar 2021 & 59275.04 & 28.7 & Baade & IMACS \\ 03 Mar 2021 & 59276.01 & 29.7 & NTT & EFOSC2 \\ 04 Mar 2021 & 59277.00 & 30.7 & NTT & EFOSC2 \\ \hline \end{tabular} \end{table} Table 3: Spectroscopy of SN 2021bxu. Phases are relative to the estimated explosion epoch (MJD 59246.3). The \(u-g\) colour quickly reaches its reddest value of \(\sim\)1.7 mag \(\sim\)15 days after explosion. The \(B-V\) and \(g-r\) colours follow a trend similar to each other and get to their reddest value of \(\sim\)1.1 mag \(\sim\)25 days after explosion. The \(B-V\) colour shows an unexpected change of slope around \(\sim\)16-19 days after explosion corresponding to the time of the plateau phase. On the other hand, starting off at the bluest colour, the \(r-i\) colour shows a slow but steady increase from \(\sim-0.1\) mag to \(\sim\)0.3 mag over the \(\sim\)40 days after explosion. We compare the colour curves of SN 2021bxu to templates from Stritzinger et al. (2018b) for SE-SNe Ib, Ic, and IIb as well as SNe II. Figure 5 shows the colour curves of SN 2021bxu together with the templates. The \(B-V\) colour of SN 2021bxu is consistent with SNe Ib until \(\sim\)5 days after \(B\)-band maximum but it is bluer by \(\sim\)0.4 mag around day 12. The \(u-g\) colour follows SNe Ib up to \(\sim\)5 days after \(g\)-band maximum, but resembles SNe IIb thereafter. On the other hand, the \(g-r\) colour does not seem to match any of the types except the first \(\sim\)3 days where it matches with SNe II; it is too blue for SE-SNe and too red for SNe II after that. The \(B-V\) and the \(g-r\) colours peak \(\sim\)10 days after the templates peak, although the templates only go up to 20 days past maximum. Finally, the \(r-i\) colour matches the trend of SNe IIb given the error bars on the data and the model. Figure 3: Spectra of SN 2021bxu from APO (green), NTT (red), NOT (orange), Gemini (blue), and Baade (purple) from 6.74 to 30.68 days after its estimated explosion (MJD 59246.3), all listed in Table 3. The spectrum from NTT at +20.7 days is an average of two spectra taken on the same night using the same instrument only half an hour apart. Although SN 2021bxu shows a plateau in its light curve, the colour curves do not match those of SNe II. ### Bolometric Light Curves Our multi-band photometry is used to determine the properties of the SN such as its luminosity, radius and photospheric temperature evolution. We construct the bolometric light curve for SN 2021bxu by using the multi-band photometry in the Swope \(BVugri\) bands after converting magnitudes to monochromatic fluxes, correcting for the Milky-Way extinction of \(E(B-V)_{\rm MW}=0.014\) mag, and using the distance of \(D_{\rm L}=72\pm 5\) Mpc from Table 2. We use the magnitude offsets and filter profiles from the CSP webpage8. 
If a certain band lacked observations on a given epoch, we interpolated using Gaussian Processes with the scikit-learn(Pedregosa et al., 2011) Python library. We fit each epoch's spectral energy distribution (SED) with a Planck blackbody function. The bolometric luminosity is computed by directly integrating the flux density into the available bands and using the blackbody fits to extrapolate to the unobserved wavelengths. The effective photospheric temperature is the best-fit temperature from the blackbody fits, and the radius is then computed from luminosity and temperature using the Stefan-Boltzmann law. The errors on luminosity, temperature, and radius are derived from a Monte Carlo procedure using the errors of the original photometry. Photometric precision is high with errors \(<1\%\) which means majority of the systematic errors come from the uncertainty in the distance estimate and the assumption of a blackbody SED. The uncertainty in the distance corresponds to a fractional uncertainty in luminosity of \(L_{-16\%}^{+13\%}\), which would cause the light curve to shift up or down sys Figure 4: The gray light curves show SNe II, IIb, Ib, and Ic from the Carnegie Supernova Project (Stritzinger et al., 2018; Anderson et al., 2014) and Ca-rich Type IIb SNe from Das et al. (2022). The red light curves in each subplot are SN 2021bxu. The \(g\)-band panel shows the last non-detection from ASAS-SN as the downward-pointing arrow. The epoch of maximum light for normalization is chosen as the peak of the \({}^{56}\)Ni component of the light curve (see Section 6). tematically. Figure 6 shows the bolometric luminosity, temperature, and radius evolution. The bolometric luminosity starts at \(L_{\rm bol}\sim 3.5\times 10^{42}\) erg s\({}^{-1}\) at discovery, drops down to \(L_{\rm bol}\sim 6.6\times 10^{41}\) erg s\({}^{-1}\) during the initial decline, stays flat at that value for \(\sim\)10 days defining a plateau, and then declines again. The radius increases almost linearly with time initially until \(\sim\)25 days past explosion, indicating that the black-body extrapolation is reasonable and the black-body radius roughly follows the photospheric radius. At later times, the black-body radius starts declining, although this may lack physical meaning since the black-body approximation is not good at these times. The photospheric temperature drops from 13,500 K to 7000 K during the initial photometric decline, suggesting a rapid cooling of the ejecta, and steadily declines thereafter. Martinez et al. (2022) show that SNe II have a roughly constant temperature evolution during their plateau at \(\sim\)6000 K due to H-recombination. The temperature evolution of SN 2021bxu during the plateau is not constant; it decreases from \(\sim\)7000 K to \(\sim\)5000 K. Combined with the lack of H\(\alpha\) emission, this suggests that H-recombination is likely not responsible for the observed plateau in the light curve, unlike SNe II. Pseudo-bolometric light curves are used for a direct comparison to similar SNe. Instead of the full wavelength range, pseudo-bolometric light curves are defined only within a finite range usually covering the wavelengths of the observed bandpasses used. We compute the pseudo-bolometric light curve by integrating the fluxes in the range \(4000-10000\) A and compare it with a sample from Prentice et al. (2020) and SN 2021gno (Ertini et al., 2023), due to their potential similarities in the light curve shape, as seen in Figure 7. 
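Before turning to the comparison in Figure 7, the snippet below is a minimal sketch of the bolometric construction described above: a Planck-function fit to one epoch's SED followed by the Stefan-Boltzmann law. The band wavelengths and fluxes are illustrative placeholders rather than the measured Swope photometry, and the full analysis additionally uses the CSP filter profiles, Gaussian Process interpolation, direct integration over the observed bands, and Monte Carlo error propagation.

```python
import numpy as np
from scipy.optimize import curve_fit

# cgs constants and the adopted distance (Table 2)
h, c, k_B, sigma_sb = 6.626e-27, 2.998e10, 1.381e-16, 5.670e-5
D_L = 72.0 * 3.086e24          # 72 Mpc in cm

def planck_flambda(wave_aa, T, scale23):
    """Scaled blackbody F_lambda at wavelength in Angstroms; scale23 is the scale in units of 1e-23."""
    lam = wave_aa * 1e-8       # cm
    B = 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k_B * T))
    return scale23 * 1e-23 * B

# Illustrative effective wavelengths (roughly u, B, g, V, r, i) and a synthetic one-epoch
# SED: a 9000 K blackbody with 3 per cent noise.  These are NOT the measured Swope fluxes.
waves = np.array([3600., 4400., 4800., 5400., 6200., 7600.])
rng = np.random.default_rng(1)
fluxes = planck_flambda(waves, 9000., 1.3) * rng.normal(1.0, 0.03, waves.size)
errs = 0.03 * fluxes

# Fit the Planck function to the SED
(T_bb, scale23), _ = curve_fit(planck_flambda, waves, fluxes, p0=[8000., 1.0], sigma=errs)

# Bolometric flux of the fitted blackbody (the analysis above integrates the observed bands
# directly and uses the fit only to extrapolate), then luminosity and Stefan-Boltzmann radius
F_bol = scale23 * 1e-23 * sigma_sb * T_bb**4 / np.pi    # int B_lambda dlambda = sigma T^4 / pi
L_bol = 4.0 * np.pi * D_L**2 * F_bol                    # erg s^-1
R_bb = np.sqrt(L_bol / (4.0 * np.pi * sigma_sb * T_bb**4))   # cm

print(f"T = {T_bb:.0f} K,  L = {L_bol:.2e} erg/s,  R = {R_bb:.2e} cm")
```

The synthetic example is scaled so that the recovered temperature, luminosity, and radius come out at the same order of magnitude as the values shown in Figure 6, purely for readability.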
The slope of the initial decline is similar to that of SN 1993J and SN 2021gno; however, they both have a distinct second peak, whereas SN 2021bxu shows a plateau. SN 2021gno has the lowest luminosity with rapid photometric evolution and low explosion energy, \({}^{56}\)Ni mass and ejecta mass, characteristic of Ca-rich SNe (Ertini et al., 2023). We note again that SN 2021bxu is unique due to its low peak luminosity at \(\log(L_{\rm pseudo}/{\rm erg\,s}^{-1})=42.0\) and a distinct plateau phase at \(\log(L_{\rm pseudo}/{\rm erg\,s}^{-1})\sim 41.6\) from \(\sim\)10 to \(\sim\)20 days post-explosion. This plateau is possibly due to an underlying secondary peak from the radioactive decay of \({}^{56}\)Ni similar to SN 1993J and SN 2021gno (further discussed in Sections 6 and 7). Figure 5: Colour curves of SN 2021bxu using Swope photometry compared to the Type Ib, Ic, and IIb colour curve templates from the CSP SE-SNe sample. We create a template for \(B-V\) colour of a Type II SN using SN 2014G (Bose et al., 2016) and for \(g-r\) and \(r-i\) colours of a Type II SN using DES15E1iuh (de Jaeger et al., 2020). There were no \(u\)-band data available for Type II SNe, so we do not have a Type II template for the \(u-g\) colour. We do not plot colour curves for Ca-rich IIb SNe due to the lack of a statistical sample or a representative SN. Figure 6: _Top:_ Bolometric light curve of SN 2021bxu from integrating the flux in the available bands with blackbody extrapolation. The systematic error due to the uncertainty in distance is shown as \(L_{-16\%}^{+13\%}\). _Middle:_ Best-fit temperature from the blackbody fits. _Bottom:_ Radius calculated using the best-fit blackbody temperature and bolometric luminosity. ## 5 Spectroscopic Analysis ### Line IDs Due to the homologous expansion of the ejecta, measuring line velocities as a function of time allows us to examine the chemical composition of the ejecta, and understand the structure and the mixing within the explosion. We identify the lines by comparing the spectral features to the literature and cross-checking with measured velocities (Section 5.2). There is a total of ten optical spectra ranging from 7 to 31 days after the estimated time of explosion. This allows us to explore the velocity evolution of the absorption features. The main absorption features are labeled in the spectra shown in Figure 3. We identify strong absorption features from He i\(\,\lambda\)5876 and He i\(\,\lambda\)6678 along with the weaker hydrogen Balmer series (H\(\alpha\)\(\,\lambda\)6563, H\(\beta\)\(\,\lambda\)4861, and H\(\gamma\)\(\,\lambda\)4340). The presence of strong helium with some hydrogen is characteristic of a Type IIb SN and confirms the typing of SN 2021bxu as a Type IIb. Absorption features from heavier elements such as O i\(\,\lambda\)7774, Si ii\(\,\lambda\lambda\)4130, 5972, 6355, Ca ii H&K \(\lambda\lambda\)3934, 3969, the IR-triplet \(\lambda\lambda\)8498, 8542, 8662, and a forest of Fe ii lines including the strong Fe ii\(\,\lambda\)5169, are also present. We also identify absorption features from neutron capture elements such as Sc ii\(\,\lambda\lambda\)5527, 5698, 6280 and Ba ii\(\,\lambda\)6142 in SN 2021bxu. Disentangling whether these are r- or s-process elements is beyond the scope of this study. Interestingly, the neutron capture elements are usually found in SN 1987A-like objects (Williams, 1987; Tsujimoto & Shigeyama, 2001). Sc and Ba have also been observed in the sub-luminous SN Ia PTF 09dav (Sullivan et al., 2011). 
This demonstrates that, although SN 2021bxu is formally classified as a SN IIb, it has some spectroscopic similarities to SNe II and SNe Ia. ### Line Velocities We measure the Doppler shift of the minima of the spectral features to determine the velocities and chemical composition of the ejecta. The velocities for He i\(\,\lambda\)\(\lambda\)5876, 6678, 7065, H\(\alpha\)\(\,\lambda\)6563, Fe ii\(\,\lambda\)5169, and O i\(\,\lambda\)7774 are computed using misfits9(Holmbo, 2020), which is an interactive tool used to measure spectral features in spectra of transients and calculate their errors. Within misfits, we smooth the spectrum by applying a low-pass filter to the Fourier-transformed data as described in Marion et al. (2009) and obtain best-fit Gaussians to the absorption features with a fixed local continuum. The best-fit mean of the Gaussian with the associated error from Monte Carlo iterations is taken to be the absorption feature's observed wavelength which is then converted to a line velocity. Footnote 9: [https://github.com/sholmbo/misfits](https://github.com/sholmbo/misfits) Figure 8 shows the velocity evolution of the selected features. The velocity of the H\(\alpha\)\(\,\lambda\)6563 feature stays constant at \(\sim 7200\) km s\({}^{-1}\) from \(\sim\)10 to \(\sim\)30 days after explosion, demonstrating that at these early phases the photosphere has already reached the bottom of the H layer. Interestingly, the feature from Si ii\(\,\lambda\)6355 as seen in Figure 3 could have some contribution from a high velocity H\(\alpha\)\(\,\lambda\)6563 component at \(\sim 13,\)000 km s\({}^{-1}\). This also matches with the small feature just bluer to H\(\beta\)\(\,\lambda\)4861. Therefore, there could be a detached high velocity H component in the ejecta. It may be the case that this comes from interaction with an extended envelope. The velocity of the He i\(\,\lambda\)5876 line decreases as a function of time as the photosphere recedes, from \(\sim 7000\) km s\({}^{-1}\) at day +7 to \(\sim 5700\) km s\({}^{-1}\) at day +21. At these phases the He i\(\,\lambda\)5876 line velocity follows the photospheric velocity, after which it plateaus, demonstrating that the base of the He layer is at \(\sim 5500\) km s\({}^{-1}\). The velocity of the He i\(\,\lambda\)7065 line is only measured in the later two spectra, thus not showing its early evolution and making it unreliable for analysis. In SE-SNe the He lines require non-thermal excitation. Therefore they get stronger over time as the density of the ejecta decreases and the mean free path of the \(\gamma\)-rays can increase (Lucy, 1991). The O i\(\,\lambda\)7774 line shows the lowest velocity at \(<4700\) km s\({}^{-1}\) throughout the time range. This is expected as the progenitor star would have a layered structure prior to explosion where heavier elements are further in, towards the center of the star. The Fe ii\(\,\lambda\)5169 feature shows a similar decline in velocity to that of other lines at later times. However, it has slightly higher velocities than oxygen, possibly caused by the fact that there could be primordial Fe-mixing throughout the progenitor star, leading to Fe in the top layers. Due to the Einstein coefficient values of Fe ii\(\,\lambda\)5169, only a small abundance is sufficient to produce the opacity needed for a strong line. 
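As an aside, the conversion from a measured absorption minimum to a line velocity used above (within misfits, via Gaussian fits to the smoothed spectra) reduces to a Doppler shift relative to the host rest frame. The sketch below shows that step with an assumed, illustrative observed wavelength for the He i \(\lambda\)5876 minimum.

```python
# Doppler conversion from an absorption-minimum wavelength to a line velocity.
# The observed wavelength below is an illustrative placeholder, not a measured value.
C_KMS = 299792.458                       # speed of light [km/s]
Z_HOST = 0.0178                          # host redshift (Table 2)

def line_velocity(lambda_obs, lambda_rest, z_host=Z_HOST):
    """Relativistic Doppler velocity of a feature after removing the host redshift (negative = blueshift)."""
    lam = lambda_obs / (1.0 + z_host)    # shift to the host rest frame
    r2 = (lam / lambda_rest) ** 2
    return C_KMS * (r2 - 1.0) / (r2 + 1.0)

# Example: a He I 5876 minimum measured near 5860 Angstroms in the observed frame
v = line_velocity(5860.0, 5875.6)
print(f"{v:.0f} km/s")                   # roughly -6100 km/s, i.e. |v| of the order shown in Figure 8
```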
This Figure 8: Evolution of line velocities of He i\(\,\lambda\)5876, \(\lambda\)6678, \(\lambda\)7065, H\(\alpha\)\(\,\lambda\)6563, Fe ii\(\,\lambda\)5169, and O i\(\,\lambda\)7774 measured from the spectra using misfits. Only those epochs are included for each line where a clear measurement was possible. Note that the Fe ii\(\,\lambda\)5169 velocity evolution may be uncertain because it comes from a blend of spectral features in the forest of Fe lines. Figure 7: Pseudo-bolometric light curve of SN 2021bxu shown in red compared with a sample of Type IIb SNe from Prentice et al. (2020) and Ca-rich SN 2021gno from Ertimi et al. (2023) in the range 4000 – 10000 Å. The time axis for all SNe has the zero-point at the estimated time of explosion. Fe ii\(\lambda\)5169 line is used to break the Arnett degeneracy in modelling the bolometric light curve (see Section 6). In Figure 9, we compare our measurements of line velocities with Liu et al. (2016), which provides a spectroscopic sample of SE-SNe, and with Das et al. (2022), which provides spectroscopic measurements for Ca-rich IIb SNe. We compute a rolling median for each SN type with a bin size of five days where the shaded regions represent the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles of the distribution in the bin indicating the dispersion of velocities. The sample of SNe IIb has velocities lower than those of SNe Ib for most of the lines with Ca-rich IIb SNe showing velocities similar to those of SNe IIb. All line velocities for SN 2021bxu are consistently lower than the median of all types by at least \(\sim 1500\) km s\({}^{-1}\), emphasizing its uniqueness and implying that the kinetic energy of SN 2021bxu is lower than that of typical SE-SNe. ### Time of Explosion Using line velocities of He i\(\lambda\)5876 from the spectra (see Section 5.2) along with radii computed from the black-body fits, we trace back to the time of explosion. Assuming a homologous expansion of ejecta, the relation is given by \(t_{\rm exp}\propto R_{\rm BB}/v_{\rm ph}\), which becomes \(t_{\rm exp}\approx R_{\rm BB}/v_{\rm ph}\) for small initial radius compared to the post-explosion radius of the ejecta. We expect the estimated explosion time to fall between the last non-detection (6.01 days before discovery) and the discovery. We choose measurements from two spectra where the He i\(\lambda\)5876 velocity is still linearly declining and has not plateaued, ensuring that these values are representative of the photosphere and not the base of the He layer. Using the values \(v_{\rm ph}=\{6600\pm 300,6400\pm 200\}\) km s\({}^{-1}\) and \(R_{\rm BB}=\{(6.1\pm 0.2)\times 10^{14},(7.6\pm 0.3)\times 10^{14}\}\) cm, we obtain the averaged time of explosion within the expected window, at \(5.0\pm 0.4\) days before discovery, on MJD 59246.3 \(\pm\) 0.4 days. This is the estimated time of explosion we adopt throughout this paper. The uncertainty on the time of explosion is purely statistical and is appropriately propagated from the uncertainties on radius and velocity. ### Spectral Comparison with Similar Supernovae In this section, we compare the spectrum of SN 2021bxu near the peak due to \({}^{56}\)Ni with similar SNe near their peaks. Figure 10, along with the line IDs, shows spectra for a Type II-pec SN 1987A, a Type IIb SN 1993J, a Ca-rich Type Ib SN 2016hgs and SN 2021gno, and Ca-rich Type IIb SNe 2018gix, and 2019pof. These SNe, except SN 1987A, are chosen for comparison because of their similarities either in light curve shape or spectral features. 
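Returning briefly to the explosion-time estimate of Section 5.3, the short calculation below reproduces it from the quoted photospheric velocities and blackbody radii. Associating these values with the +10.7 and +13.5 day spectra (MJD 59257.02 and 59259.84 in Table 3) is an assumption made here for illustration.

```python
import numpy as np

# Back-tracing the explosion epoch: t_exp ~ t_spec - R_BB / v_ph (homologous expansion, R(0) ~ 0).
# Velocities and radii are the two pairs quoted in Section 5.3; pairing them with the
# +10.7 d and +13.5 d spectra (MJD 59257.02 and 59259.84, Table 3) is an assumption made here.
DAY = 86400.0                                    # seconds per day
mjd_spec = np.array([59257.02, 59259.84])
v_ph = np.array([6600.0, 6400.0]) * 1.0e5        # He I 5876 velocities in cm/s
R_bb = np.array([6.1e14, 7.6e14])                # blackbody radii in cm

t_since_explosion = R_bb / v_ph / DAY            # ~[10.7, 13.7] days
mjd_exp = mjd_spec - t_since_explosion
print(mjd_exp, mjd_exp.mean())                   # mean ~59246.2, consistent with MJD 59246.3 +/- 0.4
```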
SN 1987A is included for its strong H lines showing recombination. SN 2021bxu is dissimilar to SN 1987A due to the lack of strong hydrogen Balmer features (H\(\alpha\)\(\lambda\)6563, H\(\beta\)\(\lambda\)4861, and H\(\gamma\)\(\lambda\)4340) and also due to the absence of the Na i D \(\lambda\lambda\)5890, 5896 doublet lines. However, SN 2021bxu does show features from the neutron capture elements Sc ii\(\lambda\lambda\)5527, 5698, 6280 and Ba ii\(\lambda\)6142 which are typically seen in SN 1987A-like objects. Figure 9: Evolution of line velocities of SN 2021bxu compared to a sample of SNe IIb, Ib, Ic from Liu et al. (2016) and Ca-rich IIb SNe from Das et al. (2022). The open markers show a rolling median for each SN type with a bin size of five days where the shaded regions represent the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles of the distribution in the bin indicating the dispersion of velocities. In the case of double-peaked light curves, the phases are relative to the second bolometric peak. Values for SN 2021bxu are shown as filled red circles. There are also no signatures of strong H emission, which is usually seen in SNe IIP, hence providing further evidence that the plateau in SN 2021bxu is unlikely to be caused by H-recombination. SN 2021bxu is more similar to SN 1993J, reinforcing the typing of SN 2021bxu as a Type IIb. Comparing the spectral time-series of SN 2021bxu with SN 1993J at similar epochs in Figure 11, we note that both SNe display similar absorption lines in their spectra. SN 1993J shows broader features at higher velocities between 10,000 and 16,000 km s\({}^{-1}\) (Garnavich & Ann, 1994) compared to the velocities of SN 2021bxu that we measure in the range \(4000<v<7000\) km s\({}^{-1}\). This points to SN 2021bxu having lower energies and masses than SN 1993J. Although they both show similar features, SN 2021bxu evolves more quickly, showing strong metal features from the Ca ii IR-triplet and O i \(\lambda\)7774 by day 30. The He i \(\lambda\)5876 and He i \(\lambda\)6678 features also quickly get deeper, showcasing the fast evolution of SN 2021bxu. Moreover, SN 1993J shows weaker He i\(\lambda\)5876 absorption than SN 2021bxu but H\(\alpha\)\(\lambda\)6563 absorption of similar depth. Due to their spectral similarities, it may be the case that SN 2021bxu and SN 1993J are similar objects, both with a large initial decline but with different amounts of \({}^{56}\)Ni, energy and mass. We discuss this further in Section 7. Figure 10: NOT spectrum of SN 2021bxu near the \({}^{56}\)Ni peak shown in red compared to the NOT spectrum of Ca-rich Ib SN 2021gno (Ertini et al., 2023), the Asiago Observatory spectrum of Type IIb SN 1993J (Barbon et al., 1995), the Keck-LRIS spectrum of Ca-rich Type Ib SN 2016hgs (De et al., 2018), the Palomar-20inch spectrum of Ca-rich Type IIb SN 2019pof (Das et al., 2022), the NTT spectrum of Ca-rich Type IIb SN 2018gjx (Prentice et al., 2020), and the International Ultraviolet Explorer (IUE) spectrum of Type II SN 1987A (Pun et al., 1995). Major absorption features in SN 2021bxu are marked on the plot along with the two telluric regions shown in gray bands. All phases in this figure are relative to the \({}^{56}\)Ni peak of each SN. The Ca-rich SNe show characteristically strong Ca ii compared to O i. During the photometric phases, SN 2021bxu shows comparably strong Ca ii absorption to that of the Ca-rich Ib and IIb SNe. 
One of the defining properties of Ca-rich transients is that they quickly transition to nebular phase marked by Ca ii and O i emission. For instance, SN 2019ehk (Ca-rich IIb) started exhibiting nebular features as early as 30 days after explosion. SN 2021bxu does not show any nebular features within \(\sim\)30 days after explosion. Due to the lack of late phase spectra for SN 2021bxu, we cannot directly compare it to Ca-rich SNe using the [Ca ii]/[O i] ratio. However, the hydrogen-rich SN 2021bxu shows dissimilarities to the Ca-rich Ib SNe (SN 2021gno, SN 2016hgs), which lack hydrogen in their spectra, but is spectroscopically similar to IIb (SN 1993J) and hydrogen- and Ca-rich IIb SNe (SN 2018gjx) near peak. ## 6 Modelling To understand the origin of the unique shape of SN 2021bxu's light curve, we analyze the explosion by fitting the bolometric and pseudo-bolometric light curves of SN 2021bxu with SN explosion models from the literature. We do this by using a two-component model where the first component is the initial cooling phase, and the second component is the radioactive decay of \({}^{56}\)Ni including \(\gamma\)-ray leakage. The analytic model we consider for the initial cooling phase is from Piro et al. (2021, hereafter P21). P21 describes the shock interaction with the extended material surrounding the progenitor star once it is in thermal equilibrium and in homologous expansion phase, given a two-component density profile with steep radial dependence in the outer region and shallower radial dependence in the inner region. The free parameters in this model are the mass and radius of the extended material (\(M_{e}\) and \(R_{e}\), respectively) with \(E_{e}\), the energy imparted by the SN shock to the extended material, depending on the total explosion energy, ejecta mass, and mass of the extended material. The analytic model from P21 is an improvement over previous similar models (e.g., Piro & Nakar, 2013; Piro, 2015) as it better matches the observations in the early shock-cooling emission and is tested against numerical models. Together with the P21 model, we use the analytic models from Arnett (1982, hereafter A82) for the plateau/secondary peak from \({}^{56}\)Ni decay, which provides an estimate of the total ejecta mass \(\left(M_{\rm ej}\right)\), \({}^{56}\)Ni mass (\(M_{\rm Ni}\)), and total explosion energy of the SN \(\left(K_{\rm ej}\right)\). The A82 model fits for a degenerate parameter that depends on \(M_{\rm ej}\) and \(K_{\rm ej}\) as \(M_{\rm ej}^{3/4}/K_{\rm ej}^{1/4}\). This degeneracy is broken by using the photospheric velocity of the Fe ii\(\lambda\)5169 line near maximum of the \({}^{56}\)Ni peak (Taddia et al., 2018). After interpolating between the spectral epochs, we use \(v_{\rm ph,Fe}=5200\pm 100\,{\rm km\,s^{-1}}\) near peak. We include an additional correction for \(\gamma\)-ray leakage at late times as shown by Wheeler et al. (2015, hereafter W15) with a multiplicative factor of \(1-e^{-(T_{0}/t)^{2}}\) where \(T_{0}\) is the characteristic time-scale for the \(\gamma\)-ray leakage, depending on \(M_{\rm ej}\) and \(K_{\rm ej}\) as \[T_{0}=\left(\frac{C_{\kappa_{\gamma}}M_{\rm ej}^{2}}{K_{\rm ej}}\right)^{1/2}, \tag{1}\] where \(C\) is a dimensionless structure constant dependent on the slope of the density profile (typically \(C\sim 0.05\)) and \(\kappa_{\gamma}\) is the opacity to \(\gamma\)-rays (fiducial value of \(\kappa_{\gamma}=0.03\,{\rm cm^{2}\,g^{-1}}\)). 
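To make the \(\gamma\)-ray leakage correction concrete, the sketch below evaluates Eq. (1) with the fiducial \(C\) and \(\kappa_{\gamma}\) quoted above and the best-fit bolometric \(M_{\rm ej}\) and \(K_{\rm ej}\) reported later in Table 4; the resulting \(T_{0}\) is a derived illustration rather than a value quoted in this work.

```python
import numpy as np

# Evaluate Eq. (1) with the fiducial constants quoted above (C ~ 0.05, kappa_gamma = 0.03 cm^2/g)
# and the best-fit bolometric M_ej and K_ej reported in Table 4.  The resulting T_0 is an
# illustration derived from those numbers, not a value quoted in the text.
M_SUN = 1.989e33                         # g
C, kappa_gamma = 0.05, 0.03              # dimensionless; cm^2 g^-1
M_ej = 0.57 * M_SUN                      # g
K_ej = 9.3e49                            # erg

T0_days = np.sqrt(C * kappa_gamma * M_ej**2 / K_ej) / 86400.0   # ~50-55 days
print(f"T0 ~ {T0_days:.0f} days")

def gamma_deposition_fraction(t_days):
    """Wheeler et al. (2015) correction factor applied to the 56Ni-decay luminosity at time t."""
    return 1.0 - np.exp(-(T0_days / t_days) ** 2)

print(gamma_deposition_fraction(30.0))   # ~0.95: most of the gamma-ray energy is still trapped at +30 d
```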
We adopt a chi-squared minimization approach to fit the bolometric and pseudo-bolometric light curves of SN 2021bxu and obtain the best-fit parameters. Nakar & Piro (2014) give a relation between the P21 parameters \(E_{e}\) and \(M_{e}\) along with a dependency on the A82 Figure 11: Spectral comparison of SN 2021bxu with SN 1993J. The spectra for SN 1993J are shown in black with the epochs labeled on the left. The spectra for SN 2021bxu are shown in red with the epochs labeled on the right. For both objects, the epochs are relative to the estimated time of explosion. The spectra at +5.35, +10.55, +21.5, and 27.5 days for SN 1993J are from Barbon et al. (1995), +15.5 days is from Matheson et al. (2000b), and +31.5 days is from Matheson et al. (2000a). Figure 12: Bolometric light curve of SN 2021bxu shown in black fit using a two-component model: shock cooling and \({}^{56}\)Ni decay including \(\gamma\)-ray leakage. The initial decline is fit with the shock interaction with extended envelope model from Piro et al. (2021) shown as the dotted-dashed line and \({}^{56}\)Ni decay comes from the model from Arnett (1982) following an additional correction for \(\gamma\)-ray leakage as shown by Wheeler et al. (2015) shown as the dashed line. The best-fit model is the solid red line and the gray lines show 100 randomly selected parameters from the Monte Carlo samples signifying the uncertainty on the best-fit. parameters \(M_{\rm ej}\) and \(K_{\rm ej}\), \[E_{e}\approx 2\times 10^{49}\left(\frac{K_{\rm ej}}{10^{51}\ {\rm erg}}\right) \left(\frac{M_{\rm ej}}{3\,M_{\odot}}\right)^{-0.7}\left(\frac{M_{e}}{0.01\,M_{ \odot}}\right)^{0.7}\,{\rm erg}. \tag{2}\] Therefore, we eliminate \(E_{e}\) from the fitting routine and later solve for it using Eq. 2. With only \(\sim\)5 data points in the initial decline, the parameters from P21 for the early light curve are difficult to constrain. However, the model fits well the later part where A82+W15 dominates and there are more points to fit. Changing the initial decline of the light curve barely affects the second component assumed from \({}^{56}\)Ni decay. The best-fit parameters are given by the maximum-likelihood values and the uncertainties on best-fit parameters are given by the 16\({}^{\rm th}\) and 84\({}^{\rm th}\) percentiles from fitting the model to Monte Carlo resamples of the light curve. The uncertainties for the parameters derived using the best-fit values are propagated appropriately from the uncertainties on the best-fit values. The Monte Carlo resampling ensures that statistical uncertainties from the photometry as well as systematic uncertainties from distance measurement are appropriately considered. When fitting the bolometric and pseudo-bolometric light curves, we find that the P21+A82+W15 model can successfully describe the data. Figure 12 shows the two components separately as well as the combined fit in red for the bolometric light curve. The surrounding gray region is randomly drawn Monte Carlo samples showing the uncertainty on the best-fit model. The best-fit parameters for the bolometric and pseudo-bolometric light curve fits are listed in Table 4. The parameters for the pseudo-bolometric light curve are provided because they are useful in direct comparison with literature values. ## 7 Discussion In this section we attempt to put our results in context relative to other SN explosions and models. 
The best-fit parameters for the bolometric light curve are \(M_{\rm Ni}=0.029^{+0.004}_{-0.005}\)\({\rm M}_{\odot}\), \(M_{\rm ej}=0.57^{+0.04}_{-0.03}\)\({\rm M}_{\odot}\), \(K_{\rm ej}=9.3^{+0.7}_{-0.6}\times 10^{49}\) erg, \(E_{e}=2.2^{+0.2}_{-0.2}\times 10^{49}\) erg, \(M_{e}=0.064^{+0.003}_{-0.003}\)\({\rm M}_{\odot}\), and \(R_{e}=1500^{+300}_{-300}\)\({\rm R}_{\odot}\) (listed in Table 4). SN 2021bxu is a Type IIb SN with low \(M_{\rm Ni}\), low luminosity, and low explosion energy. By comparison with known classes of SNe and models explaining the observed features of SN 2021bxu, we can infer the details of the explosion and the progenitor system. The fit to the P21 model shows that the extended material surrounding the progenitor of SN 2021bxu had a large radius (\(R_{e}\)) and low mass (\(M_{e}\)). We compare this to the best-fit masses and radii of the Ca-rich IIb sample from Das et al. (2022), who use the same model for the initial decline. Figure 13 shows SN 2021bxu along with the sample of Ca-rich IIb SNe. We see a clear trend of decreasing radius with increasing mass. The Ca-rich Ib SN 2021gno also falls along this trend with \(R_{e}=1100\)\({\rm R}_{\odot}\) and \(M_{e}=0.01\)\({\rm M}_{\odot}\). For a simple check, we show a linear best-fit and a negative correlation using the Kendall \(\tau\) test, which gives \(\tau=-0.6\) with a \(p\)-value of 0.01 (a minimal version of this check is sketched below). Modelling and possible physical origins of this correlation will be the subject of a future work. Theory suggests that a plateau can arise in the light curve of a SE-SN \(\sim\)1 day after shock-breakout when the cooling of the photosphere slows down, allowing the recombination of ejecta layers, primarily He (Dessart et al., 2011). Observed in simulations by Dessart et al. (2011), this plateau is found at \(\log(L_{\rm bol}/{\rm erg\,s}^{-1})\sim 41\) and lasts for \(\sim\)10 days until the SN either re-brightens for \({}^{56}\)Ni-rich SNe or fades away for \({}^{56}\)Ni-poor SNe. The plateau in SN 2021bxu is observed for a time-scale comparable to that of He-recombination; however, the timing of occurrence differs. The plateau caused by He-recombination occurs soon after explosion, whereas the plateau in SN 2021bxu's light curve is not apparent until \(\sim\)10 days after explosion. Moreover, the luminosity of SN 2021bxu at the plateau is \(\log(L_{\rm bol}/{\rm erg\,s}^{-1})\sim\) 41.8, almost an order of magnitude higher. This suggests that the observed plateau in SN 2021bxu is likely not due to He-recombination. SN 2021bxu shows photometric and spectroscopic similarities to SN 1993J but with lower total mass and explosion energy. 
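A minimal version of the Kendall \(\tau\) check on the \(M_{e}\)-\(R_{e}\) trend of Figure 13 is sketched here before returning to the comparison with SN 1993J; the input arrays are illustrative placeholders with a declining trend, not the measured values for the Das et al. (2022) sample.

```python
import numpy as np
from scipy.stats import kendalltau

# Minimal version of the Figure 13 check: Kendall tau between envelope mass and radius.
# The arrays are illustrative placeholders with a declining trend, NOT the measured
# (M_e, R_e) values of the Das et al. (2022) Ca-rich IIb sample; only the (0.064, 1500)
# point corresponds to SN 2021bxu itself.
M_e = np.array([0.01, 0.02, 0.05, 0.064, 0.10, 0.15, 0.30])     # M_sun
R_e = np.array([1100., 900., 700., 1500., 300., 150., 60.])     # R_sun

tau, p_value = kendalltau(M_e, R_e)
print(f"tau = {tau:.2f}, p = {p_value:.3f}")     # negative tau: R_e decreases as M_e increases

# A simple linear best fit, as over-plotted in Figure 13
slope, intercept = np.polyfit(M_e, R_e, 1)
```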
SN 1993J shows an initial decline due to the post-shock cooling through a thin H-rich envelope of extended material and the main peak from \({}^{56}\)Ni decay. \begin{table} \begin{tabular}{l c c} \hline \hline Parameter & Bolometric & Pseudo-bolometric \\ & & (\(4000-10000\) Å) \\ \hline \(M_{\rm Ni}\) [\({\rm M}_{\odot}\)] & \(0.029^{+0.004}_{-0.005}\) & \(0.016^{+0.002}_{-0.003}\) \\ \(M_{\rm ej}\) [\({\rm M}_{\odot}\)] & \(0.57^{+0.04}_{-0.03}\) & \(0.52^{+0.08}_{-0.04}\) \\ \(K_{\rm ej}\) [\(10^{49}\) erg] & \(9.3^{+0.7}_{-0.6}\) & \(8.5^{+1.4}_{-0.7}\) \\ \(E_{e}\) [\(10^{49}\) erg] & \(2.2^{+0.2}_{-0.2}\) & \(2.5^{+0.6}_{-0.3}\) \\ \(M_{e}\) [\({\rm M}_{\odot}\)] & \(0.064^{+0.003}_{-0.003}\) & \(0.081^{+0.013}_{-0.001}\) \\ \(R_{e}\) [\({\rm R}_{\odot}\)] & \(1500^{+300}_{-300}\) & \(320^{+50}_{-70}\) \\ \hline \end{tabular} \end{table} Table 4: Best-fit parameters after fitting the P21+A82+W15 model to the bolometric and pseudo-bolometric light curves Figure 13: Mass and radius of the extended material (\(M_{e}\) and \(R_{e}\), respectively) from fitting the initial decline of SN 2021bxu compared with the sample of Ca-rich SNe IIb (Das et al., 2022) and the Ca-rich Ib SN 2021gno (Ertini et al., 2023). The values for Ca-rich SNe IIb (orange squares) and SN 2021bxu (red circle) are obtained from fitting the P21 model and the values for SN 2021gno (orange triangle) are obtained using a hydrodynamic code from Bersten et al. (2011). We note a trend of decreasing radius with increasing mass, with a linear best-fit shown as the black line and the Kendall \(\tau\) correlation statistic given. Bersten et al. (2012) and Prentice et al. (2016) estimate \(M_{\rm Ni}\) in the range \(0.084-0.15\,\rm M_{\odot}\) and \(M_{\rm ej}=3.3\,\rm M_{\odot}\) from fitting the bolometric light curve of SN 1993J. This is \(\sim 3-5\) times more \({}^{56}\)Ni mass than in SN 2021bxu and \(\sim\)6 times more ejecta mass. Higher \(M_{\rm Ni}\) and \(M_{\rm ej}\) cause the main \({}^{56}\)Ni-powered light curve to be more luminous and broader, respectively, which is seen in the light curve of SN 1993J. The light curve of SN 1993J has been described by a binary model with both stars having a mass of \(\sim 15\,\rm M_{\odot}\) (Podsiadlowski et al., 1993; Nomoto et al., 1993). The authors conclude that SN 1993J had a G8-K0 yellow supergiant progenitor with a binary companion. The strong initial decline can be explained by the explosion of this progenitor that has experienced mass-loss due to winds, or, more likely, through mass transfer to a companion star. Given that SN 2021bxu is a similar object to SN 1993J, at least in the initial decline and a two-component light curve, SN 2021bxu may be conspiring to show a plateau instead of an evident second peak owing to the small amount of \({}^{56}\)Ni produced and the overall lower energies. For a direct comparison with other studies without dealing with the SED flux extrapolation problems, we use the \({}^{56}\)Ni mass and the ejecta mass derived from pseudo-bolometric light curves. Figure 14 shows the best-fit parameters for SN 2021bxu from the pseudo-bolometric light curve compared to a sample of SNe IIb, Ib, Ic from Prentice et al. (2016) and Prentice et al. (2019). In order to obtain the pseudo-bolometric light curve in the range \(4000-10000\,\rm\AA\), Prentice et al. (2016) and Prentice et al. (2019) make use of \(BVRI\)-bands and fit the A82 model to find the best-fit parameters, mainly \(M_{\rm Ni}\) and \(M_{\rm ej}\). 
SN 2021bxu shows the lowest values of the entire SE-SNe sample for \(M_{\rm Ni}\) and \(M_{\rm ej}\) as derived from the pseudo-bolometric light curves. ## 8 Summary and Conclusions SN 2021bxu is a low-luminosity, fast-evolving Type IIb SN, showing a large initial decline in brightness followed by a short plateau phase, with an average absolute magnitude during the plateau of \(M_{r}\sim-15.9\) mag. The pseudo-bolometric luminosity is also fainter compared to most other SE-SNe, with a peak of \(\log(L_{\rm pseudo}/{\rm erg\,s^{-1}})=42.0\) and a distinct plateau phase at \(\log(L_{\rm pseudo}/{\rm erg\,s^{-1}})\sim 41.6\). The overall light curve shape in \(gi\)-bands matches most closely that of Ca-rich IIb SNe, most of which show an initial decline and a second peak, and that of SNe IIb. The initial decline in the light curve of SN 2021bxu has a similar slope to the initial decline of SN 1993J, but SN 1993J shows a distinct second peak from \({}^{56}\)Ni at \(\log(L_{\rm pseudo}/{\rm erg\,s^{-1}})=42.23\) and has a slower evolution at late times, indicating a larger ejecta mass. With the presence of strong helium lines and weaker hydrogen lines, SN 2021bxu is a Type IIb SN (Section 5). We constrain the time of explosion using the He i\(\,\lambda\)5876 line velocity and the blackbody radius evolution and find it to be \(5.0\pm 0.4\) days before discovery. It evolves quickly to show absorption features from heavier metals like oxygen, calcium, silicon, iron, and neutron capture elements like barium and scandium, which get stronger over the 30-day spectral time-series. Figure 15: Comparison of SN 2021bxu with a sample of SNe IIb, Ib, Ic from Taddia et al. (2018) and Prentice et al. (2016), Ca-rich IIb SNe from Das et al. (2022), and US-SNe from De et al. (2018) and Yao et al. (2020). The \({}^{56}\)Ni mass (\(M_{\rm Ni}\)), ejecta mass (\(M_{\rm ej}\)), and kinetic energy (\(K_{\rm ej}\)) are derived using the full bolometric light curves for all SNe. The vertical dashed lines indicate the rough boundaries dividing ultra-stripped SNe (US-SNe), strongly-stripped SNe (SS-SNe), and stripped-envelope SNe (SE-SNe; Das et al. 2022). The values for SN 2021bxu are shown in red in each panel. We note that SN 2021bxu shows spectral similarities to Type IIb SN 1993J as well as to Ca-rich IIb SNe during photometric phases with most of the same features observed. SN 1993J has velocities higher by a factor of \(\sim\)2 compared to SN 2021bxu. We also note similarities to SN 2021gno in terms of light curve evolution and modelling. From a photometric and spectroscopic analysis, we conclude that SN 2021bxu is a fast-evolving SN, with a short distinct plateau phase not caused by H- or He-recombination and with some of the lowest observed line velocities compared to samples of Ca-rich IIb SNe and SE-SNe. Following the modelling of the bolometric and pseudo-bolometric light curves in Section 6, we see that the light curves of SN 2021bxu can be well modelled by a composite model including interaction of the shock with an extended envelope of material surrounding the progenitor and the normal radioactive decay of \({}^{56}\)Ni. We obtain the physical parameters for the explosion such as \(M_{\rm Ni}\), \(M_{\rm ej}\), \(K_{\rm ej}\), and \(T_{0}\), and the properties of the extended material \(E_{\rm e}\), \(M_{\rm e}\), and \(R_{e}\) from bolometric and pseudo-bolometric light curves (see Table 4). Ertini et al. 
(2023) performed hydrodynamic modelling of SN 2021gno and similarly showed that the initial cooling phase can be explained by extended circumstellar material composed mainly of He, maybe with traces of H, and the second peak can be explained by the radioactive decay of \({}^{56}\)Ni. We note that the \({}^{56}\)Ni mass and kinetic energy for SN 2021bxu are on the lower end of the distribution for Ca-rich SNe and SE-SNe. The ejecta mass falls within the range of Ca-rich SNe and SS-SNe. Overall, we determine that SN 2021bxu likely occurred from a lower-mass progenitor which had a large radius at the time of explosion and an extended envelope, having experienced mass-loss potentially to a companion star, similar either to the SN 1993J scenario or to the strongly-stripped Ca-rich SNe such as SN 2021gno. However, to fully understand and characterize explosions similar to SN 2021bxu and to better constrain the parameters of the initial decline in an effort to constrain the immediate surroundings of the progenitor, further high-precision multi-band observations of SNe in their infant stages are needed. POISE promises to deliver such a data set over the coming years. In addition to the early-time data, late-time data for such objects in the nebular phase would also prove beneficial in discerning whether they are Ca-rich transients and in pointing to new classes such as SS-SNe.

## Acknowledgements

We thank Federica Chiti for helpful discussion. D.D.D. and B.J.S. acknowledge support from NSF grant AST-1908952. C.A. and J.M.D. acknowledge support by NASA grant JWST-GO-02114.032-A and JWST-GO-02122.032-A. E.B. acknowledges support by NASA grant JWST-GO-02114.032-A. L.G. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovacion (MCIN), the Agencia Estatal de Investigacion (AEI) 10.13039/501100011033, and the European Social Fund (ESF) "Investing in your future" under the 2019 Ramon y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Cientificas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M. M.D.S. and the Aarhus supernova group acknowledge support from the Independent Research Fund Denmark (IRFD, grant number 8021-00170B) and the Villum Fonden (2002). NUTS2 is supported in part by the Instrument Center for Danish Astrophysics (IDA). J.P.A. acknowledges funding from ANID, Millennium Science Initiative, ICN12_009. M.G. is supported by the EU Horizon 2020 research and innovation programme under grant agreement No 101004719. T.E.M.B. acknowledges financial support from the Spanish Ministerio de Ciencia e Innovacion (MCIN), the Agencia Estatal de Investigacion (AEI) 10.13039/501100011033, and the European Union Next Generation EU/PRTR funds under the 2021 Juan de la Cierva program FJC2021-047124-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Cientificas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M. N.N. is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381) and by funding from the UK Space Agency.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of ePESSTO+ (the advanced Public ESO Spectroscopic Survey for Transient Objects): ePESSTO+ observations were obtained under ESO program IDs 106.216C and 108.220C (PI: Inserra). Based on observations obtained at the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This work was enabled by observations made from the Gemini North telescope, located within the Maunakea Science Reserve and adjacent to the summit of Maunakea. We are grateful for the privilege of observing the Universe from a place that is unique in both its astronomical quality and its cultural significance.

## Data Availability

The photometry presented in this paper is available in a machine-readable format from the online journal as supplementary material. A portion is shown in Tables 12, 13, and 14 for guidance regarding its form and content. The spectra presented in this paper are available via the WISeREP\({}^{10}\) archive (Yaron & Gal-Yam, 2012). Footnote 10: [https://www.wiserep.org/](https://www.wiserep.org/)
2304.07553
Evidence for a black hole spin--orbit misalignment in the X-ray binary Cyg X-1
Recently, the accretion geometry of the black-hole X-ray binary Cyg X-1 was probed with X-ray polarization. The position angle of the X-ray emitting flow was found to be aligned with the position angle of the radio jet in the plane of the sky. At the same time, the observed high polarization degree could be obtained only for a high inclination of the X-ray emitting flow, indicating a misalignment between the binary axis and the black hole spin. The jet, in turn, is believed to be directed by the spin axis, hence a similar misalignment is expected between the jet and binary axes. We test this hypothesis using very long (up to about 26 years) multi-band radio observations. We find a misalignment of $20^\circ$--$30^\circ$. However, contrary to earlier expectations, the jet and binary viewing angles are found to be similar, while the misalignment is seen between the position angles of the jet and the binary axis on the plane of the sky. Furthermore, the presence of the misalignment questions our understanding of the evolution of this binary system.
Andrzej A. Zdziarski, Alexandra Veledina, Michal Szanecki, David A. Green, Joe S. Bright, David R. A. Williams
2023-04-15T13:16:21Z
http://arxiv.org/abs/2304.07553v2
# Evidence for a black hole spin-orbit misalignment in the X-ray binary Cyg X-1 ###### Abstract Recently, the accretion geometry of the black-hole X-ray binary Cyg X-1 was probed with the X-ray polarization. The position angle of the X-ray emitting flow was found to be aligned with the position angle of the radio jet in the plane of the sky. At the same time, the observed high polarization degree could be obtained only for a high inclination of the X-ray emitting flow, indicating a misalignment between the binary axis and the black hole spin. The jet, in turn, is believed to be directed by the spin axis, hence similar misalignment is expected between the jet and binary axes. We test this hypothesis using very long (up to about 26 years) multi-band radio observations. We find the misalignment of \(20^{\circ}\)-\(30^{\circ}\). However, on the contrary to the earlier expectations, the jet and binary viewing angles are found to be similar, while the misalignment is seen between position angles of the jet and the binary axis on the plane of the sky. Furthermore, the presence of the misalignment questions our understanding of the evolution of this binary system. ## 1 Introduction The archetypical high-mass X-ray binary Cyg X-1, discovered as an X-ray source in 1964 (Bowyer et al., 1965), is probably the best studied microquasar to date. We have accurate determination of the binary parameters, including the orbital period of \(P=5.599829\) d (Brocksopp et al., 1999), and other parameters (given here as the median values with 68% uncertainties), including the binary inclination, \(i=27.5^{+0.8}_{-0.6}\), the mass of the black hole (BH), \(M_{\rm BH}=21.2\pm 2.2M_{\odot}\), the donor mass, \(M_{*}=40.6^{+7.7}_{-7.1}M_{\odot}\), and the distance to the source, \(D=2.22^{+0.18}_{-0.17}\) kpc (Miller-Jones et al., 2021, hereafter MJ21). Moreover, the spin parameter of the BH has been measured as \(a_{*}\gtrsim 0.5\)(Kawano et al., 2017) and \(\lesssim 1\)(MJ21). This BH spin could not be acquired during accretion given the short life time of the system, which implies it was acquired before the BH formation. Moreover, the low proper motion of Cyg X-1 with respect to its likely parent association Cyg OB3 of \(10.7\pm 2.7\) km s\({}^{-1}\)(Rao et al., 2020) indicates the BH was formed with a low natal kick. As then estimated by MJ21, the BH spin axis appears to be inclined by at most \(10^{\circ}\) from the axis of the binary. With the launch of the _Imaging X-ray Polarimetry Explorer_ (_IXPE_; Weisskopf et al., 2022), the issue of a possible misalignment was revived. The polarimetric measurements of Cyg X-1 by _IXPE_ implied that the X-rays are produced in a hot gas that is flattened in the direction orthogonal to the resolved relativistic jet, observed in the system (Krawczynski et al., 2022). This configuration corresponds either to the truncated disc geometry (Poutanen et al., 1997; Esin et al., 1997) or to the slab corona geometry (Haardt and Maraschi, 1991), whose axis is well aligned in the plane of the sky with the position angle of the jet. The observed high, \(\approx\)4%, polarization degree cannot be explained if the inclination of this hot inner region is equal to the orbital inclination; it can be achieved instead if the inclination is higher, \(\gtrsim 45^{\circ}\). 
This discrepancy may indicate a misalignment of the BH spin with respect to the binary axis, implying a geometry where the outer parts of the disc are aligned with the orbital axis and the inner accretion flow is aligned with the BH spin (Bardeen and Petterson, 1975). On the other hand, the position angle of the binary, if assumed equal to the optical polarization angle (Krawczynski et al., 2022), shows good agreement with the X-ray polarization angle and with the position angle of the jet. This indicates that the orbital axis and the BH spin coincide in the plane of the sky. The misalignment would thus be evident only along the line of sight direction. Since jets are launched along the spin axis of the BH (Blandford and Znajek, 1977; McKinney et al., 2013), a BH spin-orbit misalignment should be visible as a jet-orbit one. In the case of jets launched from discs (Blandford and Payne, 1982), this will be the case if the inner disc is aligned with the BH spin (Bardeen and Petterson, 1975). The arguments above suggest that the jet axis should be inclined at more than \(45^{\circ}\) with respect to the observer, i.e., misaligned from the orbital plane by \(\gtrsim 20^{\circ}\), but in the plane of the sky the jet direction should coincide with the orbital angular momentum vector. Using the long-term radio light curves of Cyg X-1, we show that this picture is reversed. Namely, we find here compelling evidence for a jet-orbit misalignment in Cyg X-1, which manifests itself on the plane of the sky, while the inclination angles of the binary orbit and the jet coincide within a few degrees.

## 2 The data

The radio jet in Cyg X-1 is relatively steady and compact in the hard and hard-intermediate X-ray spectral states (Done et al., 2007) of the source, and is much weaker during the soft states (e.g., Zdziarski et al., 2020), as is typical for accreting BH binaries (Fender et al., 2004). The radio emission is stable on average, but shows modulations at the orbital period. Study of the initial 20-month radio light curve of the Ryle Telescope (Jones, 1991) revealed strong periodic modulation of the emission at 15 GHz (Pooley et al., 1999) with a fractional semi-amplitude of \(\approx\)0.17. The study found tentative evidence for a lag of the minimum radio flux with respect to the superior conjunction (the phase of the orbit when the compact object is located behind the star) by \(\approx\)0.12 of the orbital period, and, using the Green Bank Interferometer data at 8.3 and 2.25 GHz, suggested this lag to increase with decreasing frequency. A promising interpretation for the lag is the misalignment of the jet axis from that of the binary (Malzac et al., 2009). The quality of the data available at that time, however, did not allow for detailed modelling. By now, the amount of the available 15 GHz data from the Ryle Telescope and its successor, the Arcminute Microkelvin Imager Large Array (AMI-LA) (Hickish et al., 2018), has increased many-fold. The unprecedentedly long duration of the resulting light curve enables detailed modelling of the orbital modulation, including the accurate determination of the lag. In our study, we use the radio light curves at 15 GHz from the Ryle Telescope and the AMI-LA (jointly covering MJD 50226-59575), and at 8.3 and 2.25 GHz from the Green Bank Interferometer (covering MJD 50409-51823), in the hard and hard-intermediate spectral states only. The hard and hard-intermediate state intervals are defined based on the X-ray hardness as in Zdziarski et al.
(2020) and are given in Table 1. The average fluxes in those states at 15, 8.3 and 2.25 GHz are \(\langle F_{\nu}\rangle=12.5\), 15.0 and 14.3 mJy, respectively. We fold and average the light curves over the ephemeris of \[t_{\rm sup}=t_{0}+Pm,\,t_{0}=50077.973,\,P=5.599829\,{\rm d}, \tag{1}\] where \(t_{\rm sup}\) is the time of a superior conjunction (the black hole furthest from the observer) in MJD, and \(m\) is an integer. This \(t_{0}\) is below the start times of our light curves, and it fully agrees with the previously given \(t_{0}\)(Gies et al., 2008). ## 3 The model We use the median values of \(M_{\rm BH}\), \(M_{*}\), \(D\), and \(i\) as given in Section 1. The donor radius is \(R_{*}\approx 22{\rm R}_{\odot}\) and its effective temperature is \(T_{*}\approx 3.1\times 10^{4}\) K (MJ21). The masses and the orbital period imply the semi-major axis of \(a\approx 53{\rm R}_{\odot}\). The modulation is due to free-free absorption of the radio emission by the stellar wind from the supergiant donor (Walborn, 1973), with the path toward the observer through the wind being orbital phase-dependent. The amplitudes of the modulation can be explained if the radio emission originates in the jet at heights comparable to the stellar separation (Szostek & Zdziarski, 2007; Zdziarski, 2012). The strongest absorption corresponds to the highest column density of the wind along the path to the observer. For a jet perpendicular to the plane of the sky, this would correspond to the superior conjunction. However, when the jet is significantly inclined and its approaching part lags behind the binary axis (as projected on the sky), the highest column density occurs at an orbital phase _after_ the superior conjunction. In this Letter, we show that the shapes of the orbital modulation can be well fitted if the jet is misaligned with respect to the binary axis. We specify the coordinate system and the geometry in Figure 1. For simplicity, we assume the orbit to be circular, since the eccentricity is only \(\approx\) 0.019 \(\pm\) 0.003 (MJ21). Movement of the Cyg X-1 binary components in the orbit is clockwise on the sky (MJ21). This implies that the orbital spin vector (along the \(+z\) axis) points away from us, and the inclination of that direction is \(i_{\rm orb}=180^{\circ}-i\), where \(i\) is the binary inclination measured using spectroscopic and photometric data (which are not sensitive to the orientation of the orbit). The estimates of its BH spin of \(a_{*}>0\) imply that rotation of the accretion flow is prograde. Thus, the direction of the BH spin vector, and consequently, of the jet spin, also points away from the observer, and along the counterjet. The standard convention that the superior conjunction corresponds to the orbital phase of \(\phi=0\) requires that the projection of the direction toward the observer onto the binary plane is in the \(-x\) direction. The inclination of the BH spin with respect to the binary axis is \(\theta_{\rm BH}\), which represents the misalignment of the BH spin vector (and the counterjet) with respect to that of the binary, as well as the misalignment of the jet with respect to the \(-z\) direction. The angle of the projection of the BH spin vector onto the binary plane with respect to the \(+x\)-axis is the azimuthal angle \(\phi_{\rm BH}\). 
Following the above, the unit vectors pointing towards the observer, from the stellar center to the compact object, and along the jet are, \[\mathbf{e}_{\rm obs}=(-\sin i_{\rm orb},0,\cos i_{\rm orb}),\quad\mathbf{e}_{\rm c}=(\cos\phi,\sin\phi,0),\] \[\mathbf{e}_{\rm BH}=(\sin\theta_{\rm BH}\cos\phi_{\rm BH},\sin\theta_{\rm BH}\sin\phi_{\rm BH},\cos\theta_{\rm BH}), \tag{2}\] respectively, where \(i_{\rm orb}=180^{\circ}-i\). Hereafter, we neglect the emission of the counterjet, since Stirling et al. (2001) gives the jet/counterjet ratio of \(\approx\)50. We make a simplifying assumption that the modulated emission at a given frequency originates, on average, at a single distance, \(h\), from the BH.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline Start & End & Start & End & Start & End & Start & End & Start & End \\ \hline 50085 & 50222 & 52853 & 53003 & 55895 & 55940 & 57105 & 57265 & 58631 & 58792 \\ 50308 & 51845 & 53025 & 53265 & 56035 & 56087 & 57332 & 57970 & 59378 & 59391 \\ 51858 & 52167 & 53292 & 53368 & 56722 & 56748 & 58112 & 58210 & 59420 & 59860 \\ 52205 & 52237 & 53385 & 55387 & 56760 & 56845 & 58387 & 58416 & & \\ 52545 & 52801 & 55674 & 55790 & 57012 & 57045 & 58482 & 58585 & & \\ \hline \end{tabular} \end{table} Table 1: The adopted intervals (in MJD) of the occurrences of the hard and intermediate states.

We calculate the optical depth starting at the emission point on the jet at the distance \(h\) from the BH along the direction toward the observer. The vector connecting the donor center with a point along the photon trajectory at the distance \(l\) from the emission point is \(a\mathbf{e}_{\rm c}-h\mathbf{e}_{\rm BH}+l\mathbf{e}_{\rm obs}\), and the vector connecting the center of the BH with that point at \(l\) is \(-h\mathbf{e}_{\rm BH}+l\mathbf{e}_{\rm obs}\). The lengths of these vectors give the distances of that point from the donor and BH centers, and their squares are given by \[r^{2}=(l\cos i_{\rm orb}-h\cos\theta_{\rm BH})^{2}+(l\sin i_{\rm orb}-a\cos\phi+h\sin\theta_{\rm BH}\cos\phi_{\rm BH})^{2}+(h\sin\theta_{\rm BH}\sin\phi_{\rm BH}-a\sin\phi)^{2}, \tag{3}\] \[s^{2}=h^{2}+l^{2}-2hl\cos i_{\rm orb}\cos\theta_{\rm BH}+2hl\sin i_{\rm orb}\sin\theta_{\rm BH}\cos\phi_{\rm BH}, \tag{4}\] respectively. The cosine of the viewing angle of the BH spin vector, \(i_{\rm BH}\), is given by \(\mathbf{e}_{\rm BH}\cdot\mathbf{e}_{\rm obs}\), and that of the jet, by \(-\mathbf{e}_{\rm BH}\cdot\mathbf{e}_{\rm obs}\). The viewing angle of the jet and of the BH spin are then \[i_{\rm jet}=\arccos(-\cos i_{\rm orb}\cos\theta_{\rm BH}+\cos\phi_{\rm BH}\sin i_{\rm orb}\sin\theta_{\rm BH}),\quad i_{\rm BH}=180^{\circ}-i_{\rm jet}, \tag{5}\] respectively. We calculate the expected position angle of the projection of the orbital axis on the sky, \(\lambda_{\rm orb}\), with respect to the projection of the BH spin, \(\lambda_{\rm BH}\). The difference between these two angles (Poutanen et al., 2022) for clockwise rotation on the sky is \[\Delta\lambda\equiv\lambda_{\rm BH}-\lambda_{\rm orb}=\arccos\frac{\cos\theta_{\rm BH}-\cos i_{\rm BH}\cos i_{\rm orb}}{\sin i_{\rm BH}\sin i_{\rm orb}}=\arccos\frac{\cos i_{\rm orb}\cos\phi_{\rm BH}\sin\theta_{\rm BH}+\sin i_{\rm orb}\cos\theta_{\rm BH}}{\sqrt{1-(\cos i_{\rm orb}\cos\theta_{\rm BH}-\sin i_{\rm orb}\cos\phi_{\rm BH}\sin\theta_{\rm BH})^{2}}}, \tag{6}\] and the observed jet position angle is \(\lambda_{\rm jet}=\lambda_{\rm BH}\pm 180^{\circ}\). Thus, \(\lambda_{\rm orb}=\lambda_{\rm jet}-\Delta\lambda\pm 180^{\circ}\).
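As a quick numerical check of Equations (5) and (6), \(i_{\rm jet}\) and \(\Delta\lambda\) can be evaluated directly from \(i\), \(\theta_{\rm BH}\), and \(\phi_{\rm BH}\). The short Python sketch below is only an illustration (it is not part of the original analysis); the input values are the approximate median fit parameters reported later in Table 2, and it returns \(i_{\rm jet}\approx 30^{\circ}\) and \(\Delta\lambda\approx 46^{\circ}\), close to the derived quantities quoted there.

```python
import math

def jet_angles(i_deg, theta_bh_deg, phi_bh_deg):
    """Jet viewing angle and position-angle offset, Eqs. (5)-(6).

    i_deg is the binary inclination from photometry/spectroscopy; the orbital
    spin vector points away from us, so i_orb = 180 deg - i.
    """
    i_orb = math.radians(180.0 - i_deg)
    th = math.radians(theta_bh_deg)
    ph = math.radians(phi_bh_deg)

    # Eq. (5)
    cos_ijet = -math.cos(i_orb) * math.cos(th) + math.cos(ph) * math.sin(i_orb) * math.sin(th)
    i_jet = math.acos(cos_ijet)
    i_bh = math.pi - i_jet

    # Eq. (6): difference of the position angles on the sky
    cos_dlam = (math.cos(th) - math.cos(i_bh) * math.cos(i_orb)) / (math.sin(i_bh) * math.sin(i_orb))
    dlam = math.acos(cos_dlam)
    return math.degrees(i_jet), math.degrees(dlam)

# Approximate median fit values (b = 0.5): theta_BH ~ 22 deg, phi_BH ~ 76 deg
print(jet_angles(27.5, 22.0, 76.0))  # ~ (30 deg, 46 deg)
```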
Next, we calculate the free-free absorption from a point along the jet at the distance \(h\) from the BH center toward the observer, i.e., along the line denoted \(l\) in Figure 1. We assume an isotropic stellar wind with the standard velocity profile (Lamers et al., 1987), \[v(r)\simeq v_{\infty}\left(1-\frac{R_{*}}{r}\right)^{\beta}, \tag{7}\] where \(v_{\infty}\) is the terminal wind velocity and the exponent \(\beta\) determines the acceleration rate. The electron density, \(n\), follows then from the continuity equation at a given mass loss rate, \(\dot{M}\). For simplicity, the presence of ions heavier than hydrogen is neglected. We use \(v_{\infty}=1.6\times 10^{8}\,\)cm s\({}^{-1}\), \(\beta=1\)(Gies & Bolton, 1986), and \(\dot{M}=-2.6\times 10^{-6}\rm M_{\odot}\,yr^{-1}\) in the hard spectral state (Gies et al., 2003). However, given the likely decrease of the wind density in the polar region (Gies et al., 2008; crossed by the line of sight of the radio photons), we scale that \(\dot{M}\) by a factor \(f\leq 1\). The free-free absorption coefficient is approximately (Rybicki & Lightman, 1979), \[\alpha_{\rm ff}\approx 0.12\bigg{(}\frac{T}{1\,{\rm K}}\bigg{)}^{-3/2}\bigg{(} \frac{n}{1\,{\rm cm}^{-3}}\bigg{)}^{2}\Big{(}\frac{\nu}{1\,{\rm GHz}}\bigg{)} ^{-2}{\rm cm}^{-1}. \tag{8}\] The phase-dependent optical depth to free-free absorption is \[\tau(\phi)=\tau_{0}\left(\frac{\nu}{15\,{\rm GHz}}\right)^{-2}\times\] \[\int_{0}^{\infty}\bigg{[}\frac{r(l)}{a}\bigg{]}^{-4}\left[1-\frac {R_{*}}{r(l)}\right]^{-2\beta}\!\!\bigg{\{}\frac{T[r(l),s(l)]}{T_{0}}\bigg{\}} ^{-3/2}\!\frac{{\rm d}l}{a}, \tag{9}\] where \(\tau_{0}\) is a reference optical depth defined at 15 GHz, the density at \(r=a\) under the assumption of \(v=v_{\infty}\), the distance \(a\), and at a reference temperature, \(T_{0}\), \[\tau_{0}\approx 26.3\left(\frac{-f\dot{M}}{2.6\times 10^{-6}\, \rm M_{\odot}\,yr^{-1}}\right)^{2}\left(\frac{M_{*}+M_{\rm BH}}{62\rm M_{\odot}} \right)^{-1}\times\] \[\left(\frac{v_{\infty}}{1.6\times 10^{8}\,{\rm cm}\;{\rm s}^{-1}} \right)^{-2}\left(\frac{T_{0}}{10^{6}\,{\rm K}}\right)^{-3/2}. \tag{10}\] Figure 1: The geometry of the binary and the jets. The axes \(x\) and \(y\) are in the binary plane, and \(+z\) gives the direction along the binary vector, \(\mathbf{e}_{\rm orb}\) (away from the observer, given the observed clockwise rotation). The observer is at an angle, \(i_{\rm orb}\), with respect to \(\mathbf{e}_{\rm orb}\), and \(\phi\) is the orbital phase; \(\phi=0\) and \(\pi\) correspond to the superior and inferior conjunction, respectively. The binary rotation follows the increasing \(\phi\). The shown configuration is close to the latter. Then, \(\theta_{\rm BH}\) is the inclination of the BH spin vector, \(\mathbf{e}_{\rm BH}\), and the counterjet with respect to \(\mathbf{e}_{\rm orb}\), \(\phi_{\rm BH}\) is its azimuthal angle, and \(h\) is the distance of the radio source from the BH center. The counterjet emission is neglected in our model. The distance from the radio source measured along the direction toward the observer is \(l\). The distances of the point at \(l\) measured from the centers of the BH and the donor are \(s\) (not shown for clarity) and \(r\), respectively. The observed flux is \[F(\phi)=F_{\rm intr}\exp[-\tau(\phi)], \tag{11}\] where \(F_{\rm intr}\) is the flux before the absorption. 
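To make the model concrete, the sketch below evaluates the orbital modulation implied by Equations (3), (7), (9), and (11) in units of the separation \(a\). It is a simplified illustration rather than the calculation used in the paper: the wind temperature is held fixed at \(T_{0}\) (so the \((T/T_{0})^{-3/2}\) factor is unity), whereas the actual fits use \(T(r,s)\) from the energy-balance solution described next, and the parameter values in the example call are merely indicative.

```python
import numpy as np

def tau_phase(phi, h_over_a, i_deg=27.5, theta_bh_deg=22.0, phi_bh_deg=76.0,
              tau0=26.3, nu_ghz=15.0, rstar_over_a=22.0 / 53.0, beta=1.0):
    """Free-free optical depth toward the observer, Eq. (9), assuming T = T_0."""
    i_orb = np.radians(180.0 - i_deg)
    th, ph = np.radians(theta_bh_deg), np.radians(phi_bh_deg)
    l = np.linspace(1e-3, 60.0, 5000)              # path length l/a along e_obs
    # Eq. (3): squared distance from the donor centre, in units of a
    r2 = ((l * np.cos(i_orb) - h_over_a * np.cos(th)) ** 2
          + (l * np.sin(i_orb) - np.cos(phi) + h_over_a * np.sin(th) * np.cos(ph)) ** 2
          + (h_over_a * np.sin(th) * np.sin(ph) - np.sin(phi)) ** 2)
    r = np.sqrt(r2)
    integrand = r ** -4 * (1.0 - rstar_over_a / r) ** (-2.0 * beta)
    return tau0 * (nu_ghz / 15.0) ** -2 * np.trapz(integrand, l)

# Orbital modulation of the observed flux, Eq. (11): F(phi) = F_intr exp[-tau(phi)]
phases = np.linspace(0.0, 1.0, 100)
flux = [np.exp(-tau_phase(2.0 * np.pi * p, h_over_a=2.4)) for p in phases]
```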
We calculate \(\tau(\phi)\) numerically using \(T(r,s)\) from the solution of the energy balance equation including Compton and photoionization heating and Compton, recombination and line cooling (Zdziarski, 2012). The ionizing X-ray luminosity is estimated as \(L_{\rm ion}\approx 2\times 10^{37}\) erg s\({}^{-1}\). The radiative heating and cooling of the wind at distance \(l\) from the emission point is by both the emission of the donor and by the X-rays. However, we do not include the adiabatic cooling. Such cooling would lead to a strong temperature decrease (Zdziarski, 2012) at \(r\gtrsim a\), and the associated strong increase of the absorption coefficient. Our neglect of that cooling is motivated by the wind expansion being, to a good approximation, in vacuum, with the wind density at radii of interest (e.g., \(n\sim 10^{10}\) cm\({}^{-3}\) at \(r=a\)) orders of magnitude higher than the density of the interstellar medium, which is also swept away by the wind. Thus, the wind does not perform a \(p{\rm d}V\) work on the surrounding medium. The expansion can lead to particle distribution to be anisotropic and different from a Maxwellian, which minor effect we neglect. We consider then the spatial distribution of the radio emission along the jet. Resolved radio maps (Stirling et al., 2001; Rushton et al., 2010) imply that a fraction \(\approx\)0.3-0.5 of the 8 GHz emission is emitted at distances \(\gtrsim\)\(2\times 10^{14}\) cm, which is \(\gtrsim 50a\). A similar constraint follows from the measured long radio lag with respect to X-rays (Tetarenko et al., 2019). However, if most of the radio flux originated from such distances, orbital modulation would have been negligible. On the other hand, we can calculate the photospheric distance from jet models. The hard-state 2-220 GHz spectrum of Cyg X-1 (Fender et al., 2000) is approximately flat with \(\alpha\approx 0\) (where the flux density \(F_{\nu}\propto\nu^{\alpha}\)). The simplest, and widely adopted, model of such a spectrum is partially self-absorbed synchrotron emission with the distributions of both nonthermal electrons and the magnetic energy flux maintained along the jet (Blandford & Konigl, 1979). The location of the bulk of the emission at a given frequency in this model approximately corresponds to unit optical depth to synchrotron self-absorption. For that, we use a previous formulation (Zdziarski et al., 2022). For the parameters of Cyg X-1, that distance is given by \[\frac{h_{\nu}}{a}\approx 2.8\frac{15{\rm GHz}}{\nu}\bigg{(}\frac{\sin 27 \fdg 5}{\sin i_{\rm jet}}\bigg{)}^{\frac{5+p}{13+2p}}\bigg{(}\frac{\langle F_{ \nu}\rangle}{10{\rm mJy}}\bigg{)}^{\frac{6+p}{13+2p}}\bigg{(}\frac{1}{ \Theta}\bigg{)}^{\frac{7+p}{13+2p}}, \tag{12}\] where \(2.8a\approx 1.0\times 10^{13}\)cm, \(\Theta\) is the jet opening angle, which has been estimated as \(\approx\)\(0\fdg 4\)-\(1\fdg 8\)(Tetarenko et al., 2019), and \(p\) is the steady-state power-law index of the relativistic electrons, which is consistent with \(\approx 2.5\)-3.5 (Zdziarski et al., 2014). The dependencies of \(h_{\nu}\) on the ratio of the gas-to-magnetic energy densities and on the minimum and maximum energies of the relativistic electrons are very weak. The range of values implied by Equation (12), \(h_{\nu}\approx(1.6\)-\(5.4)a\), is \(\ll 50a\), and consistent with the observed strong orbital modulations. The above considerations imply a relatively complex radio emission profile, which we approximate as two separate regions of the radio emission. 
Namely, we split the folded and averaged radio light curves, \(F_{\nu}(\phi)\), into the modulated, \(F_{\nu}^{\rm mod}\), and unmodulated parts, \[F_{\nu}(\phi)=F_{\nu}^{\rm mod}(\phi)+b\langle F_{\nu}\rangle,\quad\Delta F_{\nu}^{\rm mod}(\phi)=\Delta F_{\nu}(\phi),\] \[\frac{\Delta F_{\nu}^{\rm mod}(\phi)}{F_{\nu}^{\rm mod}(\phi)}\approx\frac{\Delta F_{\nu}(\phi)}{(1-b)F_{\nu}(\phi)}, \tag{13}\] where \(\Delta F_{\nu}(\phi)\) and \(\Delta F_{\nu}^{\rm mod}(\phi)\) are the uncertainties of \(F_{\nu}(\phi)\) and \(F_{\nu}^{\rm mod}(\phi)\), respectively, and \(b\) is the unmodulated fraction. We use \(b=0.5\) as a likely value. The unmodulated part accounts for the remote emission. However, we also perform the calculations at \(b=0\). We find the two sets of the fitted parameters are very similar (see Table 2 below), which shows our results are not sensitive to the assumed value of \(b\).

## 4 Results

We fit our model to the three folded light curves simultaneously. In order to estimate the uncertainties of the fit, we use both the method based on \(\Delta\chi^{2}\), where \(\chi^{2}\) is the fit statistic (Lampton et al., 1976), and the Markov Chain Monte Carlo (MCMC) method\({}^{1}\) (Foreman-Mackey et al., 2013), both as implemented in xspec (Arnaud, 1996). In the latter, we assume wide normal priors centered on the best-fit parameters with the widths estimated from the linear error estimates. Both methods give very similar results and we present only those based on the MCMC. The fit results are given in Table 2, and the folded light curves are shown in Figure 2. Figure 3 shows the parameter distributions and correlations for the case of \(b=0.5\). The statistical quality of the fits is very good, with the reduced \(\chi^{2}\approx 1\). Table 2 shows that the main difference between the results obtained for \(b=0\) and 0.5 is the larger ratios of the intrinsic to the observed fluxes for the modulated component at \(b=0.5\). We find a significant BH spin/orbit misalignment of \(16^{\circ}\)-\(33^{\circ}\), and the azimuthal angle of \(68^{\circ}\)-\(92^{\circ}\). These two angles account for the lags seen in the orbital modulations. The emission heights are determined primarily by the modulation amplitudes, and the misalignment angle results jointly from the lags and the heights. The azimuthal angle is close to \(90^{\circ}\), i.e., the jet is bent close to the plane of the sky. The attenuation is very modest at each frequency, compatible with the average radio spectrum being approximately a single power law (Fender et al., 2000).

Footnote 1: Sanders 2012; [https://github.com/jeremysanders/xspec_emccee](https://github.com/jeremysanders/xspec_emccee)

The presence of a misalignment has a very high statistical significance. When we fix the misalignment angle \(\theta_{\rm BH}=0\) (as well as \(\phi_{\rm BH}\) at any value, which has then no influence on the fit), we obtain a very high value of \(\chi^{2}\)/d.o.f. of 402/37 compared to 37/35 for the model with a misalignment. The F-test probability of the fit improvement being by chance (Lampton et al., 1976) when allowing for a misalignment is \(\approx 7\times 10^{-19}\).
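The quoted probability can be reproduced with a standard F-test for additional model parameters. The snippet below is only a cross-check (the paper's fits are done in xspec); it compares \(\chi^{2}=402\) for 37 d.o.f. (no misalignment) with \(\chi^{2}=37\) for 35 d.o.f. (misalignment allowed).

```python
from scipy.stats import f

chi2_0, dof_0 = 402.0, 37   # theta_BH fixed at 0 (no misalignment)
chi2_1, dof_1 = 37.0, 35    # theta_BH and phi_BH free

# F-statistic for the chi^2 improvement per additional free parameter
F = ((chi2_0 - chi2_1) / (dof_0 - dof_1)) / (chi2_1 / dof_1)
p_chance = f.sf(F, dof_0 - dof_1, dof_1)
print(F, p_chance)   # p ~ 7e-19, consistent with the value quoted above
```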
With this model, we also measure the phase lags, i.e., the phases of the minima of the phase-folded fluxes. We keep the assumption of \(\theta_{\rm BH}=0\) but introduce phenomenological phase lags, \(\Delta\phi\), with respect to the superior conjunction to the light curves. The obtained values are given in Table 3 for \(b=0\) and 0.5; the values are almost identical for the two. The hypothesis of no lags has the same statistical significance as that of the lack of a misalignment.

Table 2 gives the jet viewing angle, \(i_{\rm jet}\) (Equation 5). We see that, due to the fitted relative jet orientation with respect to the binary axis and the observer, it is similar to the binary inclination, \(i\). On the other hand, we find a relatively large difference between the BH position angle on the sky (equal to that of the jet \(\pm 180^{\circ}\), Equation 6) and the implied position angle, \(\lambda_{\rm orb}\), of the binary axis, \(\Delta\lambda\approx 32^{\circ}\)-\(66^{\circ}\). The values of \(i_{\rm jet}\) and \(\Delta\lambda\) depend on all of \(i_{\rm orb}\), \(\theta_{\rm BH}\) and \(\phi_{\rm BH}\). Figure 4 shows the image of the jet in Cyg X-1 from a 1998 VLBA/VLA observation (Stirling et al., 2001). The position angle of an inner part of the jet is \(\lambda_{\rm jet}\approx-17^{\circ}\) (we note it changed to \(-26^{\circ}\) in 2016 observations; MJ21). We show here the orbital position angle, \(\lambda_{\rm orb}\). Due to the obtained large \(\Delta\lambda\) (Table 2), \(\lambda_{\rm orb}\approx 88^{\circ}\)-\(131^{\circ}\) is very different from the jet position angle.

\begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \(b\) & \(h_{15}/a\) & \(f\) & \(i(^{\circ})\) & \(\theta_{\rm BH}(^{\circ})\) & \(\phi_{\rm BH}(^{\circ})\) & \(\frac{\langle F_{\rm intr,15}\rangle}{\langle F_{15}\rangle}\) & \(\frac{h_{8}}{h_{15}}\) & \(\frac{\langle F_{\rm intr,8}\rangle}{\langle F_{8}\rangle}\) & \(\frac{h_{2}}{h_{15}}\) & \(\frac{\langle F_{\rm intr,2}\rangle}{\langle F_{2}\rangle}\) & \(\chi^{2}/{\rm d.o.f.}\) & \(i_{\rm jet}(^{\circ})\) & \(\Delta\lambda(^{\circ})\) \\ \hline 0f & \(2.9^{+1.4}_{-1.8}\) & \(0.8^{+0.2}_{-0.5}\) & 27.5f & \(23^{+13}_{-8}\) & \(76^{+16}_{-8}\) & \(1.24^{+0.11}_{-0.06}\) & \(1.7^{+0.5}_{-0.2}\) & \(1.15^{+0.09}_{-0.07}\) & \(9.3^{+9.5}_{-5.2}\) & \(1.02^{+0.12}_{-0.02}\) & 36/35 & \(31^{+6}_{-2}\) & \(47^{+24}_{-17}\) \\ **0.5f** & \(2.4^{+1.1}_{-1.5}\) & \(0.8^{+0.2}_{-0.4}\) & 27.5f & \(22^{+11}_{-6}\) & \(76^{+16}_{-8}\) & \(1.49^{+0.15}_{-0.12}\) & \(1.7^{+0.6}_{-0.2}\) & \(1.29^{+0.17}_{-0.14}\) & \(9.1^{+9.6}_{-5.0}\) & \(1.03^{+0.21}_{-0.03}\) & 37/35 & \(31^{+4}_{-3}\) & \(46^{+20}_{-14}\) \\ \hline \end{tabular} _Notes:_ The table gives the median values and their 90% confidence ranges, while the \(\chi^{2}\) values are given for the best fits. Fixed parameters are marked by 'f'. The jet viewing angle, \(i_{\rm jet}\), and the difference in the position angles on the sky, \(\Delta\lambda\), are derived quantities rather than free parameters. The \(b=0.5\) case represents our final results, for which the values of \(\langle F_{\rm intr,\nu}\rangle/\langle F_{\nu}\rangle\) refer to the modulated component only. \end{table} Table 2: Fit results based on the MCMC method.

Figure 2: The observed orbital modulations fitted by our model (solid curves). The left (a, b, c) and right (d, e, f) panels assume that 100% (\(b=0\)) and 50% (\(b=0.5\)) of the total flux is modulated, respectively. (a) 15 GHz, (b) 8.3 GHz, (c) 2.25 GHz, (d) 15 GHz, (e) 8.3 GHz, (f) 2.25 GHz. For clarity, two cycles are shown.

## 5 Discussion

Our model fits the data very well and gives parameters in full agreement with the standard jet model (Blandford & Konigl, 1979).
The fitted height of the 15 GHz emission of \(2.4^{+1.1}_{-1.5}a\) fully agrees with the estimate of Equation (12), as well as with more detailed calculations (Zdziarski et al., 2014). Then, the fitted location of the 8.3 GHz emission agrees (and that of 2.25 GHz is consistent) with the standard scaling (Blandford & Konigl, 1979) of \(h\propto\nu^{-1}\). The misalignment of the BH spin axis with respect to the binary one implies that the spin axis will precess. We use a post-Newtonian estimate (Barker & O'Connell, 1975; Apostolatos et al., 1994), which, together with the Kepler law, gives the de Sitter precession period of \[P_{\rm prec}=\frac{c^{2}(M_{*}+M_{\rm BH})^{4/3}P^{5/3}}{(2\pi G)^{2/3}(2+3M_{*}/2M_{\rm BH})M_{*}M_{\rm BH}}, \tag{14}\] where \(G\) is the gravitational constant. For the best-fit parameters of Cyg X-1, \(P_{\rm prec}\approx 5.6\) kyr. While this is a negligible effect for observations of the jet, it has important implications for the origin of the interstellar shell apparently powered by the jet (Gallo et al., 2005). In that scenario, the jet lifetime is estimated between 17 and 63 kyr (Russell et al., 2007), i.e., from a few to \(\sim\!10P_{\rm prec}\). Given our calculated misalignment angle, the jet would affect a larger structure than that observed. The origin of that structure from the jet of Cyg X-1 appears, however, not certain. No analogous structure due to the counterjet has been confirmed (Russell et al., 2007), and there is still no definite model of the shell (Sell et al., 2015).

Figure 3: The Markov-chain Monte Carlo results for the case of \(b=0.5\). The panels show the histograms of the one-dimensional posterior distributions for the model parameters and the two-parameter correlations. The median results for fitted quantities are shown by the middle vertical dashed lines in the distribution panels. The surrounding vertical dashed lines correspond to the 90% uncertainty. The parameters obtained are given above the posterior distributions.
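Equation (14) is easy to verify numerically. The sketch below is an independent check (not from the paper) using the median masses and orbital period quoted in the Introduction; it returns \(P_{\rm prec}\approx 5.6\) kyr, matching the value above.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m s^-1
M_sun = 1.989e30     # kg
yr = 3.156e7         # s

def p_prec_years(m_star, m_bh, p_orb_days):
    """De Sitter precession period of the BH spin, Eq. (14), in years."""
    m1, m2 = m_star * M_sun, m_bh * M_sun
    p = p_orb_days * 86400.0
    num = c ** 2 * (m1 + m2) ** (4.0 / 3.0) * p ** (5.0 / 3.0)
    den = (2.0 * math.pi * G) ** (2.0 / 3.0) * (2.0 + 1.5 * m1 / m2) * m1 * m2
    return num / den / yr

print(p_prec_years(40.6, 21.2, 5.599829))   # ~ 5.6e3 yr
```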
We find the jet viewing angle, \(i_{\rm jet}\), is similar to the binary inclination, \(i\). This finding does not support the interpretation of the system misalignment along the line of sight, which was put forward as an explanation of the high X-ray polarization (Krawczynski et al., 2022). Our results imply that if the inner flow is perpendicular to the BH spin (Bardeen & Petterson, 1975), the inclination of that flow is similar to that of the binary, in spite of the jet misalignment. Thus, the strong X-ray polarization has to have another origin, likely coronal outflows (Poutanen et al., 2023). Furthermore, our modelling implies that the jet should be inclined with respect to the binary axis in the plane of the sky; the angle between these two axes is \(\Delta\lambda\approx 32^{\circ}\)-\(66^{\circ}\). This conclusion is surprising in light of the earlier suggestion that the orbital axis, as probed by the position angle of the optical polarization, is well aligned with the position angle of the X-ray polarization and the axis of the radio jet (Krawczynski et al., 2022; Kravtsov et al., 2023). Our findings suggest that the optical polarization should not be related to the binary axis, as previously thought (Kemp et al., 1978, 1979). Instead, it may arise from scattering in a cocoon surrounding the jet, formed by the jet interaction with the wind of the companion star, or with the interstellar medium (Bicknell et al., 1997). In this case, the mean polarization angle would be aligned with the jet. Further thorough analysis of the optical polarization data is needed to verify this suggestion. On the other hand, we have checked that our implied astrometric solution (with the jet misalignment) is approximately consistent with the radio orbital displacements (MJ21).

The presence of the binary/BH misalignment is in conflict with our current evolutionary scenario for Cyg X-1 (MJ21). One possible explanation of this discrepancy is that Cyg X-1 is not a member of the Cyg OB3 association. While the current determinations of their distances, \(2.22^{+0.18}_{-0.17}\) kpc and \(1.92\pm 0.31\) kpc (Rao et al., 2020), respectively, are compatible with each other, Cyg X-1 may still not be a member. Projected on the sky, Cyg X-1 is on the edge of the OB3 association, and was considered a field star in a previous clustering analysis (Mel'Nik & Efremov, 1995). Alternatively, Cyg X-1 may still be a member of the OB3 association, but the maximum asymmetric natal kick could be higher than the assumed (MJ21) 10-20 km s\({}^{-1}\). In fact, a recent study (Tauris, 2022) has shown that the observed distribution of inspiral spins of merging BHs, showing a strong tail of negative values, is in clear tension with the current theoretical models of binary evolution and supernovae. He showed that the discrepancy can be resolved if some BHs have their spins tossed in a random direction during their formation, and proposed some tentative mechanisms for the tossing. The misalignment in Cyg X-1 can be due to such a process.

We then consider alternative interpretations of the phase lags. First, the wind accretion in Cyg X-1 is focused toward the BH with the donor almost filling the Roche lobe (MJ21). The wind crossing the L1 point could then be asymmetric, but the maximal absorption is expected at phases before the superior conjunction (due to the Coriolis force; Frank et al., 1987), contrary to what we see. Furthermore, the X-rays from the system, which originate close to the BH, are orbitally modulated as well, due to bound-free absorption and Compton scattering by the same wind. Their flux minima are observed without any measurable lag (Wen et al., 1999; Pooley et al., 1999; Lachowicz et al., 2006) with respect to the spectroscopically-determined superior conjunction, which shows that any effects of the wind asymmetry on the absorption are negligible. The impact of the wind on the jet can also bend it in the direction along the wind (Bosch-Ramon & Barkov, 2016), i.e., with the jet bent outside and precessing at the binary period, which is neither observed in Cyg X-1 nor would it lead to the absorption lags.

Figure 4: The image of the jet obtained with the VLBA/VLA observation (Stirling et al., 2001) on 1998 August 10 at 8.4 GHz. We show the projected position vectors of the orbit and the spin of the BH.

Second, the jet trajectory could be helical due to the circular motion of the BH with respect to the center of mass of the binary (Szostek & Zdziarski, 2007; Bosch-Ramon & Barkov, 2016). This could explain the observed radio phase lags, but only if the jet velocity within the distances of \(\lesssim 30a\) were nonrelativistic (Szostek & Zdziarski, 2007). This disagrees with jet velocity close to the speed of light inferred from the apparent absence of an observed counterjet (according to Stirling et al., 2001) and the lag of the radio emission with respect to the X-rays (Tetarenko et al., 2019).
However, this might still be possible if the jet consists of a slow and non-radiating sheath collimating a fast and radiating spine (Ferreira et al., 2006), as has been proposed (Szostek & Zdziarski, 2007). Our finding of the jet-orbit misalignment in Cyg X-1 makes it a member of a small group of X-ray binaries with unambiguous observational evidence for misalignments. The others are GRO J1655-40 (Hjellming & Rupen, 1995; Beer & Podsiadlowski, 2002), Cyg X-3 (Zdziarski et al., 2018) and MAXI J1820+070 (Poutanen et al., 2022).

## 6 Conclusions

Using an unprecedentedly long light curve of Cyg X-1 at 15 GHz together with light curves at 8.3 and 2.25 GHz radio frequencies, we obtain strong evidence for a misalignment of its jet with respect to the orbital axis. The misalignment is inferred from the presence of significant delays of the minima of the radio light curves with respect to the times of superior conjunction. The observed strong orbital modulation of the radio flux is due to absorption of the jet emission by the stellar wind of the donor, which depends on the orbital phase. If the jet were aligned with the orbital axis, the maximal absorption - and the minimum of the radio flux - would be expected at the phase of the superior conjunction. The observed delays, instead, unambiguously show that the sources of the radio emission are displaced from the symmetry plane of the binary (defined by the semi-major axis and the normal to the binary plane). Thus, the jet, at the locations of the radio emission, is inclined with respect to the binary axis; we find the misalignment angle to be \(\approx 16^{\circ}\)-\(33^{\circ}\). However, we find that the jet inclination is approximately the same as the binary inclination, in contrast to the earlier suggestion of a misalignment lying predominantly along the line of sight, motivated by the X-ray polarization studies. Moreover, we find the projection on the sky of the orbital axis is clearly different from that of the radio jet. This finding has implications for the production of the optical polarization. Finally, the presence of the misalignment in Cyg X-1 disagrees at face value with the evolutionary arguments. It implies that either Cyg X-1 is not a member of the Cyg OB3 association or that the kick it received during the BH formation was higher than previously estimated.

## Acknowledgements

We thank Richard O'Shaughnessy for a consultation, Rob Fender for permission to use fig. 3 of Stirling et al. (2001) in our Figure 4, and James Miller-Jones and Arash Bahramian for advice with using their astrometry script. We also thank the referees for valuable comments. We acknowledge the staff who operate and run the AMI-LA telescope at the Mullard Radio Astronomy Observatory, Lord's Bridge, Cambridge. AMI is supported by the European Research Council under grant ERC-2012-StG-307215 LODESTONE. AAZ and MS acknowledge support from the Polish National Science Center under the grant 2019/35/B/ST9/03944 and from the University of Lodz IDUB grant, decision No. 59/2021, respectively. Nordita is supported in part by NordForsk. Our work benefited from discussions during Team Meetings in the International Space Science Institute (Bern).
2305.08549
Elastic collision rates of spin-polarized fermions in two dimensions
We study the $p$-wave elastic collision rates in a two-dimensional spin-polarized ultracold Fermi gas in the presence of a $p$-wave Feshbach resonance. We derive the analytical relation of the elastic collision rate coefficient in the close vicinity of resonance when the effective range is dominant. The elastic collision rate is enhanced by an exponential scaling of $e^{-q_{r}^{2} / q_{T}^{2}}$ towards the resonance. Here, $q_{r}$ is the resonant momentum and $q_T$ is the thermal momentum. An analogous expression is derived for the case of three dimensions successfully explains the thermalization rates measurement in the recent experiment~[Phys. Rev. A 88, 012710 (2013)]. In the zero-range limit where the effective range is negligible, the elastic collision rate coefficient is proportional to temperature $T^2$ and scattering area $A_{p}^2$. In this limit, energy transfer from high to low velocity through $p$-wave collision is approximately $\sqrt{2}$ times faster compared to the three-dimensional case. We also discuss the collisional stability in the presence of three-body losses in the background scattering limit. Our results suggest that $p$-wave evaporation may be performed with improved efficiency and may provide insight into the dynamics of the system in experiments.
Muhammad Awais Altaf, Takashi Mukaiyama, Muhammad Waseem
2023-05-15T11:20:04Z
http://arxiv.org/abs/2305.08549v2
# Elastic collision rates of spin-polarized fermions in two dimensions ###### Abstract We study the \(p\)-wave elastic collision rates in a two-dimensional spin-polarized ultracold Fermi gas in the presence of a \(p\)-wave Feshbach resonance. We derive the analytical relation of the elastic collision rate coefficient in the close vicinity of resonance when the effective range is dominant. The elastic collision rate is enhanced by an exponential scaling of \(e^{-q_{r}^{2}/q_{r}^{2}}\) towards the resonance. Here, \(q_{r}\) is the resonant momentum and \(q_{T}\) is the thermal momentum. An analogous expression is derived for the case of three dimensions successfully explains the thermalization rates measurement in the recent experiment [Phys. Rev. A 88, 012710 (2013)]. In the zero-range limit where the effective range is negligible, the elastic collision rate coefficient is proportional to temperature \(T^{2}\) and scattering area \(A_{p}^{2}\). In this limit, energy transfer from high to low velocity through \(p\)-wave collision is approximately \(\sqrt{2}\) times faster compared to the three-dimensional case. We also discuss the collisional stability in the presence of three-body losses in the background scattering limit. Our results suggest that \(p\)-wave evaporation may be performed with improved efficiency and may provide insight into the dynamics of the system in experiments. ## I Introduction Placing spin-polarized fermions into the lowest identical hyperfine ground state forbids the \(s\)-wave scattering and leaves \(p\)-wave scattering as a dominant scattering channel. The \(p\)-wave interactions are very weak at low temperatures in single-component ultracold Fermi gases [1]. However, interactions between two identical fermions can be enhanced using the \(p\)-wave Feshbach resonances and have been observed in some experiments [2; 3; 4; 5; 6]. In the past, \(p\)-wave Feshbach resonances have been utilized for Feshbach molecules creation [7; 8; 9; 10], dimer association spectroscopy [11], non-resonant light control of the \(p\)-wave Feshbach resonances [12], Fermi liquid properties [13], normal-state properties [14], few-body physics [15; 16; 17; 18; 19; 20; 21], out-of-equilibrium thermodynamics [22; 23; 24], ground state energy and properties [25; 26] and elastic unitary \(p\)-wave interactions [27]. Single-component Fermi gases are of fundamental interest such as Kohn-Luttinger instabilities at weak interactions [28], unconventional superfluidity [29; 30; 31], shifts of atomic clock frequency [32], rich quantum phase diagrams [33]. Such fundamental interests lead to great efforts in theoretical and experimental studies to understand elastic [1; 34; 35] and inelastic scattering processes close to \(p\)-wave Feshbach resonances [15; 16; 17; 18; 19; 20; 21]. Elastic collisions are required to reach quantum degeneracy via evaporation. For single-component Fermi gases, elastic collisions and thermalization have been measured near \(p\)-wave Feshbach resonances [1; 34]. Recently, thermalization and evaporative cooling by background \(p\)-wave collision have been observed for the three-dimensional case with modest efficiency [35]. The \(p\)-wave Feshbach resonances not only enhance the binary elastic collision but also the inelastic collisions [15]. Earlier experimental studies showed that spin-polarized fermions close to \(p\)-wave resonances are unstable, with a much short lifetime, due to inelastic collisions[37; 8]. 
This shorter lifetime has led to theoretical studies to explore the reduction of inelastic collision losses in reduced dimensions [38; 31; 39]. On the fundamental side, fermions confined in two dimensions exhibit topological superfluid phases [40; 41; 42] and few-body bound states [43]. However, understanding the elastic collision rates of spin-polarized fermions in two dimensions is also important to explore the possibility of evaporation [35], non-equilibrium dynamics of system [44; 22], hydrodynamics [45] and quantum transport properties [46; 47]. In this paper, we focus on the \(p\)-wave elastic collision rates in a two-dimensional spin-polarized ultracold Fermi gas in the presence of a \(p\)-wave Feshbach resonance. We consider two extreme regimes of \(p\)-wave resonance. One is the non-zero-range limit in the close vicinity of resonance where the effective range term is dominant. We derive the analytical expression of elastic collision rates in this regime, which is in agreement with direct numerical simulations. Our analytical expression shows that elastic collision rates are enhanced exponentially towards the resonance. We have also derived an analogous analytical expression for a three-dimensional case that explains the experimental results of thermalization rates measured by Nakasuji _et al._, [34]. The other regime is the zero-range limit where the effective range is negligible and the scattering area \(A_{p}\) is near the background \(p\)-wave collision in the ultracold regime. In this zero-range limit, the elastic collision rate coefficient is proportional to temperature \(T^{2}\) and scattering area \(A_{p}^{2}\). We also analyze the transfer of energy from high to low velocities through \(p\)-wave collision near the background \(p\)-wave collision. Our analysis shows that the transfer of energy is \(\sqrt{2}\) times faster than the three-dimensional case. This suggests that \(p\)-wave evaporative cooling in two dimensions can be performed better than recently achieved modest efficiency in three dimensions [35]. We also discuss the collisional stability in the presence of three-body losses. We show that in the region of background \(p\)-wave collision, the ratio of good-to-bad collision rates can be improved as compared to the three-dimensional case. This article is organized as follows. Section II describes our theoretical analysis of elastic collision rates. In Sec. III, we describe energy transfer analysis in the limit of background collision. In Sec. IV, we explore the elastic-to-inelastic collision ratio, and finally, we concluded. ## II Elastic collision rates Let's first consider two identical fermions colliding in quasi-two dimensions in the presence of \(p\)-wave Feshbach resonance. the Quasi-two dimensions geometry can be obtained by tight harmonic confinement in the axial direction (\(z\)) with frequency \(\omega_{0}\)[7; 48]. In the axial direction, the extension of the wave function is given by harmonic oscillator length \(l_{0}=\sqrt{\hbar/m\omega_{0}}\). The interatomic separation \(\rho\) in the plane \(x,y\) greatly exceeds the harmonic oscillator length \(l_{0}\). Then \(p\)-wave relative motion is described by the wavefunction [39; 49], \[\psi_{2D}(\rho)=\varphi_{2D}(\rho)e^{i\theta}\frac{1}{\left(2\pi l_{0}^{2} \right)^{1/4}}\exp\left(\frac{-z^{2}}{4l_{0}^{2}}\right);\ \ \rho\gg l_{0}, \tag{1}\] where \(\theta\) is the scattering angle, and \(z\) is the inter-particle separation in the axial direction. 
Considering the ultracold limit with respect to the axial motion we assume that confinement length \(l_{0}\) is much larger than the characteristic radius of interaction \(R_{e}\)[49]. Then two-dimension radial wave function for \(p\)-wave becomes [49; 39] \[\varphi_{2D}(\rho)=i\big{\{}J_{1}(q\rho)-\frac{i}{4}f_{2D}(q)H_{1}(q\rho) \big{\}}, \tag{2}\] where \(J_{1}(q\rho)\) and \(H_{1}(q\rho)\) are the Bessel and Hankel functions respectively. Here, \(q\) is the relative wave vector in two dimensions. The two dimension scattering amplitude \(f_{2D}(q)\) is related to scattering phase shift \(\delta(q)\) as \(f_{2D}(q)=-4/(\cot\delta(q)-i)\). For low collision energy \(E=\hbar^{2}q^{2}/m\), the effective range expansion can be written as \(q^{2}\cot\delta(q)=-1/A_{p}-B_{p}q^{2}\)[50; 51]. Then the scattering amplitude of \(p\)-wave is given by [52; 50], \[f_{2D}(q)=\frac{4q^{2}}{1/A_{p}+B_{p}q^{2}+iq^{2}}. \tag{3}\] Here, \(A_{p}=\frac{3\sqrt{2\pi}l_{0}^{2}}{4}\left(\frac{l_{0}^{3}}{V_{p}}+\frac{k_{ \perp}l_{0}}{2}-0.065553\right)^{-1}\) is known as scattering area, which is a controllable interaction parameter. The scattering area depends on three-dimensional scattering volume \(V_{p}\), effective range \(k_{e}\), and harmonic oscillator length \(l_{0}\). The scattering volume \(V_{p}\) depends on external magnetic field and parameterized as \(V_{p}=V_{\rm bg}(1+\Delta B/(B-B_{0}))\). Here, \(V_{\rm bg}\) and \(\Delta B\) are the background scattering volume and resonance width in the magnetic field, respectively [53]. Here, \(B_{0}\) is the resonance position in three dimensions. The positive dimensionless effective range for quasi-two dimension is \(B_{p}=\frac{4}{3\sqrt{2\pi}}(l_{0}k_{e}-0.14641)-\frac{2}{\pi}{\rm ln}\left(l_ {0}q\right)\). The \(p\)-wave \(S\)-matrix element is given by \(S(q)=\exp(2i\delta(q))\). The \(S\)-matrix can be extracted from the scattering amplitude as \(f_{2D}=2i[S(q)-1]\), which results the elastic cross-section \[\sigma_{el}(q)=\frac{16q^{3}}{\left(1/A_{p}+B_{p}q^{2}\right)^{2}+q^{4}}. \tag{4}\] The elastic rate coefficient \(Q_{el}=v\sigma_{el}(q)\) becomes \[Q_{el}=\frac{32\hbar}{m}\frac{q^{4}}{\left(1/A_{p}+B_{p}q^{2}\right)^{2}+q^{4}}, \tag{5}\] where \(v=2\hbar q/m\) is the relative velocity. The two-dimensional confinement-influenced resonance occurs at \(1/A_{p}=0\) when the scattering cross-section (scattering amplitude) hits maximum. This condition occurs at \(B^{\prime}=B_{0}-\frac{V_{\rm bg}\Delta B}{l_{0}^{3}}(k_{e}/2+0.065553/l_{0})\). The shift in resonance \(B^{\prime}>B_{0}\) increases with the increase of confinement (axial frequency) and has been observed in the recent experiment [7]. Thermally averaged elastic collision rate is given by \(\gamma_{el}=\langle n\rangle\langle Q_{el}\rangle\), where \(\langle n\rangle=\left(\frac{mk_{B}}{8\pi\hbar^{2}}\right)\frac{T_{F}^{2}}{T}\) is the two-dimensional mean density for harmonically trapped gas. Here, the Fermi temperature \(T_{F}\) is related to the number of atoms \(N\) and mean trapping frequency \(\omega=\sqrt{\omega_{x}\omega_{y}}\) of the trap as \(k_{B}T_{F}=(2N)^{1/2}\hbar\omega\). Whereas, \(\langle Q_{el}\rangle\) is the averaged elastic collision rate coefficient over thermal Boltzmann distribution \[\langle Q_{el}\rangle =\frac{2}{q_{T}^{2}}\int_{0}^{\infty}qQ_{el}e^{-q^{2}/q_{T}^{2}}dq\] \[=\frac{64\hbar}{mq_{T}^{2}}\int_{0}^{\infty}\frac{d\left(q^{2}/2 \right)q^{4}e^{-q^{2}/q_{T}^{2}}}{\left(1/A_{p}+B_{p}q^{2}\right)^{2}+q^{4}}. 
\tag{6}\] We numerically calculate the elastic collision rate \(\gamma_{el}\) using Eq. (6) for \({}^{6}\)Li atoms at two different temperatures assuming the condition of \(T=T_{F}\). In Fig. 1(a) and (b), solid curves show elastic collision rates at \(T_{F}=3\)\(\mu\)K (corresponding density \(\langle n\rangle=1.5\times 10^{12}/m^{2}\), \(\omega_{0}=2\pi\times 168\) kHz) and at \(T_{F}=4\)\(\mu\)K (corresponding density \(\langle n\rangle=2.0\times 10^{12}/m^{2}\), \(\omega_{0}=2\pi\times 194\) kHz). The parameters chosen for \({}^{6}\)Li are \(\Delta B=40\) G, \(V_{\rm bg}=(-41a_{0})^{3}\) and \(k_{e}=0.058/a_{0}\) [54; 55; 34], where \(a_{0}\) is the Bohr radius. Now we consider the special case of the near-resonance regime on the negative side of the scattering area, \(A_{p}<0\). In this case, the two-body system has a resonance pole above the threshold representing the bound state \(E_{b}=\hbar^{2}q_{r}^{2}/(2m)\). Here, the resonant momentum is \(q_{r}=1/\sqrt{B_{p}|A_{p}|}\), which denotes the position of the maximum of the scattering amplitude on the real \(q\) axis and varies with the external magnetic field \(B\). In the near-resonance regime, the most significant contribution arises from \(q_{r}\) under the condition \(q_{T}/q_{r}>1\) or \(|A_{p}|q_{r}^{2}\ll 1\) [39]. Here, \(q_{T}=\sqrt{mk_{B}T/\hbar^{2}}\) is the thermal momentum. Near the resonance on its negative side, the largest contribution to the thermally averaged integral comes from the momenta in a narrow vicinity of \(q_{r}=1/\sqrt{B_{p}|A_{p}|}\). This allows us to put \(q\equiv q_{r}\) everywhere except for the first parenthesis in the denominator of Eq. 6, which results in \[\langle Q_{el}\rangle^{n}=\frac{32\hbar q_{r}^{4}e^{-q_{r}^{2}/q_{T}^{2}}}{mB_{p}q_{T}^{2}}\int_{-1/|A_{p}|}^{\infty}\frac{d\left(1/A_{p}+B_{p}q^{2}\right)}{\left(1/A_{p}+B_{p}q^{2}\right)^{2}+q_{r}^{4}}. \tag{7}\] We can estimate the width of the momentum interval \(q_{r}-\delta q\leq q\leq q_{r}+\delta q\) that provides the dominant contribution to the integral. For \(q=q_{r}+\delta q\) we have \[B_{p}q^{2}-1/|A_{p}|\sim q_{r}^{2}\implies B_{p}\,\delta q\,q_{r}\sim q_{r}^{2}.\] By dividing the above relation by \(q_{r}^{2}\) and taking into account that \(\delta q\ll q_{r}\) (for the narrow interval), we get \(\delta q/q_{r}\sim|A_{p}|q_{r}^{2}\ll 1\). Defining \(x=\left(1/A_{p}+B_{p}q^{2}\right)/\alpha\) and \(\alpha=q_{r}^{2}\), we obtain \[\langle Q_{el}\rangle^{n}=\frac{32\hbar q_{r}^{4}}{mB_{p}q_{T}^{2}}e^{-q_{r}^{2}/q_{T}^{2}}\frac{1}{\alpha}\int_{-1/(|A_{p}|\alpha)}^{\infty}\frac{dx}{x^{2}+1}. \tag{8}\] Using the property of integrals \[\int_{-1/(|A_{p}|\alpha)}^{\infty}\frac{dx}{x^{2}+1}=\frac{\pi}{2}+\tan^{-1}(1/(|A_{p}|\alpha))\approx\pi,\] where we assume that near resonance \(|A_{p}|\alpha\ll 1\), such that one can write \(\tan^{-1}(1/(|A_{p}|\alpha))\approx\pi/2\). This results in the approximate expression of the elastic collision coefficient in the near-resonance regime, which is given by \[\langle Q_{el}\rangle^{n}=\frac{32\pi\hbar}{mB_{p}}\bigg{(}\frac{q_{r}}{q_{T}}\bigg{)}^{2}e^{-q_{r}^{2}/q_{T}^{2}}. \tag{9}\] Since \(\langle Q_{el}\rangle^{n}\propto q_{r}^{2}\propto l_{0}/V_{p}\), tight harmonic confinement has a nearly linear influence on \(\langle Q_{el}\rangle^{n}\).
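As a consistency check of Eq. (9), one can compare it with a direct numerical thermal average of Eq. (6). The short sketch below is only illustrative: it works in units with \(\hbar=m=1\) and uses arbitrary values of \(A_{p}\) and \(B_{p}\) (not the \({}^{6}\)Li parameters of Fig. 1) chosen so that \(A_{p}<0\), \(|A_{p}|q_{r}^{2}\ll 1\), and \(q_{r}<q_{T}\); for such values the two estimates agree closely.

```python
import numpy as np

hbar = m = 1.0   # work in reduced units

def Q_el(q, A_p, B_p):
    """Elastic collision rate coefficient, Eq. (5)."""
    return 32.0 * hbar / m * q**4 / ((1.0 / A_p + B_p * q**2) ** 2 + q**4)

def Q_el_thermal(A_p, B_p, q_T):
    """Thermal average over the 2D Boltzmann distribution, Eq. (6)."""
    # fine grid: the integrand has a narrow peak of relative width ~ |A_p| q_r^2 at q_r
    q = np.linspace(1e-4, 12.0 * q_T, 400000)
    integrand = (2.0 / q_T**2) * q * Q_el(q, A_p, B_p) * np.exp(-(q / q_T) ** 2)
    return np.trapz(integrand, q)

def Q_el_near(A_p, B_p, q_T):
    """Near-resonance approximation, Eq. (9), for A_p < 0."""
    q_r2 = 1.0 / (B_p * abs(A_p))
    return (32.0 * np.pi * hbar / (m * B_p)) * (q_r2 / q_T**2) * np.exp(-q_r2 / q_T**2)

A_p, B_p, q_T = -0.02, 100.0, 1.0   # illustrative values: q_r ~ 0.7 q_T, |A_p| q_r^2 = 0.01
print(Q_el_thermal(A_p, B_p, q_T), Q_el_near(A_p, B_p, q_T))
```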
The open circles in Fig. 1 show the estimated values of the elastic collision rate \(\gamma^{n}=\langle n\rangle\times\langle Q_{el}\rangle^{n}\), which agree well with the direct numerical simulations in the close vicinity of the resonance, where the elastic collision rates are enhanced by four orders of magnitude with the exponential scaling of Eq. 9. If \(q_{T}/q_{r}\) is much larger than unity, which can be achieved at high temperature and at very small magnetic field detuning, one can expand the exponent in Eq. (9) in a Taylor series, \(e^{-q_{r}^{2}/q_{T}^{2}}\approx 1\), which gives the peak value of the elastic collision rate indicated by the \(\star\) points in Fig. 1. This peak value is slightly overestimated. Combining with \(\langle n\rangle=q_{T}^{2}/(8\pi)\), it turns out that the peak becomes nearly independent of temperature; this condition is well satisfied at higher temperatures. In experiments, elastic collision rates can be extracted from the thermalization rate \(\Gamma_{th}=\gamma_{el}/\alpha\). Here, \(\alpha\approx 4.1\) is the average number of \(p\)-wave elastic collisions needed for the temperature difference between the two radial directions to relax towards thermalization [1]. Thermalization rates can be extracted directly using a cross-dimensional relaxation method, similar to the three-dimensional \(p\)-wave case [34; 35; 56]. In order to benchmark the approximate expression (9), we have also derived a similar expression for the three-dimensional case in Appendix A. We then compare it to the experimental thermalization-rate data from Nakasuji _et al._[34] and find good agreement (for details see Appendix A).

Figure 1: Numerical results of two-dimensional elastic collision rates \(\gamma_{el}\) versus magnetic field detuning \(B-B_{0}\) at (a) \(T_{F}=3\)\(\mu\)K and (b) \(T_{F}=4\)\(\mu\)K, for spin-polarized \({}^{6}\)Li atoms in the lowest hyperfine ground state, calculated from Eq. (6). Solid markers show off-resonant results from Eq. (10), while open circles mark the near-resonant values according to Eq. (9). These approximate results are in agreement with the full numerical results. The \(\star\) points indicate the peak values under the condition \(e^{-q_{r}^{2}/q_{T}^{2}}\approx 1\) in Eq. (9).

Next, we consider the weakly interacting regime, which is sufficiently far from the resonance (\(A_{p}\to 0\)). In this regime, the ratio \(q_{T}/q_{r}\) is very low, approximately 0.3 in the experiments [17; 57]. In this zero-range limit, when the effective range is negligible, the dominant contribution arises from \(1/A_{p}\) in the denominator of Eq. (5). After averaging over the thermal Boltzmann distribution (Eq. 6), the estimated elastic rate coefficient becomes \[\langle Q_{el}\rangle^{f}=\frac{64mA_{p}^{2}}{\hbar^{3}}(k_{B}T)^{2}, \tag{10}\] which shows a quadratic temperature dependence. The filled circles in Fig. 1 show the estimated far-resonance values of the elastic collision rate \(\gamma_{el}^{f}=\langle n\rangle\times\langle Q_{el}\rangle^{f}\), which agree with the numerical results with a very slight underestimation. Eqs. 9 and 10 indicate that the elastic collision rates exhibit different temperature dependencies in the near- and far-resonance cases.
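To make the contrast between the two limits concrete, the short sketch below evaluates the closed-form coefficients of Eqs. (9) and (10) for a few temperatures. The parameter values (\(B_{p}\), \(A_{p}\), and the fixed \(q_{r}\)) are arbitrary illustrative choices, not fitted to the experiments discussed here; the point is only the exponential versus quadratic temperature scaling.

```python
import numpy as np

hbar = 1.054571817e-34              # J s
kB   = 1.380649e-23                 # J / K
m    = 6.0151228 * 1.66053906660e-27   # 6Li mass, kg

def Q_near(T, q_r, B_p=1.0):
    """Eq. (9): near-resonance coefficient, exponentially sensitive to (q_r/q_T)^2."""
    q_T = np.sqrt(m * kB * T) / hbar
    return 32 * np.pi * hbar / (m * B_p) * (q_r / q_T)**2 * np.exp(-(q_r / q_T)**2)

def Q_far(T, A_p):
    """Eq. (10): zero-range limit, quadratic in temperature."""
    return 64 * m * A_p**2 / hbar**3 * (kB * T)**2

q_r = np.sqrt(m * kB * 1e-6) / hbar     # fix q_r at the 1 uK thermal momentum (illustrative)
A_p = 1e-16                             # illustrative scattering area, m^2 (assumed)

for T in (1e-6, 2e-6, 4e-6):
    print(f"T = {T*1e6:.0f} uK:  Q_near ~ {Q_near(T, q_r):.2e}   "
          f"Q_far ~ {Q_far(T, A_p):.2e}   (m^2/s)")
```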
## III Energy transfer from high to low velocity

In the previous section, we observed that the elastic collision rates are significantly enhanced, by up to four orders of magnitude, on approaching zero magnetic field detuning, and that they are well described by Eq. 9. However, for spin-polarized fermions the three-body losses are also significantly enhanced at the same time, with the same \(e^{-q_{r}^{2}/q_{T}^{2}}\) scaling towards zero magnetic field detuning, as observed in a recent experiment in two dimensions [57]; this significantly reduces the lifetime of the gas. On the other hand, \(p\)-wave interactions are very weak at low temperatures and at large magnetic field detuning (i.e., near the background \(p\)-wave collision regime in the ultracold limit). Therefore, in this section, we theoretically explore to what extent \(p\)-wave elastic collisions can drive the evaporation process, by increasing the density or temperature of the gas, before three-body recombination becomes too strong.

In evaporative cooling, after the high-velocity atoms are removed from the tail of the Maxwell-Boltzmann distribution, atoms transfer energy from high to low velocities in order to recover the Maxwell-Boltzmann distribution. Estimating the number of \(p\)-wave collisions required to recover the Maxwell-Boltzmann distribution is therefore an important benchmark. The collision rate for a partial wave \(l\) involving an atom with speed \(v\) is \[\gamma_{l,|v_{1}=v|}=\langle n\rangle\int\sigma_{p}|v-v_{2}|f_{v_{2}}d^{2}v_{2}, \tag{11}\] where \(\sigma_{p}\) is the collision cross-section, \(f_{v_{2}}\) is the two-dimensional Maxwell-Boltzmann velocity distribution, and \(|v-v_{2}|=\left(v^{2}+v_{2}^{2}-2vv_{2}\cos\theta\right)^{1/2}\). Introducing \(\tilde{u}^{2}=\frac{1}{2}\beta mv^{2}\) and \(u^{2}=\frac{1}{2}\beta mv_{2}^{2}\) with \(\beta=1/(k_{B}T)\), Eq. 11 becomes \[\gamma_{p,|v_{1}=v|}=\frac{\langle n\rangle}{\pi}\left(\frac{2}{\beta m}\right)^{1/2}\int\sigma(\tilde{u}-u)|\tilde{u}-u|e^{-u^{2}}u\,du\,d\theta. \tag{12}\] Here, the elastic cross-section \(\sigma_{p}\) is \(\sigma(q)\equiv\sigma(|\tilde{u}-u|)\), and the wave vector \(q\) in terms of velocity is \(q=\sqrt{m/2\beta\hbar^{2}}|\tilde{u}-u|\). In the region of background \(p\)-wave collisions, the elastic scattering cross-section from Eq. 4 becomes \(\sigma(\tilde{u}-u)=16A_{p}^{2}(\sqrt{m/2\beta\hbar^{2}}|\tilde{u}-u|)^{3}\). We define the relevant dimensionless quantity \(\gamma_{p,|v_{1}=v|}/\gamma^{f}\) to estimate the elastic collision rate (\(p\)-wave evaporation rate) for an atom with velocity \(v\), which is given by \[\frac{\gamma_{p,|v_{1}=v|}}{\gamma^{f}}=\frac{\int_{0}^{\infty}\int_{0}^{2\pi}\left(\tilde{u}^{2}+u^{2}-2\tilde{u}u\cos\theta\right)^{2}d\theta\,u\,e^{-u^{2}}\,du}{8\pi}. \tag{13}\] The mean speed is \(\sqrt{\pi k_{B}T/2m}\) in two dimensions and \(\sqrt{8k_{B}T/\pi m}\) in three dimensions. Therefore, we plot the normalized \(p\)-wave evaporation rates of Eq. 13 as a function of the common parameter, temperature, rather than the mean velocity.

Figure 2: (a) The \(p\)-wave evaporation rates \(\gamma_{p,v}/\gamma^{f}\) versus temperature involving an atom with speed \(v\), normalized by the thermally averaged collision rate \(\gamma^{f}\). The blue dashed curve shows the \(p\)-wave collision rates in two dimensions. The solid curve represents the \(p\)-wave collision rates while the black dotted curve represents the \(s\)-wave collision rates in three dimensions. (b) The ratio of the two-dimensional \(p\)-wave evaporation rates \(\gamma_{p,v}/\gamma^{f}\) to the three-dimensional evaporation rates \(\Gamma_{p,v}/\Gamma^{f}\) for an atom at speed \(v\) as a function of temperature. Inset: The fraction of collisions up to a kinetic energy of \(\eta k_{B}T\), where \(\eta\) is the truncation parameter.
In Fig. 2(a), the dashed curve shows the two-dimensional \(p\)-wave normalized evaporation rates across the temperature range. The solid curve shows the three-dimensional \(p\)-wave normalized evaporation rate \(\Gamma_{p,v}/\Gamma^{f}\) (calculated in Appendix A), and the dotted curve indicates the three-dimensional \(s\)-wave normalized evaporation rates as a reference [35]. On average, in the zero-temperature limit (or low-velocity limit), the number of \(p\)-wave collisions increases in two dimensions as compared to three dimensions. Therefore, after cutting the tail of the velocity distribution in evaporative cooling, \(p\)-wave collisions in two dimensions can take less time to populate the low velocities and recover the thermal distribution faster than in the three-dimensional \(p\)-wave case. To examine this further, we plot the ratio of the two-dimensional collision rates (\(\gamma_{p,v}/\gamma^{f}\)) to the three-dimensional collision rates (\(\Gamma_{p,v}/\Gamma^{f}\)) in Fig. 2(b). It is evident that, for the same averaged collision rates, the transfer of energy to the temperature range near \(T=0\) (or the velocity group \(v=0\)) is \(\sqrt{2}\) times faster for \(p\)-wave collisions in two dimensions than for the \(p\)-wave case in three dimensions.

The decrease in temperature from an initial temperature depends on the truncation of the energy distribution at \(\eta k_{B}T\), followed by thermal relaxation in an infinitely deep potential [58]. Here, \(\eta\) is the truncation parameter. We calculate the fraction of collision rates by thermally averaging \(\gamma_{l,|v_{1}=v|}\) over all possible speeds, \[\gamma^{\prime}_{p,|v_{1}=v|}=\int_{0}^{v^{\prime}}\gamma_{p,|v_{1}=v|}f_{v}dv. \tag{14}\] Here, the upper limit is \(v^{\prime}=\sqrt{\eta(2k_{B}T/m)}\) and \(f_{v}\) is the Maxwell-Boltzmann speed distribution, parameterized in three dimensions as \(f_{v}=4\pi(\beta m/2\pi)^{3/2}v^{2}e^{-\beta mv^{2}/2}\) and in two dimensions as \(f_{v}=(\beta m)ve^{-\beta mv^{2}/2}\). In the inset of Fig. 2(b), the dashed curve shows the fraction of \(p\)-wave collision rates as a function of \(\eta\) for the two-dimensional case. The solid curve represents the three-dimensional \(p\)-wave collision-rate fraction, while the black dotted curve represents the three-dimensional \(s\)-wave collision-rate fraction [35]. Typically, for \(\eta<8\), truncation effects become important [59]. For small \(\eta\), less than 4, the fraction of atoms that can escape over the threshold of the potential is larger in two dimensions than in the three-dimensional \(p\)-wave case; in other words, it approaches the \(s\)-wave case more closely. However, the temperature reduction per escaped atom is then usually small [58]. Therefore, the interplay between the truncation parameter and the dynamics of evaporation suggests that \(\eta\) should be kept large and constant [60]. In the range \(6\leq\eta\leq 8\), the two-dimensional \(p\)-wave collision-rate fraction gets much closer to the three-dimensional \(s\)-wave collision rates than the three-dimensional \(p\)-wave case does. In this range, residual evaporation balances the heat generated by inelastic collision losses [61].
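As a sanity check of Eqs. (13)-(14), the following sketch evaluates the normalized two-dimensional rate and the fraction of collisions below \(\eta k_{B}T\) in reduced units, where the truncation \(v^{\prime}\) maps to \(u^{\prime}=\sqrt{\eta}\) and the 2D speed distribution becomes \(2ue^{-u^{2}}\). The reduced-unit mapping, grid resolution, and integration cutoffs are our assumptions for illustration.

```python
import numpy as np
from scipy.integrate import dblquad

def rate_2d(u_tilde):
    """Eq. (13): 2D p-wave collision rate for an atom with reduced speed
    u_tilde = sqrt(beta*m/2)*v, normalized by the thermal average gamma^f."""
    inner, _ = dblquad(
        lambda theta, u: (u_tilde**2 + u**2 - 2*u_tilde*u*np.cos(theta))**2
                         * u * np.exp(-u**2),
        0.0, 8.0,            # outer u-integral, truncated where exp(-u^2) is negligible
        0.0, 2*np.pi)        # inner theta-integral
    return inner / (8*np.pi)

# Fraction of p-wave collisions with kinetic energy below eta*kB*T, in the spirit of
# Eq. (14): in reduced units the 2D speed distribution is 2*u*exp(-u^2) and the
# truncation v' = sqrt(2*eta*kB*T/m) corresponds to u' = sqrt(eta).
u_grid = np.linspace(1e-3, 4.0, 160)
du     = u_grid[1] - u_grid[0]
r_grid = np.array([rate_2d(u) for u in u_grid])
weight = 2*u_grid*np.exp(-u_grid**2)
total  = np.sum(r_grid*weight)*du

print(f"normalized rate at v = 0          : {rate_2d(0.0):.3f}")
print(f"normalized rate at the 2D mean speed: {rate_2d(np.sqrt(np.pi)/2):.3f}")
for eta in (2, 4, 6, 8):
    mask = u_grid <= np.sqrt(eta)
    frac = np.sum((r_grid*weight)[mask])*du / total
    print(f"eta = {eta}: fraction of collisions below eta*kB*T ~ {frac:.2f}")
```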
## IV Collisional stability

In the previous section, we showed that the evaporation efficiency in two dimensions can be better than in three dimensions in the weakly interacting regime. Therefore, we next analyze the collisional stability in this regime.

There are no inelastic two-body collisions for identical fermions in the lowest hyperfine ground state due to angular momentum conservation, in particular for the \({}^{6}\)Li atom [2; 62]. For spin-polarized fermions in the lowest hyperfine ground state, the dominant inelastic losses are therefore due only to three-body inelastic collisions. We thus focus on the ratio of two-body elastic (good) to three-body inelastic (bad) collisions in spin-polarized fermions confined in two dimensions. The good-to-bad collision ratio \(R\) is generally defined as \[R=\frac{\gamma_{el}}{\gamma_{in}+(1/\tau)}=\frac{\langle n\rangle\times\langle Q_{el}\rangle}{\langle n^{2}\rangle\times Q_{3}+(1/\tau)} \tag{15}\] where \(\tau\) is the vacuum-limited lifetime of the atoms in the trap and \(Q_{3}\) is the three-body inelastic loss coefficient. Here, \(\langle n^{2}\rangle=\frac{1}{48}\left(\frac{mk_{B}}{\pi\hbar^{2}}\right)^{2}\frac{T_{F}^{4}}{T^{2}}\) is the mean squared density for a two-dimensional harmonically trapped gas.

Consider first the extreme case where the scattering area is very small, close to the background value, and assume that the density exceeds \(1/\lambda^{2}\) (\(n\approx 10^{12}/m^{2}\)), where \(\lambda\) is the wavelength of the resonant light. In this zero-range limit, the effective range \(B_{p}\) can be assumed negligible [17; 18; 35]. The elastic cross-section scales as \(\sigma_{p}\propto A_{p}^{2}q^{3}\) (as evident from Eq. 10). At temperature \(T=T_{F}\), the elastic collision rate \(\gamma_{el}\) is proportional to \(A_{p}^{2}q^{6}\). In the zero-range approximation, for a sufficiently small scattering area \(A_{p}\), the three-body inelastic loss coefficient is parameterized through the scaling relation [57] \[Q_{3}=C_{0}\frac{\hbar}{m}q_{T}^{4}A_{p}^{3}, \tag{16}\] with a dimensionless non-universal scaling constant \(C_{0}\). Hence, the three-body loss rate \(\gamma_{in}=\langle n^{2}\rangle\times Q_{3}\) becomes proportional to \(A_{p}^{3}q^{8}\). This implies that the ratio of good to bad collisions scales as \(1/(C_{0}A_{p}q^{2})\). Near zero field, when the interaction is close to the background scattering, the collision ratio mainly depends on the achievable density (\(q\) or \(T_{F}\)) and the constant \(C_{0}\). Therefore, a favorable regime is at low densities with weak \(p\)-wave interactions.

Figure 3: Ratio of two-body elastic to three-body inelastic collisions for two-dimensional harmonically trapped \({}^{6}\)Li atoms near zero field or far away from the \(p\)-wave Feshbach resonance with \(T=T_{F}\), assuming a vacuum lifetime of 60 seconds. The favorable region corresponds to lower values of the dimensionless three-body loss constant \(C_{0}\) at intermediate densities.

The scaling of Eq. 16 has been measured in experiment with \(C_{0}=2\times 10^{4}\) around 0.5 G detuning [57]. For spin-polarized fermions in three dimensions, the equivalent scaling law is \(K_{3}=C(\hbar/m)k^{4}V_{p}^{8/3}\)[18] and has been observed in recent experiments [17; 35]. Here, \(k\) is the thermal wave vector and \(C\) is the equivalent dimensionless constant in three dimensions. The value \(C=2\times 10^{6}\) was reported around 0.5 G detuning in Ref. [17], whereas Ref. [35] reported a value two orders of magnitude smaller, \(C=3\times 10^{4}\), around 1 G detuning. These different \(C\) values indicate the lack of universal character in the recombination of three ultracold fermions [18; 63], which means that \(C\) depends on the details of the interatomic potential, unlike in Bose gases [64].
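The good-to-bad ratio of Eq. (15), with \(\langle Q_{el}\rangle^{f}\) from Eq. (10) and \(Q_{3}\) from Eq. (16), can be evaluated with a few lines of Python. In the sketch below, the background scattering area, the confinement frequency used to fix \(l_{0}\), and the chosen \(T_{F}\) grid are illustrative assumptions rather than the exact inputs behind Fig. 3.

```python
import numpy as np

hbar = 1.054571817e-34; kB = 1.380649e-23
a0   = 5.29177210903e-11
m    = 6.0151228 * 1.66053906660e-27       # 6Li mass, kg
tau  = 60.0                                # vacuum-limited lifetime, s

# Background scattering area near zero field (V_p ~ V_bg); the confinement
# frequency used for l0 below is an assumed illustrative value.
V_bg = (-41 * a0)**3
k_e  = 0.058 / a0
l0   = np.sqrt(hbar / (m * 2 * np.pi * 168e3))
A_p  = (3 * np.sqrt(2 * np.pi) * l0**2 / 4) / (l0**3 / V_bg + k_e * l0 / 2 - 0.065553)

def good_to_bad(T_F, C0):
    """Eq. (15) with <Q_el>^f from Eq. (10) and Q_3 from Eq. (16), evaluated at T = T_F."""
    T   = T_F
    qT  = np.sqrt(m * kB * T) / hbar
    n1  = (m * kB / (8 * np.pi * hbar**2)) * T_F**2 / T                  # mean density
    n2  = (1 / 48) * (m * kB / (np.pi * hbar**2))**2 * T_F**4 / T**2     # mean squared density
    Qel = 64 * m * A_p**2 / hbar**3 * (kB * T)**2
    Q3  = C0 * hbar / m * qT**4 * abs(A_p)**3
    return n1 * Qel / (n2 * Q3 + 1.0 / tau)

for C0 in (2e3, 2e4, 2e5):
    line = "  ".join(f"{TF*1e6:3.0f} uK: {good_to_bad(TF, C0):7.1f}"
                     for TF in (50e-6, 100e-6, 200e-6, 400e-6))
    print(f"C0 = {C0:.0e} ->  {line}")
```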
Given this non-universal character, the value of \(C_{0}\) or \(C\) might vary with the scattering strength, and the value of \(C_{0}\) is of crucial importance for a quantitative evaluation of the three-body losses; however, its value is not well defined. Considering the non-universal character of \(C_{0}\), we therefore calculate the good-to-bad collision ratio near the background \(p\)-wave collision regime as a function of temperature for three different values of \(C_{0}\); the results are shown in Fig. 3. Here, we assumed a vacuum-limited lifetime \(\tau\approx 60\) seconds. The good-to-bad collision ratio leads to a favorable regime at intermediate densities for lower values of \(C_{0}\). There is a favorable range around \(T_{F}\approx 100-200\)\(\mu\)K (corresponding density \(\langle n\rangle\approx 0.5-1.0\times 10^{14}/m^{2}\)) where the ratio of good to bad collisions reaches a maximum of 100 to 300.

Next, it is natural to focus on the close-resonance regime, where the scattering area is maximal and the condition \(q_{T}/q_{r}\geq 1\) is well satisfied. In this regime, the three-body loss coefficient \(Q_{3}\) shows unitary behavior: it depends only on temperature and not on the scattering area. Considering \(q_{T}\) as the only relevant scale in the unitary limit, the expected scaling from dimensional analysis is \(Q_{3}\propto(\hbar/m)q_{T}^{-2}\). After taking the thermal average over the Boltzmann distribution, we get the maximum upper limit of the three-body loss constant [65] \[Q_{3}^{p}=\zeta\frac{3\pi\hbar^{3}}{m^{2}(k_{B}T)}=\zeta\frac{3\pi\hbar}{mq_{T}^{2}}. \tag{17}\] Here, we introduce the species-dependent non-universal dimensionless constant \(\zeta\leq 1\), similar to the three-dimensional case [16; 66; 61]. From Eq. 9, the peak value of the elastic collision rate coefficient is proportional to \(1/(A_{p}q_{T}^{2})\). Assuming a Fermi gas at temperature \(T=T_{F}\), \(\langle n\rangle/\langle n^{2}\rangle=6\pi/q_{T}^{2}\). This implies that the ratio of elastic to inelastic collision rates (in units of Hz) scales approximately as \(1/(\zeta A_{p}q_{T}^{2})\). Since \(A_{p}\) is quite large at resonance, this results in a low good-to-bad collision ratio. However, tailoring the achievable density (or \(T_{F}\)) to lower values improves the good-to-bad collision ratio, as is possible in two dimensions. This suggests that thermalization measurements similar to the three-dimensional case can be performed with better resolution in experiments, and that some new, interesting few-body physics may become accessible.

## V Conclusion

In summary, we studied the elastic collision rates in two regimes of \(p\)-wave interactions for two-dimensional spin-polarized fermions in the lowest hyperfine ground state of \({}^{6}\)Li atoms. In the non-zero-range limit, where the effective range term is dominant, the elastic collision rates are enhanced exponentially towards the resonance with the scaling \(e^{-q_{r}^{2}/q_{T}^{2}}\), in agreement with the direct numerical results. The analogous expression derived for the three-dimensional case successfully explains the experimental thermalization rates measured by Nakasuji et al. [34]. In the zero-range limit, when the effective range is negligible and the scattering area \(A_{p}\) is near the background \(p\)-wave value in the ultracold regime, the elastic collision rate coefficient is proportional to \(T^{2}\) and to the scattering area squared, \(A_{p}^{2}\).
In this background limit, the transfer of energy from high to low velocities through \(p\)-wave collision is almost \(\sqrt{2}\) times faster than the three-dimensional case. This may also allow performing \(p\)-wave evaporative cooling better than modest efficiency using optimized two-dimensional trap and vacuum limited lifetime. ## Appendix A Elastic collision rates in three dimensions The scattering amplitude for \(p\)-wave interaction between two fermions with relative wave vector \(k\) in three dimension is given by \[f(k)=\frac{-k^{2}}{1/V_{p}+k_{e}k^{2}+ik^{3}}. \tag{18}\] Here, \(V_{p}\) is scattering volume and \(k_{e}>0\) is the effective range. The \(p\)-wave \(S\)-matrix element is given by \(\exp(2i\delta(k))\). The elastic rate constant is \(K_{el}=v\sigma(k)\), where \(\sigma(k)=3\pi\left|1-S(k)\right|^{2}/k^{2}\) is the \(p\)-wave elastic scattering cross section, and \(v=2\hbar k/m\) is the relative velocity. As a result, the elastic rate coefficient becomes \[K_{el}=\frac{24\pi\hbar}{m}\frac{k^{5}}{(1/V_{p}+k_{e}k^{2})^{2}+k^{6}}. \tag{3}\] Very close to the resonance when \(V_{p}\rightarrow\infty\), the largest contribution comes from the momenta of resonant bound state \(k_{r}=1/\sqrt{k_{e}|V_{p}|}\). In the close vicinity of resonant regime, \(k_{T}\gg k_{r}\), where \(k_{T}=\sqrt{3mk_{B}T/2\hbar^{2}}\) is the thermal momentum. As a result, only a small fraction of relative momenta contributes to the collision process. Following the procedure similar to the two-dimensional case in the main text, we find the expression for the elastic collision rate coefficient \[\langle K_{el}\rangle^{n}=\frac{96\pi^{3/2}\hbar}{mk_{e}}\bigg{(}\frac{k_{r}} {k_{T}}\bigg{)}^{3}e^{-\left(k_{r}^{2}/k_{T}^{2}\right)}. \tag{4}\] Thermalization rates can be obtained from the above equation as \[\Gamma_{th}^{n}=\frac{1}{\alpha}\langle n\rangle\times\langle K_{el}\rangle^{ n}. \tag{5}\] The mean density for a three-dimensional harmonically trapped Fermi gas at temperature \(T\) in the Boltzmann regime is given by \(\langle n\rangle=\frac{1}{48}\left(\frac{mk_{B}}{\hbar^{2}\pi}\right)^{3/2} \frac{T_{T}^{3}}{T^{3/2}}\)[35]. The dashed curves in Fig. 4 show the fitted thermalization rates \(\Gamma_{th}^{n}\) in comparison to the experimental data from Ref. [34] for four different sets of temperatures. During the fitting we kept all scattering parameters fixed and kept the mean density as the only free parameter. The expression 5 successfully reproduces the experimental results in the narrow range where interaction is sufficiently strong. The mean density obtained from fitting differs from the measured density of approximately 50% due to uncertainty in the estimation of atom numbers in the trap and as well as trap conditions such as trapping frequencies. At sufficiently far away from the resonance, interaction is weak (\(V_{p}\to 0\)) and \(k_{T}\ll k_{r}\). In this regime, elastic collision rates can be approximated as [35] \[\Gamma^{f}=\langle n\rangle\times\frac{288\sqrt{\pi}V_{p}^{2}m^{3/2}}{\hbar^{4 }}(k_{B}T)^{\frac{5}{2}}. \tag{6}\] The ratio of elastic scattering rate for an atom with velocity \(v\) compared to average scattering rate \(\Gamma_{f}\)[35]: \[\frac{\Gamma_{p,|v_{1}=v|}}{\Gamma^{f}}=\frac{\int_{0}^{\infty}\int_{0}^{\pi} \left(\tilde{u}^{2}+u^{2}-2\tilde{u}u\cos\theta\right)^{5/2}\sin\theta d\theta u ^{2}e^{-u^{2}}du}{24\sqrt{3}} \tag{7}\] At zero velocity one can substitute \(v=0\) (\(\tilde{u}=0\)) in Eq. 7 and in Eq. 
13, which yields the ratio \((\gamma_{p,v}/\gamma^{f})/(\Gamma_{p,v}/\Gamma^{f})=\sqrt{2}\).

## Acknowledgement

We acknowledge fruitful discussions with Yair Margalit.
2305.10306
UniEX: An Effective and Efficient Framework for Unified Information Extraction via a Span-extractive Perspective
We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis. Our approach converts text-based IE tasks into a token-pair problem, which uniformly disassembles all extraction targets into joint span detection, classification and association problems within a unified extractive framework, namely UniEX. UniEX can synchronously encode schema-based prompts and textual information, and collaboratively learn generalized knowledge from pre-defined information using auto-encoder language models. We develop a triaffine attention mechanism to integrate heterogeneous factors including tasks, labels and inside tokens, and obtain the extraction targets via a scoring matrix. Experiment results show that UniEX can outperform generative universal IE models in terms of performance and inference speed on $14$ benchmark IE datasets under the supervised setting. State-of-the-art performance in low-resource scenarios also verifies the transferability and effectiveness of UniEX.
Ping Yang, Junyu Lu, Ruyi Gan, Junjie Wang, Yuxiang Zhang, Jiaxing Zhang, Pingjian Zhang
2023-05-17T15:44:12Z
http://arxiv.org/abs/2305.10306v3
# UniEX: An Effective and Efficient Framework for Unified Information Extraction via a Span-extractive Perspective

###### Abstract

We propose a new paradigm for universal information extraction (IE) that is compatible with any schema format and applicable to a list of IE tasks, such as named entity recognition, relation extraction, event extraction and sentiment analysis. Our approach converts text-based IE tasks into a token-pair problem, which uniformly disassembles all extraction targets into joint span detection, classification and association problems within a unified extractive framework, namely UniEX. UniEX can synchronously encode schema-based prompts and textual information, and collaboratively learn generalized knowledge from pre-defined information using auto-encoder language models. We develop a triaffine attention mechanism to integrate heterogeneous factors including tasks, labels and inside tokens, and obtain the extraction targets via a scoring matrix. Experiment results show that UniEX can outperform generative universal IE models in terms of performance and inference speed on \(14\) benchmark IE datasets under the supervised setting. State-of-the-art performance in low-resource scenarios also verifies the transferability and effectiveness of UniEX.

## 1 Introduction

Information extraction (IE) aims at automatically extracting structured information from unstructured textual sources, covering a wide range of subtasks such as named entity recognition, relation extraction, semantic role labeling, and sentiment analysis Muslea et al. (1999); Grishman (2019). However, the variety of subtasks builds isolation zones between one another, with each subtask forming its own dedicated models. Fig 1 (a) shows that popular IE approaches handle structured extraction by adding task-specific layers on top of pre-trained language models (LMs) and subsequently fine-tuning the conjoined model Lample et al. (2016); Luo et al. (2020); Wei et al. (2020); Ye et al. (2022). These isolated architectures prevent enhancements from one task from being applied to another, hinder the effective sharing of latent semantics such as label names, and suffer from inductive bias in transfer learning Paolini et al. (2020). With powerful capabilities in knowledge sharing and semantic generalization, large-scale LMs bring the opportunity to handle multiple IE tasks using a single framework. As shown in Fig 1 (b), by developing sophisticated schema-based prompts and structural generation specifications, IE tasks can be transformed into text-to-text and text-to-structure formats via large-scale generative LMs Dong et al. (2019); Paolini et al. (2020); Lu et al. (2022) such as T5 Raffel et al. (2020). Moreover, these universal IE frameworks can learn general knowledge from multi-source prompts, which is beneficial for perceiving unseen content in low-resource scenarios.

Figure 1: (a) Task-specific IE methods: isolated structures and schemas. (b) Typical generative universal IE: unified modeling via text or structure generation. (c) Our extractive universal IE: unified modeling via a triaffine attention mechanism and auto-encoder LMs.

Despite their success, these generative frameworks suffer from their inherent problems, which limit their potential and performance in universal modeling.
Firstly, the schema-based prompt and contextual information are synthetically encoded for generating the target structure, which is not conducive to directly leveraging the position information among different tokens. Secondly, the generative architecture utilizes the token-wise decoder to obtain the target structure, which is extremely time-consuming. The aforementioned issues prompt us to rethink the foundation of IE tasks. Fundamentally, we discover that the extraction targets of different IE tasks involve the determination of semantic roles and semantic types, both of which can be converted into span formats by the correlation of the inside tokens in the passage. For instance, an entity type is the boundary detection and label classification of a semantic role, while a relation type can be regarded as the semantic association between specific semantic roles. From this perspective, the IE tasks can be decoded using a span-extractive framework, which can be uniformly decomposed as several atomic operations: i) Span Detection, which locates the boundaries of the mentioned semantic roles; ii) Span Classification, which recognizes the semantic types of the semantic roles; iii) Span Association, which establishes and measures the correlation between semantic roles to determine semantic types. According to the above observation, we propose a new paradigm for universal IE, called **U**nified **E**xtraction model (UniEX) as Figure 1 (c). Specifically, we first introduce a rule-based transformation to bridge various extraction targets and unified input formats, which leverages task-specific labels with identifiers as the schema-based prompt to learn general IE knowledge. Then, recent works Liu et al. (2019); Yang et al. (2022) state that the auto-encoder LMs with bidirectional context representations are more suitable for natural language understanding. Therefore, We employ BERT-like LMs to construct an extractive architecture for underlying semantic encoding. Finally, inspired by the successful application of span-decoder and biaffine network to decode entity and relation with a scoring matrix Yu et al. (2020); Li et al. (2020); Yuan et al. (2022), we introduce a triaffine attention mechanism for structural decoding, which jointly considers high-order interactions among multiple factors, including tasks, labels and inside tokens. Each triaffine scoring matrix is assigned to a demand-specific prompt for obtaining span-extractive objectives. Through extensive experiments on several challenging benchmarks of \(4\) main IE tasks (entity/relation/event/sentiment extraction), we demonstrate that compared with the state-of-the-art universal IE models and task-specific low-resource approaches, our UniEX achieves a substantial improvement in performance and efficiency with supervised, few-shot and zero-shot settings. Our main contributions are summarized as: * We develop an efficient and effective universal IE paradigm by converting all IE tasks into joint span classification, detection and association problem. * We introduce UniEX, a new unified extractive framework that utilizes the extractive structures to encode the underlying information and control the schema-based span decoding via the triaffine attention mechanism. * We apply our approach in low-resource scenarios, and significant performance improvements suggest that our approach is potential for attaching label information to generalized objects and transfer learning. Our code will be made publicly available. 
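To make the decomposition above concrete, here is a small, purely illustrative Python sketch of how a single relation-extraction target can be written down as span detection, classification and association records. The sentence, character offsets and label names are hypothetical illustrations and do not come from the UniEX implementation.

```python
# Illustrative only: one relation-extraction example rewritten as the three span
# operations described above. Offsets and label names are hypothetical.
text = "Betsy Ross lived in Philadelphia"

# Span detection: boundaries (start, end) of the mentioned semantic roles.
detected = [(0, 10), (20, 32)]                      # "Betsy Ross", "Philadelphia"

# Span classification: attach a semantic type to each detected span.
classified = {(0, 10): "Person", (20, 32): "Location"}

# Span association: correlate two roles via their interleaved (s_i, s_j) and
# (e_i, e_j) positions and label the pair with a relation type.
associated = {"Live in": [((0, 20), (10, 32))]}

def to_triples(text, associated):
    """Flatten the association records back into (head, relation, tail) triples."""
    triples = []
    for relation, pairs in associated.items():
        for (s_i, s_j), (e_i, e_j) in pairs:
            triples.append((text[s_i:e_i], relation, text[s_j:e_j]))
    return triples

print(detected)
print(classified)
print(to_triples(text, associated))   # [('Betsy Ross', 'Live in', 'Philadelphia')]
```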
## 2 Related Work Unified NLP Task FormatsSince the prompt-tuning can improve the ability of language models to learn common knowledge and fix the gap across different NLP tasks, recent studies show the necessity of unifying all NLP tasks in the format of a natural language response to natural language input Raffel et al. (2020); Sanh et al. (2022); Wei et al. (2021). Previous unified frameworks usually cast parts of text problems as question answering McCann et al. (2018) or span extraction Keskar et al. (2019) tasks. TANL Paolini et al. (2020) frames the structured prediction tasks as a translation task between augmented natural languages. By developing a text-to-text architecture, T5 Raffel et al. (2020) makes prompts to effectively distinguish different tasks and provide prior knowledge for multitask learning. UIE Lu et al. (2022) uniformly models IE tasks with a text-to-structure framework, which encodes different extraction structures via a structured extraction language, adaptively generates varying targets via a structural schema instructor. Although effective, such methods focus on generative styles and thus cannot be adapted to the knowledge selection for vast label-based models. It motivates us to design an efficient and effective universal IE method, where we develop unified Extraction (EX) formats and triaffine attention mechanism. Label InformationLabel semantics is an important information source, which carries out the related meaning induced from the data Hou et al. (2020); Ma et al. (2022); Mueller et al. (2022). The L-TapNet Hou et al. (2020) introduces the collapsed dependency transfer mechanism to leverage the semantics of label names for few-shot tagging tasks. LSAP Mueller et al. (2022) improves the generalization and data efficiency of few-shot text classification by incorporating label semantics into the pre-training and fine-tuning phases of generative LMs. Together, these successful employments of label knowledge in low-resource setting motivates us to introduce label semantics into our unified inputs to handle few-shot and zero-shot scenarios. ## 3 Approaches Generally, there are two main challenges in universally modeling different IE tasks via the extractive architecture. Firstly, IE tasks are usually demand-driven, indicating that each pre-defined schema should correspond to the extraction of specific structural information. Secondly, due to the diversity of IE tasks, we need to resolve appropriate structural formats from the output sequence to accommodate different target structures, such as entity, relation and event. In this section, we outline how the UniEX exploits a shared underlying semantic encoder to learn the prompt and text knowledge jointly, and conduct various IE tasks in a unified text-to-structure architecture via the triaffine attention mechanism. ### The UniEX Framework #### 3.1.1 Unified Input Formally, given the task-specific pre-defined schema and texts, the universal IE model needs to adaptively capture the corresponding structural information from the text indicated by the task-relevant information. To achieve this, we formulate a unified input format consisting of task-relevant schema and text, as shown in Figure 2. To promote the sharing of generalized knowledge across different IE tasks, we choose to simply use the task-based and label-based schemas as prompt rather than elaborate questions, fill-in blanks or structural indicators. 
To achieve proper prompt representation, we introduce several special tokens [D-TOK], [C-TOK] and [A-TOK] as identifiers, uniformly replacing the corresponding schema representations in the input sentence. Here, [D-TOK] inherits the ability of [CLS] to capture the global semantic information. [C-TOK] and [A-TOK] inherit the ability of [SEP], thus remaining to use token representation to symbolize the connotation of subsequent schemas. Consider an input set denoted as \((s,x)\), includes the following: i) task-based schema \(s_{d}\) for span detection, ii) label-based schemas \(s_{c}\) for span classification and \(s_{a}\) for span association, iii) one passage \(x=\{x_{1},\ldots,x_{N_{x}}\}\). The input sentence with \(N_{s}=N_{sd}+N_{sc}+N_{sa}\) schemas and \(N_{x}\) inside tokens can be denoted as: \[\begin{split} x_{inp}=&\left\{\left[\left.\mathrm{ D\text{-}TOK}\right|^{i}s_{d}^{i}\right\}_{i=1}^{N_{sd}}\right.\left\{\left[ \left.\mathrm{C\text{-}TOK}\right|^{i}s_{c}^{i}\right\}_{i=1}^{N_{se}}\right.\\ &\left.\left\{\left[\left.\mathrm{A\text{-}TOK}\right|^{i}s_{a} ^{i}\right\}_{i=1}^{N_{sa}}\right.\left.\mathrm{SEP}\right]x\left.\mathrm{ SEP}\right]\text{.}\end{split} \tag{1}\] #### 3.1.2 Backbone Network In our UniEX framework, we employ the BERT-like LMs as the extractive backbone, such as RoBERTa Liu et al. (2019) and ALBERT Lan et al. (2020), to integrate the bidirectional modeled input \(x_{inp}\). Note that the unified input contains multiple labels, resulting in undesired mutual influence across different labels and leading to a misunderstanding of the correspondence between the label and its structural format during the decoding phase. Meanwhile, in some tasks, the large number of labels allows schemas to take up excessive locations, squeezing the space for text. Referring to the embedding methods in the UniMC Yang et al. (2022), we address these issues from several perspectives, including position id and attention mask. Firstly, to avoid the information interference caused by the mutual interaction within label-based schemas, we constantly update the position id \(pos\) to tell apart intra-information in the label. In this way, the position information of label-relevant tokens is coequally treated based on their position embedding, and the refreshed location information for the first token of each label-based schema avoids the natural increase of the location id. Then, as shown in Figure 3, due to the detailed correlation among schema-based prompts in the IE tasks, we further introduce a schema-based attention mask matrix \(M_{mask}\) in the self-attention calculation to control the flow of labels, ensuring that unrelated labels are invisible to each other. In particular, different entity, relation and event types are invisible to each other, while relation and event types can contact their bound entity types. Furthermore, we take the encoded hidden vector from the last Transformer-based layer, where we combine the special tokens part as the schema representations \(H_{s}\in\mathbb{R}^{N_{s}\times d}\) and the passage tokens part as the text representations \(H_{x}\in\mathbb{R}^{N_{x}\times d}\) with hidden size \(d\). 
\[H_{s},H_{x}=\text{Encoder}\left(x_{inp},pos,M_{\text{mask}}\right) \tag{2}\] #### 3.1.3 Tiaffine Attention for Span Representation After obtaining the schema representations and text representations from the auto-encoder LM, the following challenge is _how to construct a unified decoding format that is compatible with different IE structures, with the goal of adaptively exploiting schemas to control various extraction targets._ Take the example in Figure 4, for the event extraction system, we locate the start and end indices of the words boundary "Dariues", "Ferguson" and "injure" as the semantic roles, categorized as the _Agent_, _Victim_ and _Trigger_ semantic types (entity/trigger) respectively, and collectively to the _Injure_ semantic type (event). For the relation extraction system, we associate the semantic roles "Betsy Ross" and "Philadelphia" by attaching their intersecting information to the _Live in_ semantic type (relation). In conjunction with the discussions in the Introduction, we consider two elements for universally modeling IE tasks as joint span detection, classification and association: I) Different extraction targets are presented in the form of span, relying on unified information carriers to accommodate various semantic roles and semantic types. II) The span-extractive architecture is necessary for establishing schema-to-text information interaction, which can adaptively extract schema-related semantic information from text. For the first proposal, we introduce two information carriers for decoding heterogeneous IE structures in a unified span format: 1. **Structural Table** indicates a rank-2 scoring matrix corresponding to a particular schema, which accommodates the semantic information required Figure 3: Schema-based Attention Mask Matrix of the relation extraction task with triplet type \((e^{1},r^{1},e^{2})\) and \((e^{1},r^{2},e^{3})\). The relation and entity types are internally invisible, whereas the paired relation and entity types can attend to each other. Figure 2: The overall architecture of UniEX. The sample text comes from CoNLL04 (Roth and Yih, 2004). for span-extractive parsing. 2. **Spotting Designator** indicates the location of spans in the preceding structural table, which represent extraction targets corresponding to the particular schema. For the second proposal, we attempt to explore the internal interaction of the inside tokens by converting the text representation into span representation. Then, we apply two separate FFNs to create different representations (\(H_{x}^{s}\)\(/\)\(H_{x}^{e}\)) for the start/end positions of the inside tokens. To further interact such multiple heterogeneous factors simultaneously, we define the deep triaffine transformation with weighted matrix \(\mathcal{W}\in\mathbb{R}^{d\times d\times d}\), which apply the triaffine attention to aggregate the schema-wise span representations by considering schema as queries as well as start/end of the inside tokens as keys and values. In this process, the triaffine transformation injects each schema information into the span representations and resolves the corresponding extraction targets. 
It creates a \(N_{s}\times N_{x}\times N_{x}\) scoring tensor \(S\) by calculating continuous matrix multiplication as following: \[\begin{split} H_{x}^{s}&=\text{FFN}_{s}\left(H_{x} \right),\\ H_{x}^{e}&=\text{FFN}_{e}\left(H_{x}\right),\\ S&=\sigma(\mathcal{W}\times_{1}H_{s}\times_{2}H_{x} ^{s}\times_{3}H_{x}^{e}),\end{split} \tag{3}\] where \(\times_{k}\) is the matrix multiplication between input tensor and dimension-\(k\) of \(\mathcal{W}\). \(\sigma(*)\) denotes the Sigmoid activation function. At this point, the tensor \(S\) provides a mapping score from the schema to internal spans of the text, where each rank-2 scoring matrix corresponding to a specific schema is the structural table. For the \(r\)-th structural table, the affine score of each span \((p,q)\) that starts with \(p\) and ends with \(q\) can be denoted as \(S_{r,p,q}\in[0,1]\), while the affine score of a valid span in the structural table is the spotting designator. We divide all \(N_{s}\) structural tables into three parts according to the distribution of the schemas, among them, \(N_{sd}\) for span detection, \(N_{sc}\) for span classification, and \(N_{sa}\) for span association. For different schemas, we develop their spotting designators by following strategies: **Span Detection**: In particular, we usually use the structural table derived from the task-based schema representation for span detection, which can be obtained from the hidden state of the special token [CLS]. Since the [CLS] token is mutually visible to other schemas, the task-based schema representation can capture the span-related semantic information of the semantic roles from the task and label names. The spotting designators identify the start and end indices of the \(i\)-th semantic roles as (\(s_{i},e_{i}\)) using the axes. **Span Classification**: The label-based schema representations for entity/argument/trigger/event types are used for span classification. The spotting designators are identical with the span positions of the semantic roles, indicating that the semantic type of the \(i\)-th span can be identified by attaching to the (\(s_{i},e_{i}\)) position in the corresponding structural table. **Span Association**: The label-based schema representations for relation/sentiment types are used for span association. In this process, we model the potentially related semantic roles and correlate them to corresponding semantic types. The spotting designators locate at two interleaved positions associated with the semantic roles of the semantic type, that is, for the \(i\)-th and \(j\)-th spans, the extraction target is transformed to the identification of the (\(s_{i},s_{j}\)) and (\(e_{i},e_{j}\)) positions in the corresponding structural table. Note that all span values in the structural table for label-based schemas are masked except for the spotting designators, because we only need to observe the semantic types and semantic association among the detected spans. Specifically, the spotting designators for span detection are the spans with \(q\geq p\), and the spotting designators for span classification and association are defined by the position consistency and interleaving of valid spans with \(S_{r,p,q}=1\) in span detection. Figure 4: Uniformly modeling different extraction targets as joint span detection, classification and association with sampling from selected datasets. 
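The core of Eq. (3) is a single three-way contraction between the schema vectors and the start/end token projections. The following PyTorch sketch shows one way this scoring, together with a binary cross-entropy objective over the resulting tensor (as described in the next subsection), could be written with `einsum`. The class name, toy dimensions, initialization, and the omission of spotting-designator masking are our own simplifications, not the released UniEX code.

```python
import torch
import torch.nn as nn

class TriaffineScorer(nn.Module):
    """Minimal sketch of Eq. (3): scores every (schema, start-token, end-token) triple."""
    def __init__(self, hidden: int):
        super().__init__()
        self.ffn_start = nn.Linear(hidden, hidden)   # FFN_s in Eq. (3)
        self.ffn_end   = nn.Linear(hidden, hidden)   # FFN_e in Eq. (3)
        self.W = nn.Parameter(torch.randn(hidden, hidden, hidden) * 0.01)  # rank-3 weight

    def forward(self, H_s, H_x):
        # H_s: (B, N_s, d) schema representations; H_x: (B, N_x, d) token representations
        Hx_start = self.ffn_start(H_x)               # (B, N_x, d)
        Hx_end   = self.ffn_end(H_x)                 # (B, N_x, d)
        # S[b, r, p, q] = sigmoid( sum_{ijk} W[i,j,k] H_s[b,r,i] Hx_start[b,p,j] Hx_end[b,q,k] )
        scores = torch.einsum('ijk,bri,bpj,bqk->brpq', self.W, H_s, Hx_start, Hx_end)
        return torch.sigmoid(scores)                 # (B, N_s, N_x, N_x), values in [0, 1]

# Toy usage with random features; in the real model H_s and H_x come from the
# BERT-like encoder run with the schema-based attention mask.
B, N_s, N_x, d = 2, 4, 12, 32
scorer = TriaffineScorer(d)
S = scorer(torch.randn(B, N_s, d), torch.randn(B, N_x, d))

# A target tensor Y of the same shape reduces the training objective to element-wise
# BCE; masking of non-designator positions is omitted in this sketch.
Y = torch.randint(0, 2, S.shape).float()
loss = nn.functional.binary_cross_entropy(S, Y)
print(S.shape, float(loss))
```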
### EX Training Procedure Given the input sentence \(x_{inp}\), We uniformly reformat different output targets as a rank-3 matrix \(Y\), sharing the same spotting designators as the triaffine scoring matrix. Similarly, we denote the value of each valid span as \(Y_{r,p,q}\in\{0,1\}\), with \(Y_{r,p,q}=1\) denoting the desirable span for a ground-truth and \(Y_{r,p,q}=0\) denoting the meaningless span for semantic role or semantic type. Hence it is a binary classification problem and we optimize our models with binary cross-entropy: \[\mathrm{BCE}(y,\hat{y})=-(y\cdot\log(\hat{y})+(1-y)\cdot\log(1-\hat{y})), \tag{4}\] \[\mathcal{L}=\sum_{r=1}^{N_{x}}\sum_{p=1}^{N_{x}}\sum_{q=1}^{N_{x}}\mathrm{BCE }\left(Y_{r,p,q},S_{r,p,q}\right). \tag{5}\] ## 4 Experiments To verify the effectiveness of our UniEX, we conduct extensive experiments on different IE tasks with supervised (high-resource), few-shot and zero-shot (low-resource) scenarios. ### Experimental Setup For the supervised setting, we follow the preparation in TANL Paolini et al. (2020) and UIE Lu et al. (2022) to collect 14 publicly available IE benchmark datasets and cluster the well-representative IE tasks into 4 groups, including entity, relation, event and structured sentiment extraction. In particular, for each group, we design a corresponding conversion regulation to translate raw data into the unified EX format. Then, for the few-shot setting, we adopt the popular datasets FewNERD Ding et al. (2021) and Cross-Dataset Hou et al. (2020) in few-shot entity extraction and domain partition as Ma et al. (2022). For the zero-shot setting, we use the common zero-shot relation extraction datasets WikiZSL Chen and Li (2021) and FewRel Han et al. (2018) and follow the same process of data and label splitting as Chia et al. (2022). Following the same evaluation metrics as all previous methods, we use span-based offset Micro-F1 with strict match criteria as the primary metric for performance comparison. Please refer to Appendix A for more details on dataset descriptions, unified EX input formats, metrics and training implementation. ### Experiments on Supervised Settings In our experiment, under the high-resource scenario, we compare our approach with the state-of-the-art generative universal IE architectures that provide a universal backbone for IE tasks based on T5 Raffel et al. (2020), including TANL Paolini et al. (2020) and UIE Lu et al. (2022). For a fair comparison, We only consider results without exploiting large-scale contexts and external knowledge beyond the dataset-specific information, and present the average outcomes if the baseline is conducted in multiple runs. 
The main results of UniEX and other baselines on 14 IE datasets are shown in \begin{table} \begin{tabular}{c c c c|c c|c c} \hline \hline \multirow{2}{*}{**Task**} & \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Domain**} & \multirow{2}{*}{**Metric**} & \multicolumn{2}{c|}{**TANL**} & \multicolumn{2}{c|}{**UniEX**} & \multicolumn{2}{c}{**UIE**} & \multicolumn{1}{c}{**UniEX**} \\ & & & & **220M** & **132M** & **770M** & **372M** \\ \hline \multirow{3}{*}{Entity Extraction} & ACE04 & News, Speech & Entity F1 & - & - & 86.52 & **87.12** \\ & ACE05-Ent & News, Speech & Entity F1 & 84.90 & **85.96** & 85.52 & **87.02** \\ & CoNLL03 & News & Entity F1 & 91.70 & 92.13 & 92.17 & **92.65** \\ & GENIA & Biology & Entity F1 & 76.40 & **76.69** & - & - \\ \hline \multirow{3}{*}{Relation Extraction} & ACE05-Rel & News, Speech & Relation Strict F1 & **63.70** & 63.64 & 64.68 & **66.06** \\ & CoNLL04 & News & Relation Strict F1 & 71.40 & **71.79** & 73.07 & **73.40** \\ & SciERC & Scientific & Relation Strict F1 & - & - & 33.36 & **38.00** \\ & ADE & Medicine & Relation Strict F1 & 80.60 & 83.81 & - & - \\ \hline \multirow{3}{*}{Event Extraction} & ACE05-Evt & News, Speech & \begin{tabular}{c} Event Trigger F1 \\ Event Argument F1 \\ Event Trigger F1 \\ Event Argument F1 \\ Event Argument F1 \\ \end{tabular} & 68.40 & **70.86** & 72.63 & **74.08** \\ & CASEI & News, Speech & Event Argument F1 & 47.60 & **50.67** & **54.67** & 53.92 \\ & CASIE & Cybersecurity & \begin{tabular}{c} Event Trigger F1 \\ Event Argument F1 \\ \end{tabular} & - & - & 68.98 & **71.46** \\ \hline \multirow{3}{*}{Sentiment Extraction} & 14-res & Review & Sentiment Triplet F1 & - & 73.78 & **74.77** \\ & 14-lap & Review & Sentiment Triplet F1 & - & 63.15 & **65.23** \\ \cline{1-1} & 15-res & Review & Sentiment Triplet F1 & - & 66.10 & **68.58** \\ \cline{1-1} & 16-res & Review & Sentiment Triplet F1 & - & - & 73.87 & **76.02** \\ \hline \hline \end{tabular} \end{table} Table 1: Overall results of universal IE approaches on different datasets for entity/relation/event/sentiment extraction tasks. **Base** refers to TANL and UniEX respectively using T5-base and RoBERTa-base as the backbone. **Large** refers to UIE and UniEX respectively using T5-large and RoBERTa-large as the backbone. Table 1. We can observe that: 1) By modeling IE as joint span detection, classification and association, and encoding the schema-based prompt and input texts with the triaffine attention mechanism, UniEX provides an effective universal extractive backbone for all IE tasks. The UniEX outperforms the universal IE models with approximate backbone sizes, achieving new state-of-the-art performance on almost all tasks and datasets. 2) The introduction of label-based schema facilitates the model learning task-relevant knowledge, while the triaffine scoring matrix establishes the correspondence between each schema and extraction targets. Obviously, the UniEX can better capture and share label semantics than using generative structures to encode underlying information. Meanwhile, triaffine transformation is a unified and cross-task adaptive operation, precisely controlling where to detect and which to associate in all IE tasks. Compared with the TANL and UIE, our approach achieves significant performance improvement on most datasets, with nearly \(1.36\%\) and \(1.52\%\) F1 on average, respectively. 
### Experiments on Low-resource Scenarios To verify the generalization and transferability of UniEX in low-resource scenarios, we evaluate models under few-shot and zero-shot settings, respectively. In order to reduce the influence of noise caused by random sampling on the experiment results, we repeat the data/label selection processes for five different random seeds and report the averaged experiment results as previous works Hou et al. (2020); Chia et al. (2022). We use the BERTbase Devlin et al. (2019) as the UniEX backbone to align with other low-resource results. Firstly, we compare the UniEX with the competitive few-shot entity extraction models. For FewNER, we compare the proposed approach to DecomMeta Ma et al. (2022), ESD Wang et al. (2022), and methods from Ding et al. (2021), e.g., ProtoBERT, NNShot. For Cross-Dataset, we compare the UniEX to DecomMeta Ma et al. (2022) and baselines reported by Hou et al. (2020), e.g., TransferBERT, Matching Network, ProtoBERT and L-TapNet+CDT. Table 2 and 3 illustrates the main results on FewNER and Cross-Dataset of our approach alongside those reported by previous methods. It can be seen that UniEX achieves the best performance under different type granularity and domain divisions, and outperforms the prior methods with a large margin. Compare with DecomMeta on Cross-Dataset, UniEX achieves a performance improvement up to 6.94% and 5.63% F1 scores on average in 1-shot and 5-shot, which demonstrates the effectiveness of our approach in learning general IE knowledge. It indicates that even without pretraining on large-scale corpus, our approach can still sufficiently excavate the semantic information \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{**Intra**} & \multicolumn{4}{c}{**Inter**} \\ \cline{2-9} & \multicolumn{2}{c}{**1\(\sim\)2-shot**} & \multicolumn{2}{c}{**5\(\sim\)10-shot**} & \multicolumn{2}{c}{**1\(\sim\)2-shot**} & \multicolumn{2}{c}{**5\(\sim\)10-shot**} \\ \cline{2-9} & 5 way & 10 way & 5 way & 10 way & 5 way & 10 way & 5 way & 10 way \\ \hline ProtoBERT\({}^{\ddagger}\) & 23.45\(\pm\)0.92 & 19.76\(\pm\)0.59 & 41.93\(\pm\)0.55 & 34.61\(\pm\)0.59 & 44.44\(\pm\)0.11 & 39.09\(\pm\)0.87 & 58.80\(\pm\)1.42 & 53.97\(\pm\)0.38 \\ NNShot\({}^{\dagger}\) & 31.01\(\pm\)1.21 & 21.88\(\pm\)0.23 & 35.74\(\pm\)2.36 & 27.67\(\pm\)1.06 & 54.29\(\pm\)0.40 & 46.98\(\pm\)1.96 & 50.56\(\pm\)3.33 & 50.00\(\pm\)0.36 \\ ESD & 41.44\(\pm\)1.16 & 32.29\(\pm\)1.10 & 50.68\(\pm\)0.94 & 42.92\(\pm\)0.75 & 66.46\(\pm\)0.49 & 59.95\(\pm\)0.69 & **74.14\(\pm\)0.80** & 67.91\(\pm\)1.41 \\ DecomMeta & 52.04\(\pm\)0.44 & 43.50\(\pm\)0.59 & 63.23\(\pm\)0.45 & **56.84\(\pm\)0.14** & 68.77\(\pm\)0.24 & 63.26\(\pm\)0.40 & 71.62\(\pm\)0.16 & 68.32\(\pm\)0.10 \\ **UniEX** & **53.92\(\pm\)0.39** & **45.67\(\pm\)0.53** & **63.26\(\pm\)0.14** & 56.65\(\pm\)0.27 & **69.37\(\pm\)0.19** & **64.53\(\pm\)0.05** & 73.79\(\pm\)0.32 & **69.63\(\pm\)0.45** \\ \hline \hline \end{tabular} \end{table} Table 2: F1 scores with standard deviations on FewNERD. \({}^{\dagger}\) denotes the results reported from Ding et al. (2021). 
\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{4}{c}{**1-shot**} & \multicolumn{4}{c}{**5-shot**} \\ \cline{2-9} & News & Wiki & Social & Mixed & News & Wiki & Social & Mixed \\ \hline TransferBERT\({}^{\ddagger}\) & 4.75\(\pm\)1.42 & 0.57\(\pm\)0.32 & 2.71\(\pm\)0.72 & 3.46\(\pm\)0.54 & 15.36\(\pm\)2.81 & 3.62\(\pm\)0.57 & 11.08\(\pm\)0.57 & 35.49\(\pm\)7.60 \\ Matching Network\({}^{\ddagger}\) & 19.50\(\pm\)0.35 & 4.73\(\pm\)0.16 & 17.23\(\pm\)2.75 & 15.06\(\pm\)1.61 & 19.85\(\pm\)0.74 & 5.58\(\pm\)0.23 & 6.61\(\pm\)1.75 & 8.08\(\pm\)0.47 \\ ProtoBERT\({}^{\ddagger}\) & 32.49\(\pm\)2.01 & 3.89\(\pm\)0.24 & 10.68\(\pm\)1.40 & 6.67\(\pm\)0.46 & 50.06\(\pm\)1.57 & 5.94\(\pm\)0.44 & 17.26\(\pm\)2.65 & 13.59\(\pm\)1.61 \\ L-TapNet+CDT\({}^{\ddagger}\) & 44.30\(\pm\)3.15 & 12.04\(\pm\)0.65 & 20.80\(\pm\)1.06 & 15.17\(\pm\)1.25 & 45.35\(\pm\)2.67 & 11.65\(\pm\)2.34 & 23.30\(\pm\)2.80 & 20.95\(\pm\)2.81 \\ DecomMeta & 46.09\(\pm\)0.44 & 17.54\(\pm\)0.98 & 25.14\(\pm\)0.24 & 34.13\(\pm\)0.92 & 58.18\(\pm\)0.87 & **31.36\(\pm\)0.91** & 31.02\(\pm\)1.28 & 45.55\(\pm\)0.90 \\ **UniEX** & **58.51\(\pm\)0.14** & **18.20\(\pm\)0.45** & **34.67\(\pm\)0.25** & **39.28\(\pm\)0.55** & **66.08\(\pm\)0.42** & 29.68\(\pm\)0.32 & **38.64\(\pm\)1.29** & **54.25\(\pm\)0.35** \\ \hline \hline \end{tabular} \end{table} Table 3: F1 scores with standard deviations on Cross-Dataset. \({}^{\ddagger}\) denotes the results reported from Hou et al. (2020). related with objective entities from label names, which enhances the understanding of task-specific information when data is extremely scarce. Secondly, we compare UniEX with the latest baselines TableSequence Wang and Lu (2020) and RelationPrompt Chia et al. (2022) on zero-shot relation triplet extraction task for Wiki-ZSL and Few-Rel datasets in Table 4. In both single-triplet and multi-triplet evaluation, UniEX consistently outperforms the baseline models in terms of Accuracy and overall F1 score respectively, which demonstrates the ability of our approach to handle unseen labels. Although we observe a lack of advantage in recall score for multi-triplet evaluation, the significant improvement in precision allowed our approach to achieve a balanced precision-recall ratio. The reason for such difference is probably because the directional matching in the triaffine transformation will tend to guide the model to predict more credible targets. ### Ablation Study In this section, we intend to verify the necessity of key components of the UniEX, including the flow controlling and triaffine transformation. Table 5 shows ablation experiment results of UniEX on four downstream tasks. **W/O SAM**: removing the schema-based attention mask matrix that controls the flowing of labels. We find that model performance is almost zero on many tasks, which demonstrates the importance of eliminating intra-information of labels. AMM makes the labels unreachable to each other, effectively avoiding the mutual interference of label semantics. **W/O TriA**: replacing the triaffine transformation with the multi-head selection network, which multiplies the schema and the head-to-tail span of the text respectively, and then replicates and adds them to get the scoring matrix. The significant performance decline demonstrates the important role of triaffine attention mechanism in establishing dense correspondence between schemas and text spans. 
**W/O Label**: replacing the label names with the special token [unused n], which eliminates label semantics while allowing the model to still distinguish between different labels. We find a slight degradation of model performance in small datasets CoNLL03 and 16-res, indicating that the prior knowledge provided by label names can effectively compensate for the deficiency of training data. As the correspondence between schema and extraction targets is not affected, model performance in large datasets tends to stabilize. ### Efficiency Analysis To verify the computation efficiency of our approach on universal IE, we compare inference-speed with UIE Lu et al. (2022) on the four standard datasets mentioned in section 4.4. As shown in Table 6, we can find that since generating the target structure is a token-wise process, the inference-speed of UIE is slow and limited by the length of the target structure. On the contrary, UniEX can decode all the target structures at once from the scoring matrices obtained by triaffine transformation, with an average speedup ratio of 13.3 to UIE. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & **CoNLL03** & **CoNLL04** & **CASIE** & **16-res** \\ & (sent/s) & (sent/s) & (sent/s) & (sent/s) \\ \hline UIE & 2.1(x1.0) & 1.0(x1.0) & 1.1(x1.0) & 1.4(x1.0) \\ **UniEX** & 16.5(x7.9) & 16.6(x16.6) & 14.9(x13.5) & 19.7(x14.1) \\ \hline \hline \end{tabular} \end{table} Table 6: The efficiency comparison of UIE and UniEX with batch_size=1. \((\times k)\) is the relative inference-speed. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Single-Triplet**} & \multicolumn{2}{c}{**Multi-Triplet**} \\ \cline{3-6} & & _Acc._ & _P._ & _R._ & _F1_ \\ \hline \multirow{3}{*}{Wiki-ZSL} & TableSequence & 14.47 & 43.68 & 3.51 & 6.29 \\ & RelationPrompt & 16.64 & 29.11 & **31.00** & 30.01 \\ & UniEX & **26.84** & **58.22** & 25.85 & **34.94** \\ \hline \multirow{3}{*}{FewRel} & TableSequence & 11.82 & 15.23 & 1.91 & 3.40 \\ & RelationPrompt & 22.27 & 20.80 & **24.32** & 22.34 \\ & UniEX & **27.30** & **44.46** & 15.72 & **23.13** \\ \hline \hline \end{tabular} \end{table} Table 4: Result for zero-shot relation triplet extraction under the setting of unseen label set size \(m=5\). We use the Micro-F1, Precision (P.) and Recall (R.) to evaluate the multiple triplet extraction. Evaluating single triplet extraction involves only one possible triplet for each sentence, hence we only use the Accuracy (Acc.) metric. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Dataset** & **CoNLL03** & **CoNLL04** & **CASIE** & **16-res** \\ \hline **F1** & **Ent** & **Rel-S** & **Evt-Tri** & **Evt-Arg** & **Rel-S** \\ \hline W/O SAM & 28.47 & 0 & 4.03 & 0 & 0 \\ W/O TriA & 58.58 & 49.40 & 6.97 & 1.51 & 29.77 \\ W/O Label & 92.59 & 70.94 & 71.18 & 62.29 & 74.64 \\ \hline **UniEX** & 92.65 & 73.40 & 71.46 & 62.91 & 76.02 \\ \hline \hline \end{tabular} \end{table} Table 5: Experiment results of UniEX with different ablation strategies on the test set of four downstream datasets: CoNLL03 (entity), CoNLL04 (relation), CASIE (event) and 16-res (sentiment). Conclusion In this paper, we introduce a new paradigm for universal IE by converting all IE tasks into joint span detection, classification and association problems with a unified extractive framework. 
UniEX collaboratively learns the generalized knowledge from schema-based prompts and controls the correspondence between schema and extraction targets via the triaffine attention mechanism. Experiments on both supervised setting and low-resource scenarios verify the transferability and effectiveness of our approaches. ### Limitations In this paper, our main contribution is an effective and efficient framework for universal IE. We aim to introduce a new unified IE paradigm with extractive structures and triaffine attention mechanism, which can achieve better performance in a variety of tasks and scenarios with more efficient inference-speed. However, it is non-trivial to decide whether a sophisticated and artificial prompt is required for complex datasets and large label sets. In addition, we only compare with limited baselines with specific datasets configurations when analyzing the performance of the UniEX in supervised, few-shot and zero-shot settings. In experiments, we implement only a few comparative experiments between BERT Devlin et al. (2019) and RoBERTa Liu et al. (2019) due to the limit of computational resources. ## Ethical Considerations As an important domain of natural language processing, information extraction is a common technology in our society. It is necessary to discuss the ethical influence when using the extraction models Leidner and Plachouras (2017). In this work, We develop a new universal IE framework, which enhances the generalization ability in various scenarios. As discussed Schramowski et al. (2019, 2022); Blodgett et al. (2020), pre-trained LMs might contain human-made biases, which might be embedded in both the parameters and outputs of the open-source models. In addition, we note the potential abuse of universal IE models, as these models achieve excellent performance in various domains and settings after adapting to pre-training on large-scale IE datasets, which allows the models to be integrated into applications often without justification. We encourage open debating on its utilization, such as the task selection and the deployment, hoping to reduce the chance of any misconduct.
2306.06034
RANS-PINN based Simulation Surrogates for Predicting Turbulent Flows
Physics-informed neural networks (PINNs) provide a framework to build surrogate models for dynamical systems governed by differential equations. During the learning process, PINNs incorporate a physics-based regularization term within the loss function to enhance generalization performance. Since simulating dynamics controlled by partial differential equations (PDEs) can be computationally expensive, PINNs have gained popularity in learning parametric surrogates for fluid flow problems governed by Navier-Stokes equations. In this work, we introduce RANS-PINN, a modified PINN framework, to predict flow fields (i.e., velocity and pressure) in high Reynolds number turbulent flow regimes. To account for the additional complexity introduced by turbulence, RANS-PINN employs a 2-equation eddy viscosity model based on a Reynolds-averaged Navier-Stokes (RANS) formulation. Furthermore, we adopt a novel training approach that ensures effective initialization and balance among the various components of the loss function. The effectiveness of the RANS-PINN framework is then demonstrated using a parametric PINN.
Shinjan Ghosh, Amit Chakraborty, Georgia Olympia Brikis, Biswadip Dey
2023-06-09T16:55:49Z
http://arxiv.org/abs/2306.06034v3
# RANS-PINN based Simulation Surrogates for Predicting Turbulent Flows ###### Abstract Physics-informed neural networks (PINNs) provide a framework to build surrogate models for dynamical systems governed by differential equations. During the learning process, PINNs incorporate a physics-based regularization term within the loss function to enhance generalization performance. Since simulating dynamics controlled by partial differential equations (PDEs) can be computationally expensive, PINNs have gained popularity in learning parametric surrogates for fluid flow problems governed by Navier-Stokes equations. In this work, we introduce RANS-PINN, a modified PINN framework, to predict flow fields (i.e., velocity and pressure) in high Reynolds number turbulent flow regimes. To account for the additional complexity introduced by turbulence, RANS-PINN employs a 2-equation eddy viscosity model based on a Reynolds-averaged Navier-Stokes (RANS) formulation. Furthermore, we adopt a novel training approach that ensures effective initialization and balance among the various components of the loss function. The effectiveness of the RANS-PINN framework is then demonstrated using a parametric PINN. Machine Learning, ICML, ICML ## 1 Introduction The traditional approach to designing complex devices and systems, for example, aerodynamic surfaces and thermal management systems, involves a back-and-forth interplay between exploring the design and operating space and assessing performance through computationally intensive computational fluid dynamics (CFD) simulations. However, the high computational cost associated with high-fidelity CFD solvers like Simcenter Star-CCM+ or Ansys Fluent undeniably curtails the overall scope of the design optimization process, often leading to suboptimal design choices. In this context, neural networks, with their expressiveness to capture pertinent functional relationships between initial/boundary conditions and the solution field of a PDE and the ability to predict simulation outcomes by invoking a single forward pass, offer an excellent tool for building fast and accurate surrogate models for CFD simulations. Such deep learning based approaches can accelerate design evaluations significantly, facilitating the generation of enhanced design choices through fast predictions of simulation outcomes. In recent years, there has been considerable attention given to the use of deep learning methods to expedite CFD simulations and thereby improve engineering design processes (Vinuesa Brunton, 2022; Warey et al., 2020; Zhang et al., 2022). While some approaches use deep learning to accelerate traditional CFD solvers (Hsieh et al., 2019; Kochkov et al., 2021), a certain body of research treats the flow problems as problems defined over a cartesian grid or an irregular mesh and uses techniques involving convolutional or graph neural operators to predict the flow fields (Hennigh, 2017; Jiang et al., 2020; Wang et al., 2020). Alternatively, in another line of work, physics-informed neural networks (PINNs) exploit _automatic differentiation_ and incorporate the underlying PDEs to approximate the solution field (Raissi et al., 2019; White et al., 2019; Nabian and Meidani, 2020; Zhang et al., 2020; Jin et al., 2021). In addition, self-supervised learning methods for solving PDEs with PINNs have also been explored (Dwivedi et al., 2019; Lu et al., 2019; Nabian and Meidani, 2019). 
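As a concrete illustration of the automatic-differentiation idea behind PINNs referenced above, the toy sketch below computes a PDE residual loss for a 1-D steady viscous Burgers equation, \(u u_x - \nu u_{xx} = 0\). It is deliberately minimal and is not the RANS-PINN implementation discussed later; the network size, the toy PDE, and the collocation sampling are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

# A small MLP approximates u(x); autograd supplies the derivatives entering the residual.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
nu = 0.01  # assumed viscosity for the toy problem

def pde_residual(x: torch.Tensor) -> torch.Tensor:
    x = x.requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u * u_x - nu * u_xx            # residual of u u_x - nu u_xx = 0

x_collocation = torch.rand(256, 1)        # interior collocation points
loss_pde = pde_residual(x_collocation).pow(2).mean()
loss_pde.backward()                       # gradients flow back to the network weights
```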
This expanding body of research demonstrates the ability of ML-based approaches to accurately predict simulation outcomes, such as flow and temperature profiles over a spatiotemporal domain, utilizing both mesh-based and mesh-free techniques. Notably, the inclusion of physics-based regularization in these formulations has proven instrumental in enhancing the quality of the results. PINNs combine differential equations, such as compressible and incompressible Navier-Stokes equations, with experimental data or high-fidelity numerical simulations. While their ability to replace existing CFD solvers is a matter of debate, PINNs can accelerate simulations (Kochkov et al., 2021), reconstruct flow domains from limited sensor or experimental data (Wang et al., 2022), and create parametric surrogates for design exploration and optimization (Oldenburg et al., 2022; Sun et al., 2023). However, current PINN methods encounter challenges due to the complex interaction among the individual components of the loss function (both supervised and unsupervised), particularly when dealing with high-dimensional, non-convex PDE-based losses. These challenges become more pronounced as the physics of the problem becomes more intricate, e.g., turbulent flows. RANS, the most commonly used turbulent CFD simulation tool, offers reasonably accurate solutions at a lower computational cost compared to high-fidelity _direct numerical simulation_ (DNS) and _large eddy simulation_ (LES), which require even finer mesh refinement to adequately capture all turbulence scales, further increasing computation time. Since its introduction by Launder & Spalding (1974), the \(k\)-\(\epsilon\) model (\(k\) is the turbulent kinetic energy and \(\epsilon\) is the turbulent dissipation rate) has been established as a preferred model for efficient computation and real-world problems (Yang & Shih, 1993; Scott-Pomerantz, 2004; Ghosh et al., 2022). From this perspective, incorporating RANS-based turbulence modeling can significantly expand the application of PINNs in real-world simulation and design problems. However, using PINNs for RANS-based turbulence modeling is yet to be thoroughly studied (Majchrzak et al., 2023). Previous research by Eivazi et al. (2022) employed RANS within PINNs but utilized a Reynolds-stress formulation instead of a 2-equation model like \(k\)-\(\epsilon\). In contrast, Xu et al. (2021) employed a PINN with a RANS formulation to calculate missing flow components. In this study, we focus on constructing PINN-based surrogate models for turbulent flow problems using a RANS formulation, specifically the \(k\)-\(\epsilon\) model, along with relevant data. We refer to the resulting solution as RANS-PINN and implement it using Nvidia Modulus (22.03) (Mod; Hennigh et al., 2021). The proposed training regime first pre-trains the network using data losses and then introduces the physics losses in a carefully crafted manner. We first assess RANS-PINN on three distinct geometries: a cylinder, an airfoil, and flow over a backward facing step; we then employ it to learn a parametric PINN for flow over a cylinder. This approach improves upon the existing turbulence modeling capabilities of Nvidia Modulus, while also adding to the very few existing studies on RANS-based turbulence modeling using PINNs. ## 2 RANS-PINN ### Governing physics The underlying physics is governed by the continuity equation (to _conserve mass_), the Navier-Stokes equation (to _conserve momentum_), and the standard \(k\)-\(\epsilon\) turbulence model. 
By letting \(u\) and \(p\) denote the flow velocity and pressure, respectively, the continuity and Navier-Stokes equations can be expressed as: \[\begin{split}\text{\bf Cont:}&\quad\nabla\cdot u=0\\ \text{\bf NS:}&\quad\rho(u\cdot\nabla)u+\nabla p-\mu_{eff}\nabla^{2}u=0,\end{split}\] where \(\rho\) is the density of the fluid, \(\nabla\) denotes the vector differential operator, and \(\mu_{eff}:=\mu+\mu_{t}=\mu+0.09k^{2}/\epsilon\) represents the effective viscosity, i.e., the sum of molecular viscosity (\(\mu\)) and turbulent viscosity (\(\mu_{t}\)). In addition, the \(k\)-\(\epsilon\) turbulence model can be expressed as: \[\begin{split}k\text{\bf:}&\quad\nabla\cdot(\rho uk)=\nabla\cdot\left[\left(\mu+\frac{\mu_{t}}{\sigma_{k}}\right)\nabla k\right]+P_{k}-\epsilon\\ \epsilon\text{\bf:}&\quad\nabla\cdot(\rho u\epsilon)=\nabla\cdot\left[\left(\mu+\frac{\mu_{t}}{\sigma_{\epsilon}}\right)\nabla\epsilon\right]+(C_{1}P_{\epsilon}+C_{2}\epsilon)\frac{\epsilon}{k}\end{split}\] where \(C_{1}=1.44\), \(C_{2}=1.92\), \(\sigma_{k}=1\), and \(\sigma_{\epsilon}=1.3\) are empirical model constants. In addition, \(P_{k}\) and \(P_{\epsilon}\) are production terms. The _Reynolds number_ for this system is defined as: \(Re=\rho u_{inlet}L/\mu\), where \(u_{inlet}\) is the inlet velocity and \(L\) is the characteristic length. ### RANS-PINN architecture and training regime The RANS-PINN architecture (Fig. 1) uses Fourier neural operators (Li et al., 2021) with their default hyperparameters used in Modulus (Mod). For each of the individual output variables (i.e., \(u\), \(p\), \(k\), and \(\epsilon\)), we use separate neural networks, all sharing the same input variables consisting of positional coordinates \((x,y)\) and the associated Reynolds number. These networks are connected to the supervised/data loss, as well as the nodes of the PDE loss components. Conventional approaches to training PINNs involve introducing data and PDE losses simultaneously at the start of the training phase, often with equal weight multipliers. However, this often results in noisy training losses, slow convergence, and high validation error. RANS-PINN addresses these challenges by employing a pre-training step that only uses the data-driven supervised loss. During pre-training, each of the individual networks is updated independently using its corresponding data loss. Following pre-training, we introduce the PDE constraints into the loss function. Moreover, to normalize the effect of the individual components of the PDE loss function, we scale them by the inverse of their corresponding residual values. We then use _Adam_ with a decaying step size (with an initial step size of 0.001 and a decay rate of 0.95) until the training loss converges. To address the challenges associated with abrupt changes observed in the turbulence dissipation term \(\epsilon\) near wall and free shear regions, we use a _logarithmic loss function_ for both data and PDE losses associated with \(\epsilon\). Everything else is computed using an _MSE loss function_. The overall loss function can then be expressed as: \[\mathcal{L}=\mathcal{L}_{data}+\mathcal{L}_{BC}+\mathcal{L}_{PDE}, \tag{1}\] where the PDE loss is defined with weights \(\lambda_{i}\)'s as: \[\mathcal{L}_{PDE}=\lambda_{1}\mathcal{L}_{NS}+\lambda_{2}\mathcal{L}_{Cont}+\lambda_{3}\mathcal{L}_{k}+\lambda_{4}\mathcal{L}_{\epsilon}. 
\tag{2}\] ## 3 Results and discussions ### Dataset generation using CFD simulation In this study, we employ Simcenter STAR-CCM+ (_Release 17.02.008_) to simulate turbulent flow scenarios using RANS CFD with the \(k\)-\(\epsilon\) turbulence model. Automatic meshers have been used for each case, with refinement near walls for low wall \(y+\), and wall functions for turbulence quantities. Moreover, we have used wake refinements to simulate flow around the cylinder (Fig. 2) and the airfoil. The data generated from the simulation is then normalized using the non-dimensional version of the underlying dynamics (i.e., continuity, Navier-Stokes, and RANS equations). We bring the range of various variables to a comparable order of magnitude by normalizing the spatial coordinates, the velocity, and the pressure with the characteristic length, the inlet velocity, and the dynamic pressure, respectively. Later, the data is denormalized again before visualization. ### Flow over a cylinder While the primary objective of this work is to construct a parametric PINN capable of accommodating varying Reynolds numbers (\(Re\)), an initial investigation is conducted using single CFD cases (at a fixed \(Re\)) to assess the optimal training regime. Flow over a cylinder is a well-studied problem in CFD, for both laminar and turbulent flows. The cylindrical obstacle causes a stagnation zone, and the flow diverts around the obstacle. As a result, flow separation occurs and vortex shedding can be seen in the wake. However, steady RANS models average out the periodic unsteady behaviour, resulting in the time averaged flow field. In this work, we employ a constant velocity inlet, along Figure 4: Prediction error in (a) _velocity_ and (b) _pressure_. To highlight the prediction error, we use normalized values of the logarithm of difference between the true and the predicted values. Figure 3: Spatial distribution of (a) _velocity magnitude_ and (b) _pressure_ for a \(Re=5600\) flow over the cylinder. Figure 2: Partial view of mesh for flow over a cylinder with refinement at the cylinder surface and wake regions. This mesh is also used for point cloud sampling in PINN training, and takes into account the density variations. Velocity profile shows gradients in regions of refinement. with symmetry planes on the top and bottom walls and a zero pressure outlet. For training, 3000 spatially distributed CFD data points are randomly sampled, with an additional 3000 points dedicated to PDE losses. Fig. 2(a) illustrates the comparison between true and predicted velocity fields, showcasing the aforementioned flow phenomena. The pressure plots (Fig. 2(b)) show a high pressure stagnation region as well as the low pressure flow separation region in both the true and predicted cases. The differences between true and predicted velocities and pressure for the log-loss training case are shown in Fig. 4. Major losses occur around the cylinder walls, which is known to be a challenging region for all turbulence models due to steep gradients. Moreover, the challenges with using only the data loss or the data+PDE loss but without the logarithmic loss function for \(\epsilon\) are highlighted in Fig. 5. These choices for the loss function yield flow fields with discontinuities and noise stemming from the combination of data and physics losses. This is further reflected in the validation error values reported in Table 1. 
In conclusion, the proposed training regime for RANS-PINN exhibits lower validation losses as well as superior predictive performance. Test on other geometries: Flow over a backwards facing step and NACA 2412 airfoil at a single \(Re\) To understand the general efficacy of the proposed training method, two additional geometries were chosen for investigation. The first geometry involves airfoils which represents external flows, where a pressure gradient is established between the top and bottom surfaces due to acceleration of flow over the top surface (seen in darker red zones of velocity in Fig. 5(a) and higher pressure magnitudes in Fig. 5(b)), which causes lift. The second geometry consists of a backwards facing step (Fig. 7), where a separation bubble form due to sudden expansion in the channel. This leads to flow separation and detachment and then re-attachment. Both cases had no-slip walls and constant velocity inlet boundary conditions with a zero pressure exit. Low validation error in Table 2 and visual inspection of Fig. 6 and Fig. 7 show that the flow fields have been successfully predicted. ### Parametric PINN for flow over a cylinder After establishing the training regime with these three flow geometries, we revisit the _flow over a cylinder_ problem for creating a parametric PINN. The parametric PINN can predict outcomes of CFD simulations for unseen flow scenarios, in particular for any given Reynolds number (\(Re\)), which depends on the inlet velocity inlet velocities. We achieve this by including the Reynolds number as an additional input to the individual neural networks. In this study, we ran CFD simulations for six different Reynolds numbers ranging from 2800 to 5600, with uniform spacing between the values. We sampled 3000 spatial data points from each simulation and utilized them along with PDE losses to train the parametric PINN with \(Re\) as the \begin{table} \begin{tabular}{l c c c} \hline \hline Loss Function & x vel & y vel & Pressure \\ \hline Data Only & 0.205 & 0.284 & 0.029 \\ Data+PDE & 0.187 & 0.474 & 0.066 \\ Data+PDE w/ Log-loss & 0.014 & 0.03 & 0.105 \\ \hline \hline \end{tabular} \end{table} Table 1: Validation errors for flow over cylinder \begin{table} \begin{tabular}{l c c c} \hline \hline Case & x vel & y vel & Pressure \\ \hline NACA 2412 & 0.091 & 0.131 & 0.022 \\ Backwards facing step & 0.024 & 0.146 & 0.137 \\ \hline \hline \end{tabular} \end{table} Table 2: Validation errors for NACA airfoil (\(Re=3\times 10^{5}\)) and backward facing step (\(Re=5600\)). Figure 5: Impact of various choices for the loss function. This figure compares the magnitude of the velocity predicted by a PINN trained with (a) only data loss, (b) both data and PDE loss, and (c) both data and PDE loss with a log-loss for \(\epsilon\) against its true value from STAR-CCM+ simulation. Figure 6: Spatial distribution of (a) _velocity magnitude_ and (b) _pressure_ for a \(Re=3\times 10^{5}\) flow around a NACA 2412 airfoil. Figure 7: Spatial distribution of (a) _velocity magnitude_ and (b) _pressure_ for a \(Re=5600\) flow over a backward facing step. underlying parameter. Although each CFD simulation has 61000 mesh data points, we trained the parametric PINN using only 3000 points, resulting in faster convergence. By leveraging the parametric PINN, we can now predict flow fields for any given Reynolds number. 
This is highly beneficial for design optimization and exploration studies, as it eliminates the need for additional CFD data to predict primary flow variables across the entire solution domain. Moreover, compared to the traditional approaches, where each CFD simulation run takes approximately 24 core minutes, the parametric PINN can yield results in a near real-time fashion, significantly accelerating the overall process. Fig. 8a and Fig. 8b show the velocity and pressure distributions for \(Re=3140\). On the other hand, Fig. 9a and Fig. 9b display the velocity and pressure distributions for \(Re=5700\), which falls outside the training range. In each validation case, we examined 61000 mesh points. Table 3 presents the overall error metrics for validation in the case of the parametric PINN. ## 4 Conclusion PINN-based approaches to learning surrogate models for spatiotemporal systems governed by nonlinear PDEs are relatively common in the literature. However, despite playing an instrumental role in many real-world applications, two-equation RANS turbulence models are yet to be integrated into PINN-based approaches. In this work, we adopt a novel training regime to ensure the successful integration of RANS turbulence model physics into PINNs. Once trained with a limited amount of CFD data, RANS-PINN can yield accurate predictions of overall flow fields for a single Reynolds number. Building upon the successful outcomes of these evaluations for three different flow geometries (flow over a cylinder, a backward-facing step, and a NACA 2412 airfoil), we develop a parametric version of the RANS-PINN to predict flow over a cylinder for any given/unforeseen Reynolds numbers. The parametric RANS-PINN, which highlights how whole simulation cases can be inferred without requiring any CFD data from that specific Reynolds number, offers significant potential in solving design exploration and inverse problems for many real world applications. ## Broader impact The current work focuses on turbulent flow problems with two-equation turbulence models. While this perspective is not commonly explored in the PINN literature, these turbulence models hold significant importance in many industrial and academic settings where a lack of computing resources prevents the use of Direct Numerical Simulation (DNS) and Large Eddy Simulation (LES). We can effectively tackle design and inverse problems in many real-world cases by employing a turbulent flow PINN, such as RANS-PINN. The ability to reconstruct a flow field from limited data can help in real-world problems with limited sensor data. Moreover, a parametric PINN trained with minimal CFD data adds significant value to design exploration and optimization by offering a convenient, fast, and computationally efficient means to predict simulation outcomes.
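Looking back at the training objective of Section 2.2, the sketch below shows one way the composite loss of Eqs. (1)-(2) could be assembled. It is a schematic reading of the text, not the Modulus implementation: the exact form of the logarithmic loss for \(\epsilon\) and of the inverse-residual weights is not spelled out above, so the concrete choices here (a squared log of clamped values for the data term, a squared log1p of the absolute residual for the PDE term, externally supplied \(\lambda_i\)) are assumptions.

```python
import torch

def rans_pinn_loss(pred, data, bc_residual, pde_residuals, lambdas):
    """Schematic of Eq. (1)-(2): L = L_data + L_BC + sum_i lambda_i * L_i.
    `pred` and `data` are dicts with keys 'u', 'p', 'k', 'eps';
    `pde_residuals` holds the NS, continuity, k, and eps equation residuals."""
    def mse(r):
        return (r ** 2).mean()

    # MSE data losses for u, p, k; logarithmic data loss for eps (assumed form).
    loss_data = sum(mse(pred[v] - data[v]) for v in ("u", "p", "k"))
    loss_data = loss_data + mse(torch.log(pred["eps"].clamp_min(1e-8))
                                - torch.log(data["eps"].clamp_min(1e-8)))
    loss_bc = mse(bc_residual)
    # Weighted PDE residuals; lambdas could be set to inverse initial residual magnitudes.
    loss_pde = (lambdas[0] * mse(pde_residuals["ns"])
                + lambdas[1] * mse(pde_residuals["cont"])
                + lambdas[2] * mse(pde_residuals["k"])
                + lambdas[3] * mse(torch.log1p(pde_residuals["eps"].abs())))
    return loss_data + loss_bc + loss_pde
```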
2307.11661
Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts
Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on downstream datasets. VLMs are 0-shot adapted to a downstream dataset by designing prompts that are relevant to the dataset. Such prompt engineering makes use of domain expertise and a validation dataset. Meanwhile, recent developments in generative pretrained models like GPT-4 mean they can be used as advanced internet search tools. They can also be manipulated to provide visual information in any structure. In this work, we show that GPT-4 can be used to generate text that is visually descriptive and how this can be used to adapt CLIP to downstream tasks. We show considerable improvements in 0-shot transfer accuracy on specialized fine-grained datasets like EuroSAT (~7%), DTD (~7%), SUN397 (~4.6%), and CUB (~3.3%) when compared to CLIP's default prompt. We also design a simple few-shot adapter that learns to choose the best possible sentences to construct generalizable classifiers that outperform the recently proposed CoCoOP by ~2% on average and by over 4% on 4 specialized fine-grained datasets. The code, prompts, and auxiliary text dataset is available at https://github.com/mayug/VDT-Adapter.
Mayug Maniparambil, Chris Vorster, Derek Molloy, Noel Murphy, Kevin McGuinness, Noel E. O'Connor
2023-07-21T15:49:59Z
http://arxiv.org/abs/2307.11661v2
# Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts ###### Abstract Contrastive pretrained large Vision-Language Models (VLMs) like CLIP have revolutionized visual representation learning by providing good performance on downstream datasets. VLMs are 0-shot adapted to a downstream dataset by designing prompts that are relevant to the dataset. Such prompt engineering makes use of domain expertise and a validation dataset. Meanwhile, recent developments in generative pretrained models like GPT-4 mean they can be used as advanced internet search tools. They can also be manipulated to provide visual information in any structure. In this work, we show that GPT-4 can be used to generate text that is visually descriptive and how this can be used to adapt CLIP to downstream tasks. We show considerable improvements in 0-shot transfer accuracy on specialized fine-grained datasets like EuroSAT (\(\sim 7\%\)), DTD (\(\sim 7\%\)), SUN397 (\(\sim 4.6\%\)), and CUB (\(\sim 3.3\%\)) when compared to CLIP's default prompt. We also design a simple few-shot adapter that learns to choose the best possible sentences to construct generalizable classifiers that outperform the recently proposed CoCoOP by \(\sim 2\%\) on average and by over \(4\%\) on 4 specialized fine-grained datasets. The code, prompts, and auxiliary text dataset is available at github.com/mayug/VDT-Adapter. ## 1 Introduction Contrastive pre-training of large-scale VLMs has demonstrated remarkable image classification performance on open-set classes. Models like CLIP [25] and ALIGN [13] are pretrained on web-scale datasets consisting of image-text pairs (over 400 million and 1.8 billion respectively), resulting in a highly generalizable model with competent 0-shot domain adaptation capabilities. While vanilla supervised training is performed on a closed set of concepts or classes, CLIP pretraining uses natural language. This results in a joint text-vision embedding space that is not constrained to a fixed set of classes. In CLIP, the classifier is constructed by plugging the class name into a predetermined prompt template like 'a photo of {class name}'. A straightforward way to adapt CLIP to different domains is by prompt engineering, which usually involves modifying the prompt template to include semantic information about the target task. For example, to classify bird images, one could construct a prompt 'a photo of {classname}, a type of bird'. This prompt engineering process, however, is not optimal because it: 1.) requires domain expertise in the target domain; 2.) has high variance - small changes to the prompt result in large variation in performance; 3.) has a fixed prompt template for all the classes, therefore only the class name in the prompt provides the classification anchor, which might not contain enough information to distinguish different classes. For example, in Fig 1 we see an image of a Green Heron, which from the name would suggest that it is predominantly a green-colored bird and we would assume that it is similar to Green Woodpecker if we have never seen either bird. However, we can see that it is in fact a blackish-brown bird with a chestnut-colored neck and visually more similar to a bird like the Black Bittern. 
For 0-shot transfer to fine-grained datasets like this to work well, CLIP has to either have seen and associated images of a Green Heron to the text 'Green Heron' from its large pretraining dataset or additional information in the form of _visually descriptive textual_ (VDT) information is required. Here we define VDT as a set of sentences that describe the visual features of the class under consideration including shape, size, color, environment, patterns, composition, etc. While most humans can identify many different common bird species just from their names, they would need access to an ornithology taxonomy of bird descriptions to identify more rare bird species. Similarly, we argue that CLIP's 0-shot accuracy can be improved by incorporating VDT information into the prompts. As shown, in Fig 1, including VDT information like _black crown_ and _black rump_ moves the classification prototype of Green Heron away from the classification prototype of Green Woodpecker and towards that of Black Bittern in the text-encoder's embedding space. In this work, we first show that we can use VDT information for each class in the target domain to construct class conditional prompts that achieve performance improvements over CLIP's default prompt. We show this on the CUB dataset [1] by constructing sentences from domain experts about the bird species in Section 3.2.1 as they are readily available as part of the dataset. However, we acknowledge that domain expert annotations are costly and time-consuming to obtain, hampering the scalability of our method to other datasets. To address this, we focus on the recent advances in _generative pre-trained Large Language Models (LLMs)_ like GPT-4 to construct these class conditional prompts in a manner easily scalable to other datasets. These models are a good fit for the task of constructing sophisticated prompts, because: 1) they are a condensed form of human knowledge (trained on web-scale text data) [32]; 2) they can be manipulated to produce information in any form or structure which makes compatibility with CLIP's prompt style relatively simple. Therefore we use GPT-4 to construct visually descriptive textual information about the classes with special emphasis in the GPT-4 prompts about visual cues like shape, color, structure, and compositionality. We use the generated VDT information to construct prompt ensembles that are passed through CLIP's text encoder and aggregated to generate classifiers that are then used for 0-shot classification. Using GPT-4 circumvents the need for domain knowledge and conveniently provides class conditional prompts. Prompt ensembling the VDT sentences reduce CLIP's performance sensitivity to small changes in the prompt. We show performance improvements over vanilla CLIP with the default prompt on 12 datasets with an average improvement of 2% and even better improvements in fine-grained datasets like EuroSAT (\(\sim 7\%\)), DTD (\(\sim 7\%\)), SUN397 (\(\sim 4.6\%\)), and CUB (\(\sim 3.3\%\)). The prompts and all the auxiliary class information will be made publicly available to promote research in prompt ensembling and multi-modal adapter design. Finally, we design a simple adapter that learns to adaptively select and aggregate the best sentences for any given dataset and show that making use of this additional VDT information improves the few-shot domain transfer performance of CLIP as well. 
We demonstrate the few-shot adaptation performance for the recently proposed Base-to-New setting on a benchmark of 12 datasets and outperform recent methods like CoOp [35] and CoCoOp [34] despite having fewer model parameters, shorter training time, and a simpler model architecture. In short, our contributions are as follows: 1. We show that including visually descriptive textual (VDT) information in prompts results in better 0-shot domain transfer performance of CLIP. 2. We use GPT-4 to generate VDT sentences in a scalable manner and show consistent performance improvements over CLIP in 0-shot domain transfer. 3. We design a simple adapter network to make use of this extra information for few-shot transfer and show performance improvements over methods like CLIP Figure 1: An example showing three birds, Green Heron, Green Woodpecker, and Black Bittern. Green Heron and Green Woodpecker have close-by classification prototypes by virtue of not having enough details in the prompt template. Only the text-encoder’s embedding space is visualized. Here we see that adding visual descriptions to the prompt resolves this issue and moves the classification prototypes in the word-encoder’s space such that classification prototypes for visually similar birds (Green Woodpecker and Black Bittern) lie together. Adapter and CoCoOp [34] for few-shot domain transfer in the Base-to-New setting. 4. We release all the VDT information for all 12 datasets to promote further research in multi-modal prompt and adapter design for low-shot domain transfer of large VLMs. ## 2 Related Works ### Vision Language Models Recent VLMs [13, 25, 9] jointly learn the vision and language encoders from scratch and have demonstrated impressive 0-shot domain transfer performance. As mentioned in [35], this can be attributed to transformer networks [28], contrastive losses [4, 11], and web-scale training datasets [25, 14]. While our GPT-generated prompt ensembles are similar to CLIP's prompt ensembles, CLIP's prompt ensembles were constructed and tuned manually, and are class agnostic, while ours were generated by GPT models that were prompted to provide VDT information for each class. ### Prompt Learning CoOp [35] successfully used prompt learning in VLMs but had generalizability limitations due to overfitting on the few-shot dataset [34]. In response, CoCoOp was proposed, enhancing performance with image-conditioned prompt learning using a meta-network, albeit at a higher resource cost. We address generalizability differently by using class conditional VDT information. Our simpler and more efficient model, CLIP-A-self, outperforms CoCoOp in the Base-to-New few-shot setting. ### Few-shot adapters for Vision Language models CLIP-Adapter [10] (CLIP-A) offers a simpler few-shot transfer method for VLMs, utilizing an MLP trained on fixed image/text encoders. Our CLIP-A-self is different from CLIP-A in that we apply a self-attention mechanism on the set of all sentences for any class, learning to select and aggregate the best subset of VDT information for the dataset from the few-shot training set. Although Tip-adapter [33] showed superior performance on base classes with a cache model, it's inapplicable in the Base-to-New setting due to its reliance on few-shot test class examples, making it irrelevant for our comparison. 
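For readers unfamiliar with the residual-style adapters referenced in this subsection, the following is a minimal sketch of the blending idea behind CLIP-Adapter (made precise in Eqs. (2)-(3) below): a small MLP refines frozen CLIP features and is mixed back with the originals by a residual ratio. The bottleneck width, activation choice, and default ratio here are illustrative assumptions rather than the exact published configuration.

```python
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    """Illustrative residual adapter: adapted = alpha * MLP(feats) + (1 - alpha) * feats."""

    def __init__(self, dim: int = 512, reduction: int = 4, alpha: float = 0.2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim // reduction), nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim), nn.ReLU(inplace=True),
        )
        self.alpha = alpha

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: frozen CLIP image or text features of shape (N, dim)
        return self.alpha * self.mlp(feats) + (1 - self.alpha) * feats

adapted = ResidualAdapter()(torch.randn(8, 512))  # blend of adapted and original features
```

CLIP-A-self, introduced later, keeps this residual blending but replaces the per-feature MLP with self-attention over the per-class VDT sentences.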
### Semantic information from Large Language Models Recent advancements in transformer-based language models, particularly the GPT family [3, 22], have demonstrated exceptional abilities in semantic extraction from intricate texts. Their application to vision tasks has emerged as an active area of research. [20] employs the PaLM 540B LLM [5] to generate semantic data for unsupervised class embedding vectors in 0-shot classification, but only tests on three legacy datasets. Our research presents results on a modern benchmark of 12 datasets. Recently, [24, 19] leverage GPT-3 for class conditional prompts to enhance CLIP's 0-shot domain transfer on 6 datasets. While [19] focuses on using GPT-3 to construct visual descriptors that aid in the interpretability of CLIP's predictions during 0-shot domain transfer, we argue that 0-shot domain transfer performance improves with the inclusion of high-quality VDT information. Hence, we make use of GPT-4 for richer, more diverse, and more accurate VDT information. While [19] utilizes GPT-3, a probability-space ensemble, and highlights VDT's role in 0-shot transfer, our method differs. We use GPT-4 for auxiliary data collection, perform the ensemble in the word-encoder space, and introduce a few-shot adapter for optimal VDT selection in few-shot transfer. [27] uses GPT-3 for prompt construction in diffusion models to generate images for support sets, while our work only uses GPT-4 to acquire auxiliary text data. To our knowledge, we are the first to prompt GPT-4 for visually descriptive sentences to improve CLIP's 0-shot and few-shot domain transfer. ## 3 Methodology ### Review of CLIP and CLIP-Adapter Through contrastive pretraining on large image-text datasets, CLIP performs image classification on various concepts, aligning related images and texts in a shared embedding space, while separating dissimilar ones. After pretraining, CLIP directly performs image classification on the target dataset without any finetuning. First, we review how the CLIP model performs 0-shot classification on an open set. The CLIP model, comprising a vision and language model, encodes an image and its corresponding caption into visual and textual embeddings, respectively. During inference, these embeddings are compared using _cosine similarity_. Given an image \(I\in\mathbb{R}^{H\times W\times C}\), where \(H\), \(W\), \(C\) denote the height, width, and number of channels of the image, the vision encoder transforms the image into the joint embedding space to get the image features \(f\in\mathbb{R}^{D}\), where \(D\) represents the dimension of the features. During inference, a prompt template such as 'A photo of {classname}' is used to generate sentences for \(K\) different classes and passed through the text-encoder to yield the classifier weight matrix \(W\in\mathbb{R}^{D\times K}\). Prediction probabilities are then calculated by multiplying the image feature \(f\) with \(W\) and applying a softmax function: \[f=\text{Backbone}(\mathbf{I}),\;\;p_{i}=\frac{\exp(\mathbf{W}_{i}^{T}f/\tau)}{\sum_{j=1}^{K}\exp(\mathbf{W}_{j}^{T}f/\tau)}, \tag{1}\] where \(\tau\) is the temperature learned during CLIP pretraining. In CLIP [25], 0-shot domain transfer utilizes domain-specific information in the prompt template, such as 'A photo of a {class-name}, a type of bird' for bird images. [25] reports that careful prompt design and prompt ensembling are important to improve 0-shot classification accuracy. Prompt ensembling is achieved by constructing several prompts for each class and then averaging the classification vectors. 
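To make the prompt-ensembling step around Eq. (1) concrete, here is a minimal sketch written against the public `clip` package: several class-conditional prompts are encoded, L2-normalized, and averaged into one classifier vector per class, and an image is scored against the resulting matrix. The prompt format, the fixed logit scale of 100, and the helper names are illustrative assumptions, not the authors' released code.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

def build_ensembled_classifier(classnames, descriptions_per_class):
    """Average the text embeddings of several prompts per class (prompt ensembling);
    descriptions_per_class[name] is a list of VDT sentences for that class."""
    weights = []
    with torch.no_grad():
        for name in classnames:
            prompts = [f"A photo of {name}. {d}" for d in descriptions_per_class[name]]
            tokens = clip.tokenize(prompts, truncate=True).to(device)  # respects 77-token limit
            emb = model.encode_text(tokens)
            emb = emb / emb.norm(dim=-1, keepdim=True)
            weights.append(emb.mean(dim=0))
    W = torch.stack(weights, dim=1)                 # D x K classifier matrix
    return W / W.norm(dim=0, keepdim=True)

def zero_shot_predict(image, W):
    # image: a PIL.Image; W: D x K ensembled classifier
    with torch.no_grad():
        f = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        f = f / f.norm(dim=-1, keepdim=True)
        return (100.0 * f @ W).softmax(dim=-1)      # probabilities over the K classes
```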
In our work, we show that prompt ensembles of VDT information improve CLIP's 0-shot domain transfer. CLIP-A [10] is a learnable MLP adapter applied to image and/or word encoder features for few-shot transfer to target datasets. During few-shot transfer, given \(N\) images per class with labels, denoted as \(\left(x_{i,k},y_{i,k}\right)_{i=1,k=1}^{i=N,j=K}\), \(K\) classifier weights are constructed using the prompt template \(H\) and text encoder \(g\) as \(W=g(H(classname(\{y_{i,k}\})))\). The image features \(f\) and text features \(W\) pass through the learnable adapters \(A_{v}\), \(A_{t}\) to get adapted features as follows. \[f^{\star} =\alpha A_{v}(f)^{T}+(1-\alpha)f, \tag{2}\] \[\mathbf{W}^{\star} =\beta A_{t}(\mathbf{W})^{T}+(1-\beta)\mathbf{W}. \tag{3}\] The hyperparameters \(\alpha\) and \(\beta\) blend CLIP's knowledge with fine-tuned knowledge to avoid CLIP-Adapter overfitting. Logits are calculated as per Eqn 1, and cross entropy loss over the entire training set \(\left(x_{i,k},y_{i,k}\right)_{i=1,k=1}^{i=N,j=K}\) is used to optimize \(A_{v}\), \(A_{t}\). In the _All_ setting, few-shot transfer is tested on a hold-out dataset with images from the \(K\) classes used in training. In the Base-to-New setting, proposed by [34], the evaluation occurs on \(U\) non-overlapping classes. Our model is evaluated in the more practical Base-to-New setting. ### Language Model Prompt Design In this section, we show that using VDT information in the prompt template improves CLIP's 0-shot transfer capabilities and describe our approach to generate class-specific prompts using an LLM. #### 3.2.1 Visual Descriptive Sentences [25] demonstrates that careful prompt design and prompt ensembling improve the 0-shot classification performance of CLIP. Here we ask the question: What type of information can be appended to the prompt template to improve the 0-shot domain transfer performance? We show that appending visually descriptive information to the prompt template and ensembling improves the 0-shot performance over the default prompt and prompts containing non-visual information. Using the CUB dataset with expert annotations, we contrast the 0-shot performance of visual and non-visual prompt ensembles. For the visual prompts, we take class attribute vectors detailing attributes like color, pattern, shape, etc. for 28 bird body parts, leading to 312 scores per bird. We use the most pronounced attribute-value pairs to form 28 visual prompts (denoted _Visual-GT_) such as 'A photo of Green Heron. Green Heron has a greenish-black head cap.' Conversely, for non-visual prompts (denoted _Non-Visual-GT_), we collect information on bird calls, migration, behavior, and habitat, yielding 12 different prompts like 'A photo of Green Heron. The green heron's bird call is a loud, harsh'skewow' per class. We derive classification vectors for _Visual-GT_ and _Non-Visual-GT_ by averaging class-level sentence embeddings within CLIP's joint embedding space, considering its 77-token limit. Table 1 shows no improvement using _Non-Visual-GT_ prompts over the default, yet a 4\(\%\) improvement with _Visual-GT_. #### 3.2.2 Prompting LLMs for visually descriptive information In the prior section, we highlighted the use of expert VDT information in creating class-specific prompts to enhance CLIP's 0-shot performance. However, acquiring expert annotations is both expensive and time-consuming. To overcome this, we utilize GPT language models, known for their large-scale knowledge and flexibility [32]. 
Our approach involves using GPT-4 to generate visual descriptions for any given dataset thereby aiding in the construction of prompt ensembles for CLIP in a scalable manner. Our prompting strategy takes inspiration from chain-of \begin{table} \begin{tabular}{c c c c c|c} \hline \hline \multirow{2}{*}{Prompting} & \multirow{2}{*}{Default} & \multicolumn{2}{c}{Non-Visual-GT} & \multicolumn{1}{c}{Visual-GT} & \multicolumn{1}{c}{Visual-GPT} \\ \hline Accuracy & 54.7 & 53.0 & 57.7 & 57.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparing visual and non-visual prompt ensembles for 0-shot domain transfer to the CUB dataset. \begin{table} \begin{tabular}{c c c c c c|c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{EuroSAT} & \multirow{2}{*}{Food101} & \multirow{2}{*}{DTD} & \multicolumn{2}{c}{Oxford} & \multirow{2}{*}{CUB} & \multirow{2}{*}{ImageNet} & \multirow{2}{*}{Average} \\ & & & & & & & \\ \hline CLIP & 47.69 & 85.97 & 43.09 & 89.07 & 54.70 & 64.51 & 64.17 \\ DCLIP[19] & 48.82 & **88.50** & 45.59 & 86.92 & **57.75** & 68.03 & 65.93 \\ CLIP-GPT & **54.86** & 86.43 & **50.15** & **91.54** & 57.43 & **68.92** & **68.21** \\ \hline \hline \end{tabular} \end{table} Table 2: Results of including LLM generated VDT on 6 datasets for comparison with other works. We see that higher quality VDT from GPT-4 outperforms GPT-3 generated VDT on specialized datasets like DTD OxfordPets and EuroSAT. thought prompting [29] and is as follows: First, we ask GPT-4 to list all the attributes that may be necessary to discriminate between images of the \(K\) classes under consideration. Second, we ask GPT-4 to provide the values for all these attributes for all the \(K\) classes as sentences. An example for the CUB dataset is shown in the left side of Fig 1. The last row in Table 1 shows that the GPT-4 generated visual sentences' performance is similar to that of sentences generated from the class attribute vectors annotated by domain experts. We follow the same simple strategy for all the datasets in the benchmark suite to generate visually descriptive sentences in a scalable and flexible manner and use them to construct prompt ensembles. ### Simple few-shot adapters for visual sentences We design a simple adapter that can use VDT information to improve the few-shot transfer of CLIP to the target datasets. Similar to the CLIP-A text, we append a small set \begin{table} \begin{tabular}{l l l|l} \hline \hline & & Base & New & H \\ \hline CLIP & 68.45 & 73.89 & 71.05 \\ CoOp & **82.39** & 62.39 & 70.99 \\ CoCoOp & 79.35 & 71.89 & 75.37 \\ CLIP-A & 78.90 & 72.14 & 75.07 \\ \hline CLIP-A-self & 82.12 & 74.20 & **77.78** \\ \hline \hline \end{tabular} \end{table} Table 4: Comparing our CLIP-A-self against other methods on average accuracy over 12 datasets. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline Methods & EuroSAT & Caltech101 & Oxford & Food101 & FGVC & DTD & Oxford & Stanford & Sun397 & UCF101 & CUB & ImageNet & Average \\ \hline CLIP & 47.69 & 93.75 & 70.69 & 85.97 & **24.81** & 43.09 & 89.07 & **65.55** & 62.61 & **67.54** & 54.70 & 64.51 & 64.16 \\ CLIP-GPT & **54.86** & **94.51** & **73.40** & **86.43** & 23.42 & **50.15** & **91.54** & 65.01 & **67.24** & 65.51 & **57.43** & **68.9** & **66.53** \\ \hline \hline \end{tabular} \end{table} Table 3: Results of 12 datasets with ViT-B/16. 
Figure 2: CLIP-A-self, our simple self-attention based adapter learns to select and aggregate the most relevant subset of Visually Descriptive Text (VDT) to generate more generalizable classifiers. First, we prompt GPT-4 to generate VDT, N sentences for K classes that are then passed through the text encoder to get embeddings for each of the N*K sentences. Self-attention is applied over the N sentences of each class and averaged to get K adapted classifier embeddings. of learnable parameters to the output of the word encoder and train the adapter using cross-entropy loss. Our CLIP-A-self uses a self-attention layer that applies attention over the embeddings of the different sentences for each class and averages the output to get the final classification vector. Given we have \(M\) GPT generated sentences for each of the \(K\) classes \(t_{m,k}\), we construct \(M\) prompts by appending each sentence to the prompt template like \(H(classname(y_{i,k}),\{t_{m,k}\})\) and pass them through CLIP's word encoder to get \(W^{sent}\in\mathbb{R}^{D\times M\times K}\). For the self-attention adapter, we apply vanilla self-attention [28] over all the visual descriptive sentences such that during training it learns to select and aggregate the most relevant visual sentences for identifying each class. Just like before, we first obtain the classification vector for all sentences \(W^{s}\in\mathbb{R}^{K\times M\times D}\) and pass them as the key, query, and value to the self-attention module \(B_{self}\) and average out the output tokens to get the final classification vector \(W^{\star}\). Here the attention is applied over the \(M\) different visually descriptive sentences. \[W_{avg}=1/M\sum_{m=1}^{M}W^{s}_{m,k} \tag{4}\] \[\big{\{}W^{a}_{m,k}\big{\}}_{1}^{M}=B_{self}\big{(}\big{\{}W^{s}_ {m,k}\big{\}}_{1}^{M},\big{\{}W^{s}_{m,k}\big{\}}_{1}^{M},\big{\{}W^{s}_{m,k} \big{\}}_{1}^{M}\big{)}\] (5) \[W_{a-mean}=1/M\sum_{m=1}^{M}W^{a}_{m,k}\] (6) \[W^{\star}=\beta\mathbf{W_{a-mean}}^{T}+(1-\beta)\mathbf{W_{avg}} \tag{7}\] We finally obtain the new adapter classifier weights \(W^{\star}\in\mathbb{R}^{D\times K}\) that have been adapted to focus on the most visually discriminative information among the \(M\) visually descriptive sentences for any given dataset. We make use of 1 to calculate the probabilities and predict the image category by selecting the class with the highest probability. During the few-shot training only the weights of the adapter network \(B_{self}\) are trained using cross-entropy loss. ## 4 Experiments We assess the significance of visual sentence ensembles in two scenarios: (i) we gauge visual sentence quality by comparing an ensemble of these prompts with CLIP's default prompts across 12 benchmark datasets; (ii) we contrast the performance of adapters using these visual prompts against other few-shot transfer techniques in Base-to-New class generalization within a dataset. Prior to discussing the results, we detail the datasets and experimental setup. ### Datasets We use 11 diverse image recognition datasets from [35] and the bird species CUB dataset [1] for both study settings, extending our suite to 12. These include generic object datasets ImageNet [7] and Caltech101 [8]; fine-grained classification datasets OxfordPets [23], StanfordCars [16], Flowers102 [21], Food101 [2] and FGVCAircraft [17]; SUN397 [31] for scene recognition; UCF101 [26] for action recognition; DTD [6] for texture classification; EuroSAT [12] for satellite imagery; and CUB for bird identification. 
For 0-shot transfer with visual sentences, we test on _All_ classes across these datasets while for the Base-to-New setting, following [34], we equally sample classes for base and new sets without overlap. We use the 150-base and 50-new class split from ZSL and few-shot literature [30, 18] for CUB. Like [34], our CLIP-A-self is evaluated on the 16-shot setting for easier comparison with other methods. ### Baselines We compare the performance of visual sentences ensemble on 0-shot transfer against the CLIP model [25] whose default prompts for each dataset have been extensively fine-tuned using a test set. We also compare against DCLIP [19] a recent work that uses GPT-3 to generate VDT information for 0-shot transfer. We compare our CLIP-A-self against two prompt learning methods CoOp [35] which learns static prompts and CoCoOp [34] which learns a dynamic prompt that is specifically designed to improve Base-to-New transfer. We also compare our CLIP-A-self against CLIP-A [10] due to the similarity in architecture and to show that the performance improvements are from making use of the visual sentences and not from the just adapting the text features. ### Training settings Our implementation is based on CoOp's and CLIP-A's code. 1 We make all our comparisons on VIT CLIP backbone i.e., VIT-B/16. We take the results for CoOp and CoCoOp for all datasets (except CUB) from their respective papers, while we make use of practices from the respective papers like context length set to 4 and context initialization to "a photo of" to ensure the best results on the CUB dataset. For CLIP-A, we re-run all experiments on VIT-B/16 backbone as they were not reported in the paper. For all adapter models including ours, we only tune the residual ratio \(\beta\) hyper-parameter. For CLIP-A, we use the version where the MLP is applied on top of the visual encoder as it performed the best [10]. We make use of May version of GPT-4 for obtaining the auxiliary dataset. Footnote 1: [https://github.com/KaiyangZhou/CoOp](https://github.com/KaiyangZhou/CoOp), [https://github.com/gaopengcuhk/CLIP-Adapter](https://github.com/gaopengcuhk/CLIP-Adapter) ### GPT generated visual sentences improve 0-shot transfer. We compare the performance of CLIP-GPT prompt ensemble with the default prompts of CLIP in Table 3. GPT generated prompt ensemble improves upon the performance of CLIP 0-shot by \(~{}2\%\) on average over 12 datasets. The improvement over CLIP-ZS is significant; over \(5\%\) for specialized fine-grained datasets like CUB, SUN397, EuroSAT, and DTD and over \(2\%\) for oxford-flowers and oxford-pets. This shows that CLIP does not recognize several of the classnames in these datasets and describing the class in the form of visually descriptive sentences results in better classifiers from the text-encoder and better classification accuracy. It is also worth noting that only including the visually descriptive sentences in the prompts can help improve the performance of general datasets like Imagenet (over 4\(\%\)) and Caltech-101 (over 1\(\%\)) too. For all other datasets, the transfer performance matches that of CLIP, with the exception being the action recognition dataset UCF-101. We inspected the sentences generated for UCF-101 and notice that several of the sentences generated by GPT involves temporal information instead of visual descriptions and we believe this could be the reason for the drop in accuracy. 
However, we notice in Section 4.5.1 that the self-attention module of the few-shot adapter learns to emphasize the visual sentences out of the generated sentences, which might explain the improvement in the performance of few-shot adapters in the new setting in Section 4.5. We also compare against recent work [19] on their subset of 6 datasets for the ViT-B/16 encoder in Table 2. We see that using the larger \begin{table} \end{table} Table 5: **Comparison of GPT-Adapters with CLIP, CoOp and CoCoOp in the Base-to-New generalization setting. For prompt learning-based methods (CoOp and CoCoOp), their prompts are learned from the base classes (16 shots). The results strongly justify the importance of including extra visual information. H denotes Harmonic mean (to highlight the generalization trade-off [30]).** GPT-4 model over the GPT-3 model results in much higher improvements for specialized datasets like DTD (\(\sim 5\%\)) and EuroSAT (\(\sim 6\%\)). We compare the text used by [19] against our GPT-4 generated VDT in the supplementary. ### GPT-Adapters improve few-shot transfer performance. We compare the performance of our CLIP-A-self against CLIP, CoOp, and CoCoOp on the benchmark suite of 12 datasets in the Base-to-New setting in Table 5. Here we see that GPT-Adapters that make use of the VDT information outperform CoCoOp by \(3\%\) in the new setting while maintaining similar performance to that of CoOp in the base setting on the average accuracy over 12 datasets. This is impressive considering that CoCoOp makes use of a meta-network and a forward pass through the text encoder, making it computationally intensive to train. CoCoOp takes up to 5 hours to train on 16-shot ImageNet for the ViT-B/16 encoder; in comparison, our CLIP-A-self takes only 10 minutes (on an RTX 3090 GPU). The Base-to-New generalization ability of our adapters is even more impressive for fine-grained, specialized datasets, as evidenced by the gains over CoCoOp in the harmonic mean of base and new accuracy. For example, CLIP-A-self demonstrates gains in datasets like FGVC-Aircraft (7.5\(\%\)), EuroSAT (7.4\(\%\)), DTD (5.8\(\%\)), CUB (4.3\(\%\)), Flowers102 (4\(\%\)), Stanford Cars (2.4\(\%\)) and UCF-101 (2.4\(\%\)). This demonstrates that our adapters make use of semantic information in the form of visually descriptive sentences and fuse this with CLIP's 0-shot knowledge to build more generalizable classifiers that transfer well to unseen classes within the same dataset. It is also worth noting that even though the same set of VDT did not provide any improvements in 0-shot domain transfer for datasets like FGVC-Aircraft, Stanford-Cars, and UCF-101, our self-attention adapter was able to choose the most informative subset of VDT and produce few-shot classifiers that provide substantial few-shot transfer performance gains in comparison to CoCoOp. We show in Section 4.5.1 the sentences picked by the attention mechanism for these datasets to qualitatively verify this. #### 4.5.1 Attention Weights Analysis We note that even though CLIP-GPT ensembles were outperformed by CLIP's default prompt on the FGVC Aircraft, UCF-101, and Stanford Cars datasets, we see that CLIP-A-self outperforms CLIP-A and CoCoOp [34] on these datasets in the few-shot transfer setting. We believe that this is because, during few-shot training, the self-attention mechanism learns to select the most relevant visual sentences out of the set of visually descriptive text and helps produce generalizable classifiers. 
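A minimal sketch of this selection-and-aggregation step (Eqs. (4)-(7)) is given below: self-attention is applied over the \(M\) sentence embeddings of each class, mean-pooled, and blended with the plain average by the residual ratio \(\beta\). Using a single-head `nn.MultiheadAttention` as the vanilla self-attention block, and the tensor shapes shown, are assumptions made for illustration rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SelfAttentionAggregator(nn.Module):
    """Sketch of the CLIP-A-self aggregation: attention over the M VDT sentence
    embeddings of each class, mean-pooled and blended with the plain mean."""

    def __init__(self, dim: int, beta: float = 0.2, num_heads: int = 1):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.beta = beta

    def forward(self, W_sent: torch.Tensor) -> torch.Tensor:
        # W_sent: (K, M, D) per-sentence text embeddings for K classes
        W_avg = W_sent.mean(dim=1)                       # (K, D), plain ensemble average
        W_att, _ = self.attn(W_sent, W_sent, W_sent)     # attention over the M sentences
        W_att_mean = W_att.mean(dim=1)                   # (K, D)
        return self.beta * W_att_mean + (1 - self.beta) * W_avg  # blended classifier vectors

agg = SelfAttentionAggregator(dim=512)
W_star = agg(torch.randn(12, 30, 512))   # e.g. 12 classes, 30 VDT sentences each
print(W_star.shape)                      # torch.Size([12, 512])
```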
In Table 1 in supplementary, we show the top 3 and bottom 3 attributes picked by attention scores for each of these datasets and show that the sentences with the highest attention scores correspond to visually descriptive attributes in the set and vice versa for the lowest scored attributes. For example, for both Stanford Cars and FGVC it is interesting to see that the color scheme is one of the least used attributes as it's difficult to identify a car or a plane from its color or livery. For UCF-101, information like the force involved or temporal information like speed and range of motion of the action is unlikely to be encoded in the image and hence is not selected by the attention mechanism. Information regarding the subject and the object of the action, like the posture of the person, description of the object, and interaction between objects are visible in the images and hence weighted highly by the attention mechanism. ### Ablation over different GPT models In this section, we see if other GPT models like GPT-3.5 and open-source model, OpenAssistant [15], are as capable as GPT-4 in generating visually descriptive information. We explore this on the CUB dataset as it is fine-grained and specialized. The results are presented in Table 6. We find that the performance improves with larger models which are more capable of memorizing accurate class information with less hallucination [32]. Even though we obtain decent performance with the open-source model OpenAssistant, the outputs were always inconsistent and noisy, resulting in a lot of clean-up effort in comparison to GPT-3.5 and GPT-4 where the outputs were in the form of concise sentences following a dictionary format. It is worth noting that our few-shot adapter is capable of picking out the the best VDT information even from a noisy set, pushing the Base-to-New generalization performance of OpenAssistant, and GPT-3.5 close to that of GPT-4. ## 5 Conclusion In this work, we show that using visually descriptive textual (VDT) information can improve the 0-shot domain transfer performance of CLIP over non-visual information and the default prompts. We demonstrate GPT-4 to be an accurate and flexible source of VDT information by improving the 0-shot domain transfer performances on a suite of \begin{table} \begin{tabular}{l|c c c c} \hline \hline Prompting & ZS & Base & New & H \\ \hline Default & 54.7 & NA & NA & NA \\ OpenAssistant & 56.0 & 78.3 & 69.8 & 73.80 \\ GPT-3.5 & 55.7 & 78.1 & 70.6 & 74.16 \\ GPT-4 & 57.4 & 78.6 & 71.3 & 74.77 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparing different GPT models for obtaining the VDT information. We see that the larger models provide higher quality VDT information but CLIP-A-self is capable of producing generalizable classifiers even with smaller models like OpenAssistant. 12 benchmark datasets. Our few-shot adapter CLIP-A-self learns to pick the best VDT information from the GPT generated set and improve the few-shot domain transfer in the Base-to-New setting even when the quality of the generated text deteriorates. We release all prompts and VDT information for all 12 datasets to promote further research in the fertile research direction of using LLMs for learning multi-modal adapters for foundation models.
2303.12847
Scalar decay into pions via Higgs portal
In extensions of the Standard Model (SM) of particle physics a light scalar from a hidden sector can interact with known particles via mixing with the SM Higgs boson. If the scalar mass is of GeV scale, this coupling induces the scalar decay into light hadrons, that saturates the scalar width. Searches for the light scalars are performed in many ongoing experiments and planned for the next generation projects. Applying dispersion relations changes the leading order estimate of the scalar decay rate into pions by a factor of about a hundred indicating the strong final state interaction. This subtlety for about thirty years prevented any reliable inference of the model parameters from experimental data. In this Letter we use the gravitational form factor for neutral pion extracted from analysis of $\gamma^*\gamma\to\pi^0\pi^0$ processes to estimate the quark contribution to scalar decay into two pions. We find a factor of two uncertainty in this estimate and argue that the possible gluon contribution is of the same order. The decay rate to pions smoothly matches that to gluons dominating for heavier scalars. With this finding we refine sensitivities of future projects to the scalar-Higgs mixing. The accuracy in the calculations can be further improved by performing similar analysis of $\gamma^*\gamma\to K K$ and $\gamma^*\gamma\to\eta\eta$ processes and possibly decays like $J/\psi\to\gamma+\pi\pi$.
Dmitry Gorbunov, Ekaterina Kriukova, Oleg Teryaev
2023-03-22T18:12:22Z
http://arxiv.org/abs/2303.12847v2
# Scalar decay into pions via Higgs portal ###### Abstract In extensions of the Standard Model (SM) of particle physics a light scalar from a hidden sector can interact with known particles via mixing with the SM Higgs boson. If the scalar mass is of GeV scale, this coupling induces the scalar decay into light hadrons, that saturates the scalar width. Searches for the light scalars are performed in many ongoing experiments and planned for the next generation projects. Applying dispersion relations changes the leading order estimate of the scalar decay rate into pions by a factor of about a hundred indicating the strong final state interaction. This subtlety for about thirty years prevented any reliable inference of the model parameters from experimental data. In this letter we use the gravitational form factor for neutral pion extracted from analysis of \(\gamma^{*}\gamma\to\pi^{0}\pi^{0}\) processes to estimate the quark contribution to scalar decay into two pions. We find a factor of two uncertainty in this estimate and argue that the possible gluon contribution is of the same order. The decay rate to pions smoothly matches that to gluons dominating for heavier scalars. With this finding we refine sensitivities of future projects to the scalar-Higgs mixing. The accuracy in the calculations can be further improved by performing similar analysis of \(\gamma^{*}\gamma\to KK\) and \(\gamma^{*}\gamma\to\eta\eta\) processes and possibly decays like \(J/\psi\to\gamma+\pi\pi\). + Footnote †: preprint: INR-TH-2023-003 _1._ New physics required to address neutrino oscillations, baryon asymmetry of the Universe, dark matter and other phenomena unexplained within the Standard Model of particle physics, can be confined in a hidden sector, so that the new particles are sterile with respect to the SM gauge interactions. They still can couple to the SM particles not only via gravity. There can be interactions via contact terms constructed by specific field products, invariant under the SM and hidden gauge groups. One of the intriguing examples follows from the so-called scalar Higgs-field portal [1], which combines the SM Higgs field \(H\) and a scalar \(S\), singlet with respect to the SM gauge group, into the interaction \[\mathcal{L}=\mu SH^{\dagger}H+\lambda S^{2}H^{\dagger}H\,. \tag{1}\] When the SM Higgs field gets non-zero vacuum expectation value \(v=246\,\mathrm{GeV}\), the first term in eq. (1) yields mixing between the scalar and the SM Higgs boson \(h\). Note, that if \(S\) is charged under the hidden sector gauge group, this term is absent. However, the mixing can still arise, if the hidden sector gauge group is spontaneously broken, similar to the electroweak gauge group of the SM. In this case the mixing between the Higgs boson and its analog in the hidden sector comes from the second term in (1). Without loss of generality in both cases the induced interaction between the SM Higgs boson \(h\) and the hidden scalar \(S\) can be described as the mixing mass term in the scalar sector \[\mathcal{L}_{s}=\frac{1}{2}m_{h}^{2}h^{2}+\mu vSh+\frac{1}{2}M_{S}^{2}S^{2}\,. \tag{2}\] Thus, if kinematically allowed, the hidden scalar can be produced in scatterings and decays of SM particles and can decay into the SM particles through the virtual Higgs boson provided \(\mu\neq 0\). Hereafter we are interested in the models, where the hidden scalar is lighter than the Higgs boson. 
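For completeness, the relation between the mixing angle and the Lagrangian parameters follows from a standard two-by-two diagonalization of the quadratic form in eq. (2); this is a textbook step spelled out here only for orientation, not an additional result of the paper:
\[
\tan 2\xi=\frac{2\mu v}{m_{h}^{2}-M_{S}^{2}}\,,\qquad\xi\simeq\frac{\mu v}{m_{h}^{2}-M_{S}^{2}}\simeq\frac{\mu v}{m_{h}^{2}}\quad\text{for}\quad\mu v\ll m_{h}^{2},\;M_{S}\ll m_{h}\,,
\]
so the experimental requirement \(\xi\ll 1\) translates directly into a small portal coupling \(\mu\).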
This case may naturally be favored [2; 3], because the heavy scalars coupled to the SM Higgs field would induce large quantum corrections to its mass. Moreover, we concentrate on the situation, where the scalar is at a GeV mass scale and so it can be produced in particle collisions, including accelerator experiments. While this choice may look ad hoc, there are particular extensions of the SM which actually predict new scalars in this mass range, see e.g. [4; 5; 6]. Searches for such light scalars have been performed in beam-dump experiments, accelerator experiments on neutrino oscillations, collider experiments, precision measurements, hunting for rare processes, etc, see Ref. [7] for the most recent summary. So far, the negative results imposed constraints on the scalar production and decay rates, which could not be reliably transferred to limits on the model parameters because of a factor of hundred uncertainties in estimates of the scalar decay rates into light hadrons [4; 8; 9]. In this _Letter_ we perform the calculation of the scalar decay rate into a couple of pions and argue that its uncertainty is only about a factor of two. To begin with, we note that the Lagrangian (2) can be diagonalized. The resulting scalar couplings to the SM fields are those of the SM Higgs boson couplings multiplied by the corresponding mixing angle \(\xi\). The latter is strongly constrained by negative results of the experimental searches, \(\xi\ll 1\). In this regime we can safely use the same notations \(S\), \(h\) and names for the true mass states in the scalar sector. The scalar effective interaction with quarks \(q\) and gluons \(g\) is described by the Lagrangian \[\mathcal{L}_{qg}=-\xi\,S\sum_{q}\frac{m_{q}}{v}\bar{q}q+\xi\,S\frac{\alpha_{s}\,N_ {h}}{12\pi\,v}G^{a}_{\mu\nu}G^{\mu\nu\,a}\,, \tag{3}\] where \(m_{q}\) are quark masses. The first term in (3) is from the Yukawa couplings of the SM Higgs boson. Then \(N_{h}\) heavy quarks, i.e. \(m_{q}\gg M_{S}/2\), induce the second term by quantum corrections, there \(G^{a}_{\mu\nu}\) is gluonic field tensor, \(a=1,\ldots,8\), and strong coupling \(\alpha_{s}\) (being the QCD analogue of the fine structure constant) is evaluated at the scale of the order of \(M_{S}\). This Lagrangian allows one to estimate the scalar decay rates to quarks, \[\Gamma(S\to\bar{q}q)=\xi^{2}\frac{N_{c}}{8\pi}\frac{m_{q}^{2}\,M_{S}}{v^{2}} \left(1-\frac{4\,m_{q}^{2}}{M_{S}^{2}}\right)^{3/2} \tag{4}\] (where \(N_{c}=3\) is the number of quark color states) and to gluons, \[\Gamma(S\to gg)=\xi^{2}\frac{N_{c}^{2}-1}{8}\frac{N_{h}^{2}\alpha_{s}^{2}}{3 2\pi^{3}}\frac{M_{S}^{3}}{v^{2}} \tag{5}\] (where \(N_{c}^{2}-1=8\) is the number of gluon states). Numerically, quark modes dominate over the gluon mode for heavy scalars. However, at GeV scale only \(u\), \(d\) and \(s\) quarks are relevant, \(N_{h}=3\), and gluons become important. Light scalars decay directly into meson pairs, and these hadronic decay rates can be described directly via effective interaction between the scalar and light mesons. It can be obtained by making use of the renorminvariance of the hadronic contribution to the trace of the energy-momentum tensor [10], \[T_{\mu}^{\mu}\equiv\sum_{q=u,d,s}m_{q}\bar{q}q-\frac{9\alpha_{s}}{8\pi}G^{a}_ {\mu\nu}G^{\mu\nu\,a}\,, \tag{6}\] which we present to the leading order in \(\alpha_{s}\). The last term in (6) comes from violation of the scale invariance by the trace anomaly due to the running of \(\alpha_{s}\) with energy. 
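As a quick numerical illustration of the partonic widths in eqs. (4) and (5), the short script below evaluates them with the factor \(\xi^{2}\) stripped off; the quark mass and \(\alpha_{s}(M_{S})\) values are assumptions chosen only for this example, not inputs used in the paper.

```python
# Sketch: partonic widths of eqs. (4)-(5) in GeV, divided by xi^2.
# Numerical inputs (m_s, alpha_s at the GeV scale) are illustrative assumptions.
import math

V_EW = 246.0   # Higgs vev [GeV]
N_C = 3        # quark colors
N_H = 3        # heavy quarks generating the gluon operator

def width_qq(M_S, m_q):
    """Eq. (4): S -> q qbar, per unit xi^2."""
    if M_S <= 2.0 * m_q:
        return 0.0
    beta3 = (1.0 - 4.0 * m_q**2 / M_S**2) ** 1.5
    return N_C / (8.0 * math.pi) * m_q**2 * M_S / V_EW**2 * beta3

def width_gg(M_S, alpha_s=0.3):
    """Eq. (5): S -> gg, per unit xi^2."""
    return (N_C**2 - 1) / 8.0 * N_H**2 * alpha_s**2 / (32.0 * math.pi**3) * M_S**3 / V_EW**2

if __name__ == "__main__":
    M_S = 2.0  # GeV
    print(width_qq(M_S, m_q=0.095), width_gg(M_S))  # strange-quark mode vs gluon mode
```

At a GeV-scale mass the gluon mode is comparable to or larger than the light-quark modes, which is why the hadronic matrix elements discussed next are the decisive ingredient.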
It is generated by one-loop triangle diagrams with virtual \(u,d,s\) quarks, and for heavy quarks the contributions of two terms cancel. This relation allows one to recast the light scalar interaction (3) in terms of quarks and \(T_{\mu}^{\mu}\) as \[\mathcal{L}_{T}=-\xi\,\frac{S}{v}\left(\left(1-\frac{2N_{h}}{27}\right)\!\! \sum_{q=u,d,s}m_{q}\bar{q}q+\frac{2\,N_{h}}{27}T_{\mu}^{\mu}\right). \tag{7}\] Therefore, to calculate the scalar decay rates to, say, pions, one must evaluate the matrix elements (\(a,b=1,2,3\)) \[\langle\pi^{a}(p)\pi^{b}(p^{\prime})|m_{u}\bar{u}u+m_{d}\bar{d}d |0\rangle \equiv \delta^{ab}\Gamma_{\pi}(s)\,, \tag{8}\] \[\langle\pi^{a}(p)\pi^{b}(p^{\prime})|m_{s}\bar{s}s|0\rangle \equiv \delta^{ab}\Delta_{\pi}(s)\,,\] (9) \[\langle\pi^{a}(p)\pi^{b}(p^{\prime})|T_{\mu}^{\mu}|0\rangle \equiv \delta^{ab}T_{\pi}(s) \tag{10}\] and the similar elements for decays into kaons, \(\eta\)-mesons, etc. The form factors entering eqs. (8)-(10) depend on the invariant mass of pion states, \(s=(p+p^{\prime})^{2}\). They can be calculated within the Chiral Perturbation Theory (ChPT). The leading order terms read \[\Gamma_{\pi}(s) = m_{\pi}^{2}\,, \tag{11}\] \[\Delta_{\pi}(s) = 0\,,\] (12) \[T_{\pi}(s) = s+2m_{\pi}^{2}\,, \tag{13}\] where \(m_{\pi}\) stands for the pion mass. Hence the amplitude of the scalar decay into pions is proportional to [11] \[G_{\pi}(s=M_{S}^{2}) \equiv 2\,T_{\pi}(s)+7\,\Gamma_{\pi}(s)+7\,\Delta_{\pi}(s) \tag{14}\] \[= 11\,m_{\pi}^{2}+2M_{S}^{2}\,. \tag{15}\] However, adopting these formulas for evaluation of the scalar decay rates was argued to be unreliable [8] due to the strong interaction of pions in the final states. The arguments were based on the usage of dispersion relations and extracted from \(\pi\pi\to\pi\pi\) data \(S\)-matrix elements. The corresponding corrections strongly depend on \(s\) and change the leading-order estimate of the hadronic decay rate by a factor upto a hundred [9]. Later the usage of dispersion relations has been questioned in literature, e.g. [12; 13], but no credible alternative estimate of the hadronic decay rates were presented. _2._ In this letter we make use of the \(q=u,d\) quark contributions to the gravitational [14] form factors of a pion, considered [15] in the timelike domain. They are defined via quark energy-momentum tensor as \[\begin{split}&\langle\pi^{a}(p)\pi^{b}(p^{\prime})|T_{q}^{\mu\nu} (0)|0\rangle\\ &\equiv\frac{\delta^{ab}}{2}\left((s\,\eta^{\mu\nu}-P^{\mu}P^{ \nu})\,\Theta_{1,q}(s)+\Delta^{\mu}\Delta^{\nu}\Theta_{2,q}(s)\right),\end{split} \tag{16}\] where \(P\equiv p+p^{\prime}\) and \(\Delta\equiv p^{\prime}-p\). Convolution of (16) with metric \(\eta_{\mu\nu}\) and summation over quarks, \(\Theta_{i}\equiv\Theta_{i,u}+\Theta_{i,d}\), gives for the quark form factor (8) \[\Gamma_{\pi}(s)=s\left(\frac{3}{2}\Theta_{1}(s)-\frac{1}{2}\Theta_{2}(s)\right) +2m_{\pi}^{2}\Theta_{2}(s)\,. \tag{17}\] The form factors \(\Theta_{1(2),q}(s)\) have been inferred by fitting the experimental data from Belle on \(\gamma^{*}\gamma\to\pi^{0}\pi^{0}\) scattering within the technique of Generalized Distribution Amplitudes [16; 17]. 
The fitting formulas read (we use the original notations from [15] correcting the obvious typo: extra \(\beta^{2}\)-factor in the resonance term in \(\tilde{B}_{20}\)): \[\Theta_{1,q}(s) = -\frac{3}{5}\tilde{B}_{10}(s)+\frac{3}{10}\tilde{B}_{20}(s) \tag{18}\] \[\Theta_{2,q}(s) = \frac{9}{10\beta^{2}}\tilde{B}_{20}(s) \tag{19}\] where \(\beta^{2}=\beta^{2}(s)\equiv 1-4m_{\pi}^{2}/s\) and \[\tilde{B}_{10}(s)= -\frac{10}{9}\left[\left(1+\frac{2m_{\pi}^{2}}{s}\right)M_{2(q)}^{ \pi}F_{q}^{\pi}(s)\right.\] \[\left.+\frac{3g_{f_{0}\pi\pi}\bar{f}_{f_{0}}}{2\sqrt{2}\sqrt{(M_{ f_{0}}^{2}-s)^{2}+\Gamma_{f_{0}}^{2}M_{f_{0}}^{2}}}\right]\mathrm{e}^{i\delta_{0}( \sqrt{s})},\] \[\tilde{B}_{20}(s) =\frac{10}{9}\beta^{2}\left[M_{2(q)}^{\pi}F_{q}^{\pi}(s)\right.\] \[+\left.\frac{g_{f_{2}\pi\pi}f_{f_{2}}M_{f_{2}}^{2}}{\sqrt{2}\sqrt {(M_{f_{2}}^{2}-s)^{2}+\Gamma_{f_{2}}^{2}M_{f_{2}}^{2}}}\right]\mathrm{e}^{i \delta_{2}(\sqrt{s})}\] with \(F_{q}^{\pi}(s)=(1+\beta^{2}s/\Lambda^{2})^{-1}\) and the relative contribution of quarks to the total pion momentum \(M_{2(u)}^{\pi}+M_{2(d)}^{\pi}=0.5\). The phase shifts for \(S\) and \(D\)-waves were taken from numerical fit of Ref. [18], and the first one was corrected above the kaon threshold as \(\delta_{0}(\sqrt{s})\to\delta_{0}(\sqrt{s})+a_{\delta}\left(\sqrt{s}-2m_{K} \right)^{bs}\). The values (at the scattering energy scale) of the resonance parameters used in the fit are presented in Tab. 1. These numbers coincide with those in Tab.1 of Ref. [15] after correcting the typo in the value of \(g_{f_{2}\pi\pi}\) which was presented there being smaller by a factor of 12.44 (however not affecting the code and final results). Numerical fit to the Belle data on \(\gamma^{*}\gamma\to\pi^{0}\pi^{0}\) revealed the values of the fitting parameters summarized in Tab. 2. With formulas and parameters above we accurately restore the real and imaginary parts of the form factors \(\Theta_{i,q}\) presented in Fig. 19 of [15]. There are two sets with similar accuracy of the fitting, which provides an estimate of the uncertainty in the gravitational form factors, and hence in the scalar decay rate to pions, associated with the exploited methods and experimental data. Numerical results for real, imaginary parts and the absolute value of \(\Gamma_{\pi}\) for \(\Theta_{i,q}\) determined by these fits are shown in Fig. 1. Remarkably, both fits at \(s\to 0\) approach the leading order prediction of ChPT (11). With \(x\equiv M_{S}^{2}/1\,\mathrm{GeV}^{2}\) we obtain a numerical approximation for the average between set 1 and set 2: \[|\Gamma_{\pi}|=(0.13+0.39x-0.062x^{2}+0.0041x^{3})\ \mathrm{GeV}^{2}. \tag{20}\] _3._ The estimated form factor contributes to the total scalar decay rate to pions (the rate to neutral pions is half of the rate to charged ones) as \[\Gamma(S\to\pi\pi)=\frac{3}{32\pi}\frac{49\xi^{2}\,|\Gamma_{\pi} (M_{S}^{2})|^{2}}{81\,v^{2}M_{S}}\,\beta(M_{S}^{2}) \tag{21}\] \[=(0.983+6.54x-1.12x^{2}+0.071x^{3})\cdot 10^{-8}\ \mathrm{GeV}\,,\] while the leading order ChPT result is obtained from (21) by replacing \(\Gamma_{\pi}\to G_{\pi}\), see (14). Both results are outlined in Fig. 2 along with the leading-order QCD calculations of the decay rate into gluons (5) and the next-to-leading order ones calculated for the light Higgs boson at the renormalization scale \(\mu=M_{S}\)[21]. There is also shown the decay width obtained using the NLO ChPT result for \(\Gamma_{\pi}\) and \(T_{\pi}\)[19; 20]. 
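The pion width can be reproduced numerically from the paper's own fit: the sketch below uses the polynomial \(|\Gamma_{\pi}|\) of eq. (20) and the prefactor of eq. (21), with \(m_{\pi}=0.135\,\mathrm{GeV}\) and \(v=246\,\mathrm{GeV}\) inserted as assumed numerical inputs.

```python
# Sketch: Gamma(S -> pi pi) from eqs. (20)-(21), per unit xi^2.
import math

V_EW, M_PI = 246.0, 0.135  # GeV

def gamma_pi_abs(M_S):
    """|Gamma_pi| fit of eq. (20) in GeV^2, with x = M_S^2 in GeV^2."""
    x = M_S**2
    return 0.13 + 0.39 * x - 0.062 * x**2 + 0.0041 * x**3

def width_pipi(M_S):
    """Eq. (21): total S -> pi pi width (charged plus neutral), per unit xi^2."""
    beta = math.sqrt(1.0 - 4.0 * M_PI**2 / M_S**2)
    return 3.0 / (32.0 * math.pi) * 49.0 * gamma_pi_abs(M_S)**2 / (81.0 * V_EW**2 * M_S) * beta

if __name__ == "__main__":
    for M_S in (0.5, 1.0, 1.5, 2.0):
        print(f"M_S = {M_S} GeV  ->  {width_pipi(M_S):.2e} GeV")
    # At M_S = 1 GeV this returns ~6e-8 GeV, consistent with the numerical
    # approximation quoted in the second line of eq. (21).
```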
We observe, that with our estimate of \(\Gamma_{\pi}\) the decay rate to pions reasonably matches that into gluons at \(M_{S}\simeq\) 1.5-2 GeV (light quark contribution (4) is negligible), and in this mass range the total ChPT leading order estimate reveals similar result. One may therefore speak on "gluon-hadron" (or in some sense "quark-gluon", as the pions are obviously quark states) duality. While the smaller fitting curve seems to be preferable, the deviations by no means may substantially exceed a factor of two, still consisting with uncertainties we expect in the fitting and calculations. One should bear in mind that the ChPT expansion is valid only up to dark scalar masses of order 1 GeV, although we extend the green line in Fig. 2 to higher \(s\) in order to demonstrate the overlapping of the results. Note that the scalar form factor behaviour within dispersion approach and ChPT were systematically compared in Sec. 2 of [22]. While both NNLO contribution of ChPT and the dispersion relation provide a decrease of \begin{table} \begin{tabular}{|c|c|c|c|} \hline Meson (\(h\)) & \(M_{h}\) (GeV) & \(\Gamma_{h}\) (GeV) & \(g_{h\pi\pi}\) & \(f_{h}\) (GeV) \\ \hline \(f_{0}(500)\) & 0.475 & 0.550 & 2.959 GeV & – \\ \hline \(f_{2}(1270)\) & 1.275 & 0.185 & 1.953 GeV\({}^{-1}\) & 0.0754 \\ \hline \end{tabular} \end{table} Table 1: Parameters of the hadronic resonances entering the fitting formulas for \(\tilde{B}_{i0}\). Figure 1: Real, imaginary parts and the absolute value of \(\Gamma_{\pi}\) for the fitting sets 1 and 2 of Tab. 2. \(\Gamma_{\pi}(s)\) at \(s\sim(0.5\,\mathrm{GeV})^{2}\), there is a quantitative discrepancy due to an underestimate of phase in ChPT. Moreover, at \(s\sim 1\,\mathrm{GeV}^{2}\) the phase evaluation in the Omnes approach should be strongly violated by the contributions of inelastic channels, in particular, \(KK\), making their account rather important. Indeed \(\Gamma_{\pi}\) exhibits much lower peak within 2-channels analysis [13]. Contrary to estimates of [8], our result for the decay rate as a function of scalar mass does not exhibit any peak-like structures, which might be attributed to the impacts of light scalar hadronic resonances. We find that impacts of some, in particular \(f_{0}(500)\) and \(f_{2}(1270)\), are small, while some, e.g. \(f_{0}(980)\), are not seen in the fit [15] and hence do not interfere in (8) being most probably bound states of four quarks. Note in passing that operator \(m\bar{q}q\) contributes also to \(T_{\pi}\). Its account implies the replacement \(49/81\to 1\) in eq. (21). _4._ The calculated decay rate of the hidden scalar into pions can be used along the decay rates into photons, leptons, etc, see e.g. [4], to evaluate the light scalar lifetime and entire pattern of branching ratios of the light scalar decays into the SM particles, see Fig. 3. Here for \(\Gamma_{\pi}(s)\) we use (20). In the mass region \(M_{S}\sim 1\,\mathrm{GeV}\) we expect uncertainties by a factor of 2-3 due to uncertainties in the gravitational form factor we used and due to our disregard of the gluonic contribution to the latter. It might be partially accounted along the quark contribution in the analysis of Ref. [15], which deserves an elaboration. We do not expect any induced by gluons features in the scalar decay rate to hadrons provided no light scalar resonances consisted of gluons. We use the corrected pattern in Fig. 3 to refine the experimental reach in the model parameter space as presented in Fig. 4. 
To further improve the accuracy in calculation of the scalar decay rates to hadrons, it is worth to infer other hadronic gravitational form factors, possibly from analyses of \(\gamma^{*}\gamma\to KK\), \(\gamma^{*}\gamma\to\eta\eta\) scatterings or \(J/\psi\to\gamma+\pi\pi\), etc, decays collected at \(c\)- and \(b\)-factories. We thank I. Timiryasov for stimulating discussions. OT is indebted to S. Kumano and Qin-Tao Song for helpful correspondence. The work is partially supported by the Russian Science Foundation RSF grant 21-12-00379. The work of EK is supported by the grant of "BASIS" Foundation no. 21-2-10-37-1. Figure 4: Expected reach of existing and proposed experiments in the model parameter space updated in accordance with the result for decay rate to pions. Peaks of original curves from [7] corresponding to the resonance enhancement of \(\Gamma(S\to\pi\pi)\) have been smoothed accordingly. Figure 3: Branching ratios of hidden scalar to leptons, photon pairs, pions (using the result of this work), kaons (within the framework of ChPT), \(s\bar{s}\) pairs and gluons (NLO QCD calculation). The branchings are shown with solid lines. The dashed line shows hidden scalar lifetime multiplied by \(\xi^{2}\). Figure 2: The decay width of hidden scalar to pions calculated using \(\Gamma_{\pi}\) from Fig. 1 for the fitting sets 1 and 2 (light and dark blue lines) and ChPT (green line) divided by \(\xi^{2}\). Light green line shows the decay width to pions divided by \(\xi^{2}\) obtained using the NLO ChPT results for \(\Gamma_{\pi}\) and \(T_{\pi}\)[19; 20]. The result of LO (NLO [21]) QCD calculation for the decay width of the hidden scalar to gluons is shown in red (dark orange), the renormalization scale is \(\sqrt{s}\equiv M_{S}\).
2304.08660
(LC)$^2$: LiDAR-Camera Loop Constraints For Cross-Modal Place Recognition
Localization has been a challenging task for autonomous navigation. A loop detection algorithm must overcome environmental changes for the place recognition and re-localization of robots. Therefore, deep learning has been extensively studied for the consistent transformation of measurements into localization descriptors. Street view images are easily accessible; however, images are vulnerable to appearance changes. LiDAR can robustly provide precise structural information. However, constructing a point cloud database is expensive, and point clouds exist only in limited places. Different from previous works that train networks to produce shared embedding directly between the 2D image and 3D point cloud, we transform both data into 2.5D depth images for matching. In this work, we propose a novel cross-matching method, called (LC)$^2$, for achieving LiDAR localization without a prior point cloud map. To this end, LiDAR measurements are expressed in the form of range images before matching them to reduce the modality discrepancy. Subsequently, the network is trained to extract localization descriptors from disparity and range images. Next, the best matches are employed as a loop factor in a pose graph. Using public datasets that include multiple sessions in significantly different lighting conditions, we demonstrated that LiDAR-based navigation systems could be optimized from image databases and vice versa.
Alex Junho Lee, Seungwon Song, Hyungtae Lim, Woojoo Lee, Hyun Myung
2023-04-17T23:20:16Z
http://arxiv.org/abs/2304.08660v1
# (Lc)\({}^{2}\): LiDAR-Camera Loop Constraints ###### Abstract Localization has been a challenging task for autonomous navigation. A loop detection algorithm must overcome environmental changes for the place recognition and re-localization of robots. Therefore, deep learning has been extensively studied for the consistent transformation of measurements into localization descriptors. Street view images are easily accessible; however, images are vulnerable to appearance changes. LiDAR can robustly provide precise structural information. However, constructing a point cloud database is expensive, and point clouds exist only in limited places. Different from previous works that train networks to produce shared embedding directly between the 2D image and 3D point cloud, we transform both data into 2.5D depth images for matching. In this work, we propose a novel cross-matching method, called (\(LC\))\({}^{2}\), for achieving LiDAR localization without a prior point cloud map. To this end, LiDAR measurements are expressed in the form of range images before matching them to reduce the modality discrepancy. Subsequently, the network is trained to extract localization descriptors from disparity and range images. Next, the best matches are employed as a loop factor in a pose graph. Using public datasets that include multiple sessions in significantly different lighting conditions, we demonstrated that LiDAR-based navigation systems could be optimized from image databases and vice versa. Localization; Sensor Fusion; Deep Learning Methods; Representation Learning ## I Introduction Global localization is a key problem in mobile robotics. Although the global navigation satellite system (GNSS) can provide accurate location data in open areas, it may fail to provide correct positions in urban or indoor environments owing to occlusion or blackout [1]. Thus, mobile robots should adopt localization systems to determine their positions on a map based on observations from an operational sensor system. Among them, visual sensors such as cameras, are widely used for their price competency and data intuitiveness. However, cameras suffer from appearance changes and require algorithms to be robust to such variances. Focusing on robust visual landmarks for place recognition, methods such as bag-of-words model [2] and approaches using convolutional neural networks (CNNs) [3, 4, 5, 6] have been introduced. Despite the advances in image-based place recognition algorithms, cameras are not always the best option for robot localization because the images are vulnerable to environmental changes. Therefore, LiDAR-based navigation systems are generally utilized for localization and mapping of mobile robots. LiDAR-based simultaneous localization and mapping (SLAM) has succeeded in constructing precise point cloud maps [7] and estimating relative poses at loop closures [8, 9]. With the provided point clouds obtained from a prior visit, a loop closure can be defined between the current frame and point cloud database. However, the point cloud database may not always be available owing to the limited accessibility of LiDAR sensors. Compared with cameras, LiDAR sensors are bulky and expensive and consume more energy. Therefore, the existing databases are often captured using vision-based systems owing to their economic feasibility for database construction and update. However, the visual information and a point cloud differ in data representation, known as the discrepancy of modality. 
Thus these abundant visual databases cannot be directly used for LiDAR-based platforms. Therefore, it is necessary to find methods that enable the utilization of these visual databases for LiDAR based systems. To address this problem, image-to-point-cloud fusion has Fig. 1: Example of our cross-modal matching scenario to overcome the database scarcity of point clouds. For instance, when a LiDAR-based SLAM system traverses through an area without prior point clouds (yellow dash), we propose correcting the global poses with the geog candidates (green) from the image database and re-localizing the LiDAR-based system. been studied recently. Many researchers have studied methods for extracting structural information from visual data to align with the point cloud [10, 11, 12, 13, 14]. Other studies have proposed directly matching visual data with point cloud using deep neural networks (DNNs) [15, 16, 17]. These studies demonstrated some possibilities that allowed vision-based systems to operate within known point cloud maps (i.e. 2D-to-3D matching). However, these methods are not appropriate for achieving place recognition using a point cloud as a query on the visual databases (i.e. 3D-to-2D matching). Additionally, 3D-to-2D matching is more difficult because the precision of structural details extracted from the images is insufficient to directly build a point cloud map for LiDAR-based place recognition. Hence, 3D-to-2D matching has not yet been adequately examined. Therefore, we propose a novel method, called _(LC)\({}^{2}\)_, to achieve 3D-to-2D matching for LiDAR-based systems. To this end, we formulate this problem as depth image matching by transforming both the image and point cloud data into a depth form. Not to limit the application within scale-aware depth obtained from SLAM, we created a database of unscaled depths from images and LiDAR scans, and trained a neural network to encode depth into localization descriptors. Further, with the relative poses between the LiDAR scan and the geotags of images, a loop closure is composed without a point cloud database, as illustrated in Fig. 1. The main contributions of this study are as follows: * We propose a vision-based place recognition pipeline for LiDAR-based navigation systems to enable LiDAR localization without a point cloud database. * Our module provides a shared embedding between the point clouds and images along with cross-modal loop closures, which could be formulated as global pose constraints for both vision and depth-based systems. * We evaluate our system using public datasets with broad environmental changes and verify that the proposed LiDAR-to-camera matching is robust to appearance changes. ## II Related Works In this section, we first review place recognition and localization methods based on visual and depth features and then introduce the multi-modal fusion methods. ### _Image-Based Place Recognition_ Visual place recognition has been developed based on bag-of-words [2] or view synthesis [3], which primarily use handcrafted features or their vector of locally aggregated descriptors (VLAD) [18] and is less robust to noise. To improve the generalization ability, CNN models have been proposed to transform the image features into localization descriptors [4]. This approach has exhibited superior robustness to changes in appearance. 
To efficiently translate place-distinctive image encoding into localization descriptors, the negative and positive samples were trained in pairs using weak supervision in NetVLAD [5] with a triplet loss [19]. Radenovic _et al._ discovered more consistent and distinctive feature representation from images, such as contrastive loss with generalized mean (GeM) pooling [20]. The study considers every image with sufficiently large number of co-observed 3D points or similar features for a training pair. This differs from weak supervision with triplet loss, which selects pairs by the image location and their descriptor distance. In [21], a method to improve feature matching with depth estimation has been introduced, opening a potential to enhance place recognition with estimated depth. In recent studies, methods unifying local and global features have been proposed for expansion over large-scale visual place recognition [22] to geometrically verify that the local feature matches after global image searching. ### _Point-Cloud-Based Place Recognition_ LiDARs are widely used in robotics owing to their high spatial resolution. However, the rich textures in RGB images are rarely recorded in LiDAR and only structural information can be used for place recognition. As a result, point cloud-based localization is reduced to a problem of efficiently transforming the geometrical information between points. PointNetVLAD [23] was proposed to adopt PointNet [24] as a module to transform input point clouds into localization descriptors. The study employed triplet and quadruplet losses to train a discriminative network. The point cloud submaps were cropped and downsampled to a 25 \(\times\) 25 m bounding box and then transformed to a descriptor by the network. Point cloud representation is spatially sparse and an appropriate downsampling approach must be identified. A method of outlier rejection by a robust kernel with a reduced degree of freedom [9] was proposed to improve voxel downsampling used in PointNetVLAD. Previous studies have focused on efficient and robust point cloud representation. However, a projective depth image representation may be sufficient for place recognition tasks that mainly consider on-sight objects. To exploit range images with a dense form, range-image-based classification [25, 26] or Monte Carlo localization [27] based on LiDAR-generated range images was studied. Because our goal is to match point cloud to RGB images, we focus on the LiDAR-generated projective range image to match with the natively projective images. ### _Cross-Modal Place Recognition_ Cross-modal matching, particularly camera-LiDAR fusion, is required in two scenarios. The first is to enhance the less accurate sensor using more sophisticated sensors and the second is to overcome the absence of references with a less accurate but easily accessible database. In the early stages of image-to-point-cloud fusion, LiDAR points were augmented with intensity values and rendered for further matching [28]. However, the process of photometric rendering and matching is computationally expensive. Therefore, subsequent studies focused on extracting shared representation between modalities. In [10], the researchers used the 3D locations of visual features with a triangulated depth to construct local feature clouds and aligned them with the LiDAR map via graph optimization. Similarly, in [11], the dense depth from a stereo camera was used to align the images and LiDAR data. 
In the study, relative poses were calculated by minimizing the sum of the depth residual between the sensor point clouds. To use higher-level features, Yu _et al._ aligned the images and point clouds with co-visible 2D to 3D edge constraints [13]. For a generalization with deep learning-based approaches, 2D3D-MatchNet [12] used a network to cross-match the features in both the images and ground-removed point cloud submaps. In the P2-Net [14], a batch-hard detector [29] was selected to produce shared embedding between the different sensor modalities. However, training and testing were performed only in submaps with sizes of less than a meter, thus this approach remains unsuitable for robotic applications. For large-scale place recognition, global 2D pose was provided using a satellite image and the radar matching network [30], or by satellite image and building outline matching [31]. Additionally, the possibilities of camera-LiDAR shared embedding were presented in [32] and [33] by transforming a pair of the image and point cloud with CNNs or by transforming the point clouds and images using 3D and 2D CNN, respectively. Similar to the approaches that establish a shared embedding between the image and point cloud, we propose the transformation of images and point clouds into localization descriptors. However, unlike the previous approaches that directly match the Cartesian sparse point cloud to the dense projective images, we propose to avoid this modality discrepancy by transforming both data into a projected depth form. Subsequently, with the depth images from the image and point cloud, respectively, we search for the best matching images in the database and utilize the best match as a loop constraint. Moreover, we propose to use the location of matched images as a global loop closure factor with pose graph optimization considering that the geotags of images are often available in GPS-denied regions [34, 35]. ## III (Lc)\({}^{2}\): Cross-Modal Place Recognition In this section, we describe the details of our LiDAR place recognition with an image database, as illustrated in Fig. 2. To fuse information from different modalities, we first transform the images and point clouds into the same domain to obtain the depth images. We use a single image for depth estimation; however, sequences of images may also be selected for better performance. We then train a network to learn the descriptors from each image. Finally, a LiDAR pose graph is optimized with visual loop constraints by running a pose graph optimization based on the match candidates. ### _Data Preprocessing_ #### Iii-A1 Range Image Generation LiDAR point clouds are provided in a sparse form, with Cartesian coordinates (\(x,y,z\)). To reduce the expressive differences of different modalities, we represent a point cloud as a range image, \(I(u,v)\), whose size is \(H\times W\); \(H\) and \(W\) denote the number of ray channels at its elevation angle and the number of pulses for every single channel along the horizontal direction, respectively. Each pixel, \((u,v)\), is then assigned to the corresponding point \(p(x,y,z)\) by the following relation: \[\begin{split} u&=\frac{W}{2}\cdot(1-\tan^{-1}(y,x) \cdot\pi^{-1})\\ v&=H\cdot(F_{\text{up}}-\sin^{-1}(z\cdot d^{-1})) \cdot F^{-1},\end{split} \tag{1}\] where \(F_{\text{up}}>0\) and \(F>0\) denote the upward and total vertical field of view (FoV), respectively, and \(d=\sqrt{x^{2}+y^{2}}\) denotes the depth value. The range image is then resized to the appropriate size for the network input. 
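A minimal numpy sketch of the projection in eq. (1) is given below; the resolution and vertical field of view are placeholder sensor parameters, and the elevation angle is computed from the full 3-D range (the usual range-image convention), which is our reading of how \(d\) enters eq. (1).

```python
# Sketch of the LiDAR spherical projection of eq. (1): point cloud -> range image.
# H, W, fov_up_deg, fov_down_deg are placeholder sensor parameters.
import numpy as np

def points_to_range_image(points, H=64, W=1024, fov_up_deg=2.0, fov_down_deg=-24.8):
    """points: (N, 3) array of (x, y, z) in the sensor frame -> (H, W) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    fov = fov_up - fov_down

    yaw = np.arctan2(y, x)
    pitch = np.arcsin(z / r)
    u = 0.5 * W * (1.0 - yaw / np.pi)       # horizontal pixel coordinate
    v = H * (fov_up - pitch) / fov          # vertical pixel coordinate
    u = np.clip(np.floor(u), 0, W - 1).astype(int)
    v = np.clip(np.floor(v), 0, H - 1).astype(int)

    img = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(-r)                  # write far points first, keep the closest return
    img[v[order], u[order]] = r[order]
    return img
```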
#### Iii-A2 Depth Image Generation Because the images captured by camera sensors do not directly contain the distance, depth estimation must be performed to generate depth images. Among the various depth estimation methodologies [36, 37], we exploit ManyDepth [37] to create depth images because it allows the module to extract depths from both single frame and short sequences. These depth images are fed into the network in a disparity form. Each depth value follows the inverse depth parametrization to make closer objects numerically dominant. ### _Depth-Image-Based Matching_ #### Iii-B1 Degree of Similarity Our proposed pipeline requires sets of geotagged range and monocular images for training. Fig. 2: Pipeline of our LiDAR scan matching with an image database. We first transform the image database into unscaled depth images and LiDAR scan to a range image. Either the range and unscaled depth (i.e. disparity) is used as input for the encoders above. After the image descriptors are delivered from the network, we run a global search (dashed arrow) and pose graph optimization based filtering to ignore the false-positive loops. However, although the geo-tagged data pairs are provided, we still need to verify whether the range and corresponding images are sufficiently overlapped. Otherwise, waste pairs whose viewpoints are not overlapped can be regarded as inputs, hindering the training convergence. In particular, as shown in Fig. 3, the vertical FoV of a typical LiDAR is vertically smaller than that of a camera; it is extremely important to check the extent to which the two areas are overlapped. To resolve this overlap determination problem, the degree of similarity [6] is defined to quantitatively approximate the overlapped field of interest between two sensors, as shown in Fig. 4. A constant value is assigned to the similarity of the range-disparity pairs measured simultaneously because the extrinsic parameters do not change during the experiment. The overlap measure is calculated with ground truth poses for the pairs defined over different measurements. This degree of similarity \(\psi\ \in\ [0,1]\) is used for weighting the distance between the localization descriptors. #### Iii-B2 Network Architecture We deploy a series of encoders based on VGG16 [38] and a pooling layer. Given the two types of depth images, these two inputs \(x\) are transformed into the feature representations \(f(x)\ \in\ \mathbb{R}^{H_{x}\times W_{x}\times D}\), by the Siamese network architecture (Fig. 2); \(H_{e}\) and \(W_{e}\) denote the sizes of the outputs from the last layer of encoders, and \(D\) represents the dimension of the feature space. It should be noted that these two inputs, \(x\), are range images from the LiDAR and an unscaled depth from the camera; however, the inputs, \(x\), can be freely selected between the range or disparity (see Section IV). The dimension of \(f(x)\) is identical to those of the outputs of the last convolutional layer of VGG16, which is conv5. The features \(f(x)\) are then transformed into the localization descriptor \(\hat{f}(x)\ \in\ \mathbb{R}^{D_{loc}}\) by the pooling layer, where \(D_{loc}\) is the dimension of the localization descriptor. To achieve cross-modality learning, two training phases are proposed by changing the pooling layer. First, we train the network with GeM [20] as a pooling layer to generate a similar descriptor when the data is actually measured in a similar place, even if this architecture introduces inputs of the different modalities. 
Next, the pooling layer is changed to NetVLAD [5] in Phase 2, which forces the encoder output to converge to the appropriate place-distinctive features. This is detailed as follows. #### Iii-B3 Phase 1 Using Contrastive Loss Empirically, it was found that single-phase learning or weight sharing failed to converge owing to the large modality differences between the two sensors, and the difference in noise characteristics between them caused the divergence of the Siamese network architecture. Therefore, we need to pre-train the two distinct encoders from scratch with the depth images. We aim to create place-distinctive descriptors from scratch, despite the modality discrepancy. To this end, we use self-supervised pre-training, which has been demonstrated to be highly effective for initialization [39]. We propose a modified contrastive loss that can successfully pre-train the encoders. First, we search every overlap (\(\psi\neq 0\)) between the measurements in the training set and select them in pairs (\(i,j\)). During Phase 1, range images from the LiDAR scans are horizontally cropped by a fixed size, to increase the number of samples. Details are in Section III-C1. Not only pairs from identical image sources but also cross-sensor pairs are selected to enforce the network to learn the consistent representation over the modality. Subsequently, with the degree of similarity calculated for each pair and a predefined constant \(\tau\), the proposed loss enforces the encoders to learn common representations between the overlapping depth images for each pair (\(i,j\)): \[\begin{split}\mathcal{L}_{i,j}^{\text{M}}=\psi_{i,j}\cdot d(x_{i },x_{j})^{2}+\\ (1-\psi_{i,j})\cdot\text{max}(\tau-d(x_{i},x_{j}),0)^{2},\end{split} \tag{2}\] where \(x_{i}\) and \(x_{j}\) denote the two input images, \(d\) is the distance in the localization descriptor defined by \(d(x_{i},x_{j})=||\hat{f}(x_{i})-\hat{f}(x_{j})||\), and \(\hat{f}\) is the output from the pooling layer, as defined in the previous section. The similarity between the pairs (\(i,j\)) is represented as \(\psi_{i,j}\) and it motivates the localization descriptors to have a distance equal to the degree of similarity. The overall loss is then defined by the summation over all the overlapping pairs (\(i,j\)): \(\mathcal{L}^{\text{M}}\ =\ \sum_{(i,j)}\mathcal{L}_{i,j}^{\text{M}}\). #### Iii-B4 Phase 2 Using Triplet Loss After the convergence of contrastive training, we change the pooling layer to NetVLAD and apply triplet margin loss to force the network to reweigh the importance of the depth features for the place recognition task. In this stage, the range images from a LiDAR are cropped by the size of the camera FoV to ensure that the descriptors converge. The triplet samples are selected based on the geometrical distances between the features. For example, positive pairs \(\mathbf{p}\) are selected from the samples within 10 meters, whereas the negative pairs \(\mathbf{n}\) are randomly sampled from measurements farther than 25 meters. The triplet margin loss consists of the summation over every triplet \(k\) defined for a sample \(x_{i}\), as follows: \[\mathcal{L}_{i}=\sum_{(i,k)}l\Big{(}d(x_{i},\mathbf{p}_{i,k})-d(x_{i},\mathbf{ n}_{i,k})+m\Big{)}, \tag{3}\] where \(l\) is the hinge loss \(l(x)=\text{max}(x,0)\) and \(m\) is a margin. By applying this triplet loss for every sample \(i\) and its selected pair \(j\), we train the network to learn place-distinctive localization descriptors from the depth images. Fig. 
4: Visualization of the degree of similarity based on FoV overlap. Here, the LiDAR’s (circle) interest area is marked in cyan and the camera’s (triangle) in yellow. FoV and maximum effective range were used to define the interest area. The value of \(\psi\) is displayed that represents the interest area overlap between 0 (completely distinct) and 1 (completely overlapped). Fig. 3: Sample overlay of RGB and range images from a camera and single LiDAR scan (360\({}^{\circ}\)). As illustrated above, only small fractions of sensor measurements overlap. ### _Data Augmentation_ We use the sequences of Vision for Visibility Dataset [40] for training. The depth estimation module is fine-tuned with _driving-vision_ sequences and all the other experiments are conducted using _driving-full_ sequences. All the _day_ sequences except _day2_ are used for the Phase 2, and tests are conducted with the _evening_ and _night_ sequences. The geotags in the image are assigned by the GNSS signal obtained at the time of image acquisition. #### Iii-C1 Range Image Augmentation To define the similarity metric between the LiDAR range images and increase the number of samples, we divide the panoramic range image into eight FoV-masked overlapping images, as in [41], and calculate the degree of similarity for training Phase 1. #### Iii-C2 Scale Augmentation The network is trained to minimize the scale estimation error from the monocular depth estimates, by directly multiplying a random constant to the depth. For training Phase 1, the multiplied random variable modifies up to \(r\)% of each depth image, as shown in Fig. 5. ### _Loop Closure as a Factor in Graph_ As a result of the training discussed earlier, the place recognition results are obtained as the \(N\) closest indices of the database for each query image. However, the raw correspondences are not always correct and must be filtered for trajectory optimization. Because the false-positive loop constraints can result in SLAM divergence, we propose filtering the false-positive loops using pose graph optimization. Assuming sufficient inliers, we construct a pose graph based on the odometry constraints from the LiDAR odometry and raw loop closures from our cross-matching place recognition module. In the factor graph \(F\), consisted of nodes \(\phi\), the keyframes from odometry \(\phi_{i}\) and geo-tag locations \(\phi_{i}^{G}\) from each best-matching data are set up as factor nodes and their locations \(X_{i}\), \(X_{i}^{G}\) are arranged as variable nodes. Our goal is to identify informative loop closure edges \(e_{i}^{G}\) defined between \(\phi_{i}\) and \(\phi_{i}^{G}\) by solving the maximum a posteriori problem and calculating the information of each loop closure. We assign the initial locations of \(X_{j}\) in a local coordinate and iteratively solve the optimum of the following equation: \[\phi(X)=\prod_{i}\phi_{i}(X_{i}). \tag{4}\] The covariances of loops are set higher than the odometry covariances; for example, we set up the odometry covariance as \(10^{2}\) and loop covariance as \(10^{4}\). (4) is solved using a Levenberg-Marquardt optimizer and we obtain both variables after optimizing \(x_{j}\) with the information \(I_{i}^{G}\) of each edge \(e_{i}^{G}\). We then filter the false-positive loop closures by the \(L2\)-norm of the diagonal elements in \(I_{i}^{G}\). 
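The two training objectives of Section III-B can be written compactly; the following PyTorch sketch of eqs. (2) and (3) uses our own variable names, assumes the descriptors have already been produced by the two encoders, and omits pair/triplet mining.

```python
# Sketch of the Phase-1 similarity-weighted contrastive loss (eq. (2)) and the
# Phase-2 triplet margin loss (eq. (3)); the defaults tau = 0.5 and m = 0.1
# follow the values reported in the experiments.
import torch

def phase1_contrastive(f_i, f_j, psi, tau=0.5):
    # f_i, f_j: (B, D) descriptors of overlapping pairs; psi: (B,) degree of similarity
    d = torch.norm(f_i - f_j, dim=1)
    return (psi * d**2 + (1.0 - psi) * torch.clamp(tau - d, min=0.0)**2).sum()

def phase2_triplet(anchor, positive, negative, m=0.1):
    # anchor, positive, negative: (B, D) descriptors; hinge on the distance margin
    d_pos = torch.norm(anchor - positive, dim=1)
    d_neg = torch.norm(anchor - negative, dim=1)
    return torch.clamp(d_pos - d_neg + m, min=0.0).sum()
```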
## IV Experimental Results ### _Place Recognition_ #### Iv-A1 Vision for Visibility Dataset We trained and tested our algorithm on a public dataset ViViD++ [40], consisting of training set around \(233\)k images and \(30\)k scans before augmentation. The similarity constant \(\tau\) is set to \(0.5\), the margin \(m\) is set to \(0.1\) and the depth scaling variable \(r\) is set to \(20\). The sequences consisted of multiple repetitions over a similar trajectory during day and night, with changing lighting conditions. For the experiment, we set up a scenario of revisiting places at different times from prior visits. Because an image database could be constructed at the time when the lighting conditions were favorable, we used images from _day2_ as the database. As illustrated in Figs. 6 and 7, the re-identification of a place was easily accomplished with every module when matching images in similar lighting conditions. However, at night, image-based place recognition failed to identify the closest matches. Conversely, our cross-matching and point-cloud-based baseline were not affected by the lighting condition changes. Both the proposed method and the baseline use LiDAR scans, but our method does not require a point cloud database for matching. It should be noted that all the results in this subsection were obtained without using the information-based filtering presented in Section III-D. In Tables I, II, and III, we report the recall performance of every matching scenario within and across the data sources' modality, e.g. 3D-2D represents LiDAR queries matched into image database. As is evident from Tables I and II, matching between the identical modalities resulted in the best top-1 performance and our module performed better at 1% recall. This is because of the limitations of cross-modal matching Fig. 5: Depth augmentation during Phase 1. We multiplied a random scaling constant \(r\) to the estimated depth values from the monocular disparity image, to randomly scale \(\pm r\)% of the depth image. The examples of the depth modification are shown from the second to fourth columns. Fig. 6: Top N recall of visual place recognition (solid yellow), point-cloud-based place recognition (dotted black), and the proposed cross-modal place recognition (magenta solid). Fig. 7: Precision-recall curve of visual place recognition (yellow) and the proposed cross-modal place recognition (magenta). with unscaled depth images. For example, unconstrained depth images fail the matching. The cases are explained in more detail in Section V-C. #### Iv-A2 Oxford Robotcar Dataset To evaluate environmental changes other than appearance, we used another public dataset with seasonal changes. The Oxford Robotcar Dataset [42] involves multiple runs over the same trajectory, repeated forty-four times for more than a year. From the script that produces a 3D point cloud map from 2D scans using ground truth poses, we first constructed a point cloud map from the scans and transformed it into a LiDAR range image using the projection method proposed in Section III-A1. The test and train sequences were split by non-overlapping regions, 70% and 30% for training and testing, respectively, as in [33]. The performance metrics of our module are listed in Table III. Our module outperformed the baseline in cross-matching; however, showed lower performance when matching identical data types. 
We presume that the lower performance was resulted from the limitations of the depth module and information loss during the image-to-depth transformation in 2D-2D and the cropped LiDAR FoV in 3D-3D. ### _Loop Closure_ As mentioned in Section III-D, we filtered the raw correspondences from cross-matching with factor graph optimization which was implemented using GTSAM [43]. Fig. 8 presents the matching results for two different trajectories (_day1_ and _night_). The raw correspondences from our module before filtering are shown in the left column, whereas the optimized correspondences after filtering are shown in the right column. Only matches with a descriptor distance lower than 0.1 were plotted, and only 46% (285 out of 610) matches were correct; the others were false-positive loops. After information-based filtering, 90% (87 out of 97) matches were found to be correct. There was a trade-off between the number of remaining matches and ratio of the correct match; the threshold \(\tau\) applied to the trace of the information matrix should be raised if the locations from the geo-tags are unreliable. ## V Discussions & Future works ### _Image and Depth-Based Local Features_ To identify the image regions where the network extracts informative descriptors, the matching results of the local deep features are presented in Fig. 9. The figure shows the matching results of the local descriptors for the RGB and depth images. The local descriptors were extracted from the outputs of their ground truth pairs from conv5. It can be seen that most image-based features are based on visual features such as crosswalks, color boundaries, or empty textures in the sky. Meanwhile, structural features like road signs, trees, or road boundaries are extracted in our encoder module. Assuming a LiDAR to camera matching case, these correspondences become a perspective-\(n\)-point problem, and relative poses are solved by solutions such as EP\(n\)P [44]. ### _Image-Based Loop Closures for LiDAR-Based Systems_ In Fig. 10, we compared the performance of our cross-matching with visual place recognition (VPR)-based loop \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Top-1** & 2D-2D & 3D-3D & 2D-3D & 3D-2D \\ \hline PointNetVLAD [23] & - & **0.8133** & - & - \\ \hline NetVLAD [5] & 0.0093 & - & - & - \\ \hline GCL [6] & 0.0335 & - & - & - \\ \hline Ours & 0.0078 & 0.5482 & 0.0046 & **0.4938** \\ \hline \hline **Top-1\%** & 2D-2D & 3D-3D & 2D-3D & 3D-2D \\ \hline PointNetVLAD [23] & - & 0.9214 & - & - \\ \hline NetVLAD [5] & 0.9252 & - & - & - \\ \hline GCL [6] & 0.9606 & - & - & - \\ \hline Ours & **0.9685** & **0.9606** & **0.9598** & **0.9457** \\ \hline \end{tabular} \end{table} TABLE II: Recalls of night-day2 matching. Fig. 8: Raw (left) and filtered (right) loop closures based on feature distance and information. True-positive loop closures are indicated in green and false-positives in red. Fig. 9: Local feature matches of the same place at different times, from (a) image-based features (query at night and database from daytime) and (b) depth-based local features (LiDAR query at night and database of disparity image, transformed from daytime). In contrast to few and inaccurate match pairs (e.g. matches in the featureless sky) in the RGB images, the depth images are invariant and most features are re-identified despite the appearance change. closures after factor graph optimization. 
For the experiment, we generated noisy odometry with a heading error of 30\({}^{\circ}\) and assigned relative poses from top-1 estimates of every query image. For a fair comparison, the depth values were assigned from deskewed LiDAR scan for image-based features. Consequently, VPR-based loop closures succeeded only in the day-day matching but failed in night-day matching owing to the false-positive loop closures and the failure of depth estimator upon the illumination changes. However, our cross-matching based on the LiDAR scan query is nearly invariant to lighting conditions, and our module maintains a similar RMSE level on both matching scenarios. ### _Limitations of Projections for Place Recognition_ As mentioned in Section V-B, non-informative loop closures could be filtered out through pose graph optimization. However, some cases of false-positive loop closures still exist. Filtering based on the feature distance was insufficient for such failure cases, as shown in Fig. 11. In the first case, close and unique objects are absent, and only distant landscapes are present, as shown in Fig. 11(a). Large structures at a distance do not experience significant viewpoint changes after translation and the accurate location cannot be determined even when the matching score is high. As shown in the figure, large buildings do not experience viewpoint changes, although the lower picture of Fig. 11(a) were taken 86 m apart from the upper one. Moreover, urban structures such as street lamp poles are non-informative and highly repetitive, thus lowering the feature distances of false positive matches. For cases with underdetermined poses, there were no sufficient planes to constrain the translation along a specific axis, as shown in Fig. 11(b). These circumstances were the main sources of false-positive loop closures in our experiments. ## VI Conclusion In this paper, we proposed a cross-modal place recognition algorithm for LiDAR-camera systems. We experimentally demonstrated that images and point clouds can be transformed into shared neural embedding. The developed method is expected to widen the possibility of fusion between two complementary sensors, namely LiDAR and camera, starting from data intercompatibility and further to spatial AI.
2303.03792
The Formulation of Scaling Expansion in an Euler-Poisson Dark-fluid Model
We present a dark fluid model described as a non-viscous, non-relativistic, rotating, and self-gravitating fluid. We assume that the system has spherical symmetry and that the matter can be described by a polytropic equation of state. The induced coupled non-linear partial differential equation system is solved using a self-similar time-dependent ansatz introduced by L. Sedov and G. I. Taylor. Such solutions have been used successfully to describe blast waves induced by an explosion since the Guderley-Landau-Stanyukovich problem. We show that this ansatz provides new solutions that are consistent with the Newtonian cosmological framework and can be applied to describe normal-to-dark energy on the cosmological scale.
Balázs Endre Szigeti, Imre Ferenc Barna, Gergely Gábor Barnaföldi
2023-03-07T11:00:13Z
http://arxiv.org/abs/2303.03792v3
# The Formulation of Scaling Expansion in an Euler-Poisson Dark-fluid Model ###### Abstract We present a dark fluid model described as a non-viscous, non-relativistic, rotating, and self-gravitating fluid. We assumed that the system has spherical symmetry and the matter can be described with the polytropic equation of state. The induced coupled non-linear partial differential equation system was solved by using a self-similar time-dependent ansatz introduced by L. Sedov and G. I. Taylor. These kinds of solutions were successfully used to describe blast waves induced by an explosion since the Guderley-Landau-Stanyukovich problem. We have found that such solutions can be applied to describe normal-to-dark energy on the cosmological scale or dark-fluid velocity profile on the galactic scale. Dark Fluid; Sedov-Taylor Ansatz, Self-similarity + Footnote †: journal: Article ## 1 Introduction In the second half of the 20\({}^{\text{th}}\) century, various self-similar solutions have been found after Gottfried Gunderley's famous discovery of spherically symmetric self-similar solutions that describe an imploding gas that collapses to the center [1]. In this paper, we used those kinds of self-similar solutions which were found by Leonid Ivanovich Sedov and Sir Geoffrey Ingram Taylor independently during the 1940s [2, 3]. Despite the fact that such models are well-known for decades they have recently received attention again. This _ansatz_ has been already applied successfully in several hydrodynamical systems, like the 3-dimensional Navier-Stokes and Euler equations [4], and heat equation [5, 6], or star formulation [7]. The existence of the dark matter was first proposed by the Dutch astronomer Jacobus Cornelius Kapteyn [8] and became widely known through Zwicky's famous work from 1933 [9]. During the second half of the century, solid experimental evidence was provided by Vera Rubin, Ken Ford, and others [10, 11]. However, the general existence and specified properties of dark matter, are still one of the most disputed topics in theoretical astrophysics. Dark fluid is one of the theoretical attempts to describe the properties of dark matter and its unification with dark energy into one hypothesized substance [12]. Our goal is to use the Sedov-Taylor _ansatz_ to describe the time evolution of a dark fluid-like material characterized by coupled, non-linear partial differential equation system. In our model, we studied one of the simplest dark fluid material described by a polytropic equation of state. The dynamical evolution of the dark fluid is governed by the Euler equation and the gravitational field is described by the corresponding Poisson equation. We found time-dependent scaling solutions of the velocity flow, density flow, and gravitational fields, which can be good candidates to describe the evolution of the Universe. The aim of this study was to broaden our knowledge about time-dependent self-similar solutions in these dark fluid models, which improve and extend our previous model [13]. We tested our model on two different examples on cosmological and astronomical scales. ## 2 The Model We consider a set of coupled non-linear partial differential equations, which describes the non-relativistic dynamics of a compressible fluid with zero thermal conductivity and zero viscosity [14], \[\partial_{t}\rho+\nabla(\rho\mathbf{u}) =0\, \tag{1a}\] \[\partial_{t}\mathbf{u}+(\mathbf{u}\nabla)\mathbf{u} =-\frac{1}{\rho}\nabla p+g\,\] (1b) \[p =p(\rho). 
\tag{1c}\] These equations are the continuity, the Euler equation, and the equation of state (EoS), respectively. We assume that the system has spherical symmetry and we are interested to solve it in one dimension. If we imply that the fluid is ideal and the system has spherical symmetry we can reduce the multi-dimensional partial differential equation (PDE) system into the one-dimensional, radial-dependent one \[\partial_{t}\rho+(\partial_{r}\rho)u+(\partial_{r}u)\rho+\frac{2 u\rho}{r} =0\, \tag{2a}\] \[\partial_{t}u+(u\partial_{r})u =-\frac{1}{\rho}\partial_{r}p+g\,\] (2b) \[p =p(\rho). \tag{2c}\] Here, the dynamical variables are the \(\rho=\rho(r,t)\), \(u=u(r,t)\), and \(p=p(r,t)\) which mean the density, the radial velocity flow and the pressure field distributions, respectively. The \(g\) is the radial component of an exterior force density. As we presented briefly in the introduction we have used the following general linear equation of state \[p=w\rho^{n},\qquad n=1. \tag{3}\] Several forms of the equation of state are available in astrophysics and polytropic ones were successfully used in the past, see Emden's famous book [15]. A great variety of applications can be found in Ref. [16]. In the equation Eq. (3), the \(w\) parameter can vary depending on the type of matter that governs the system's evolution. Traditionally, the \(w=0\) is used which value corresponds to the EoS for ordinary non-relativistic matter or cold dust. For our case, we can also choose a negative value for the \(w\) which leads us to different kinds of dark-fluid scenarios as was presented in detail by Perkovic [17]. In this paper, we chose \(w=-1\) which represents the simplest case of expanding universe governed by dark matter. Smaller values could cause the Big Rip. The adiabatic speed of sound can be evaluated from the Eq. (3) and it is easy to show that it will be constant \[\frac{\mathrm{d}p}{\mathrm{d}\rho}=c_{s}^{2}=w\, \tag{4}\] which is a necessary physical condition. Furthermore, let us assume that we have an additional self-gravitating term in the Eq. (2b). In this case, the exterior force density, \(g\) can be expressed in the following way: \[g=-\partial_{r}\Phi\, \tag{5}\] where the \(\Phi=\Phi(r)\) is the Newtonian gravitational potential and it satisfies the Poisson equation which will couple to the previously proposed PDE system [18], \[\nabla^{2}\Phi=4\pi G\rho\, \tag{6}\] where the \(G\) is the universal gravitational constant which is set to unity in further calculations. One can notice that we can also add an additional constant term \(\Lambda\) to the Eq. (1b) which has a similar role as the cosmological constant in Einstein's equations. \[\partial_{t}u+(u\partial_{r})u=-\frac{1}{\rho}\partial_{r}p-\partial_{r}\Phi(r )+\Lambda. \tag{7}\] We are going to show below that, this constant cannot be used since it does not lead to a consistent self-similar solution, which is what we are looking for. Note, this observation in our model can be an _indirect proof of the non-existence of the static Universe picture_. We can extend further the exterior force density with a rotating term. In this case, we would like to add a phenomenological rotation term to the Eq. (2b), thus the equation will take the following modified form \[\partial_{t}u+(u\partial_{r})u=-w\frac{1}{\rho}\partial_{r}\rho-\partial_{r} \Phi(r)+\frac{\sin\theta\omega^{2}r}{t^{2}}\, \tag{8}\] where \(\omega\) is a dimensionless parameter that describes the strength of the rotational effect and \(\theta\) is the polar angle. 
We construct this equation by assuming that the spherical symmetry is not broken, therefore the rotation is slow. This statement is satisfied if the \(\omega\) parameter is sufficiently small, implying that the rotational energy is negligible compared to the gravitational energy. The self-similar analysis of various rotating and stratified incompressible ideal fluids were investigated in two Cartesian coordinates [19]. Note, for calculations below, geometrized unit system (\(c=1\), \(G=1\)) were applied, which can be converted to other units. See Appendix A for more details. ## 3 Scaling Solution and Sedov-Taylor _Ansatz_ We would like to find and study analytic solutions of the equations by applying the long-established self-similar _ansatz_ by Sedov and Taylor [2; 3] which can be expressed in the following form. \[u(r,t) =t^{-\alpha}f\bigg{(}\frac{r}{t^{\beta}}\bigg{)}\, \tag{9a}\] \[\rho(r,t) =t^{-\gamma}g\bigg{(}\frac{r}{t^{\beta}}\bigg{)}\,\] (9b) \[\Phi(r,t) =t^{-\delta}h\bigg{(}\frac{r}{t^{\beta}}\bigg{)}, \tag{9c}\] where the \(r\) means radial and \(t\) means time dependence. One can notice that the so-called shape functions \((f,g,h)\) only depend on the \(rt^{-\beta}\), thus we introduce a new variable \[\zeta=rt^{-\beta}. \tag{10}\] The \(\zeta\) has a length dimension since the \(\beta\) is zero. The not yet determined exponents are called similarity exponents (\(\alpha\), \(\beta\), \(\gamma\), and \(\delta\)) and they indeed have physical relevance. The \(\beta\) describes the rate of spread of the spatial distribution during the time evolution if the exponent is positive or the contraction if \(\beta<0\). Also, the other exponents describe the rate of decay of the intensity of the corresponding field. Solutions with integer exponents are called self-similar solutions of the first kind, while the second kind denotes the non-integer ones. Self-similarity is based on the concept that the physical quantities will preserve the shape during time evolution. A general description of the properties of these kinds of scaling solutions can be found in our former publication [13]. We assumed that the shape functions are sufficiently smooth and it is at least continuously differentiable (twice) in \(\zeta\) over the entire domain. Thus, we have calculated the relevant time and space derivatives of the shape functions and substituted them into the equations (2). As a consequence, we got usually an overdetermined algebraic equation system for the similarity exponents. Other possible scenarios may play out and these were presented in detail in this paper [13]. We obtained the following numerical value for the exponents \(\alpha=1\), \(\beta=0\), \(\gamma=1\), and \(\delta=0\) for both the non-rotating and the rotating cases. If we add the \(\Lambda\) constant to the Euler equation, such a solution for the similarity equation cannot be found. The Eq. (9c) and the \(\beta\) and \(\delta\) values show that the gravitational potential is constant in time and only has radial dependence. From the results, it is evident that the dynamical variables such as the velocity, and density flow have spreading properties. Our physical intuition says that spreading is somehow similar to expansion which is a basic property in the Universe at astronomical, galactical, or cosmological scales. 
By substituting the obtained numerical values of the similarity exponents, we have reduced the induced PDE system into an ordinary differential equation (ODE) system that depends only on the \(\zeta\) independent variable. We found that the obtained equation system has the following form, \[-\zeta g^{\prime}(\zeta)+f^{\prime}(\zeta)g(\zeta)+f(\zeta)g^{ \prime}(\zeta)+\frac{2f(\zeta)g(\zeta)}{\zeta} =0, \tag{11a}\] \[-\zeta^{2}f^{\prime}(\zeta)+\zeta f^{\prime}(\zeta)f(\zeta) =-wg^{\prime}(\zeta)-h^{\prime}(\zeta)\zeta+\omega^{2}\sin\theta \zeta^{2},\] (11b) \[h^{\prime}(\zeta)+h^{\prime\prime}(\zeta)\zeta =g(\zeta)4\pi G\zeta. \tag{11c}\] One can easily notice that the presented ordinary differential equation system Eq. (11) cannot be solved analytically. For linearized non-autonomous ordinary differential equation systems, the stationer point of the phase space can be found as well as one can say something about the general asymptotic behavior of the solutions [20]. Nonetheless, there is no generally known method for non-linearized non-autonomous differential equation systems. Also, the existence and uniqueness of smooth solutions have not yet been proven in multiple dimensions. Therefore, it is a reasonable approach to solve the obtained ordinary differential equation system, Eqs. (11) numerically for a large number of parameter sets (based on physical considerations) to explore the behavior of the solution of the system with different boundary- and initial conditions. One example of the numerical solution can be seen in Fig. 1. As an example, at a specific parameter and initial condition set, the shape function of the velocity, \(f(\zeta)\) is almost linear and increasing giving a hint for a Hubble expansion-like behavior. Shape function, \(g(\zeta)\) is asymptotically flat after a quick ramp up, correspondingly to the conservation of matter. The last shape function, \(h(\zeta)\) has an increasing polynomial trend with a slight positive exponent, connected to the gravitational potential. To obtain a sufficiently smooth numerical solution we solved the ODE system by using an adaptive numerical integration provided by _Wolfram Mathematica 13.1_[21]. For all of our calculations, the integration limits were \(\zeta_{0}=0.001\) and \(\zeta_{max}=40\) as in Ref. [13]. As was said before, we established some initial conditions to obtain the numerical solution, due to this reason we have used ranges, \(\mathcal{R}\) of \(f(\zeta_{0})=0.005-0.5\), \(g(\zeta_{0})=0.001-0.1\) and for the second order differential equation, we have the \(h(\zeta_{0})=0\) and \(h^{\prime}(\zeta_{0})=1\). This choice of initial condition reflects that firstly it is physically reasonable that the density flow range is \(\mathcal{R}(g)\subset\mathbb{R}^{+}\) and finite. Some recent results suggest that dark fluid can possibly have negative mass [22], but in this model, this leads us to singular solutions. Secondly, this choice of the initial velocity flow \(\mathcal{R}(f)\subset\mathbb{R}^{+}\) means an initially radially expanding fluid. We have seen that if the initial value for \(f(\zeta)\) and \(g(\zeta)\) were set outside of the previously given range, the solution of the differential equation becomes singular. We also saw that the variation in the initial condition corresponding to the shape function of the gravitational potential does not affect the trend of the time-evolution of the system it only causes vertical shifts. Therefore, we set the initial numerical value equal to zero. 
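Although the original integration was carried out in _Wolfram Mathematica_, the structure of the reduced problem is easy to reproduce. Below is a minimal sketch, not the authors' code, that integrates Eqs. (11a)-(11c) as printed using SciPy; the equation-of-state parameter, integration limits, and initial conditions are the values quoted above (\(w=-1\), \(G=1\), \(\zeta_{0}=0.001\), \(\zeta_{max}=40\), \(f(\zeta_{0})=0.5\), \(g(\zeta_{0})=0.01\), \(h(\zeta_{0})=0\), \(h^{\prime}(\zeta_{0})=1\)), with \(\omega=0\) for the non-rotating case.

```python
# A minimal sketch (not the authors' Mathematica implementation) of integrating the
# reduced ODE system, Eqs. (11a)-(11c), as printed.  Parameter values and initial
# conditions are the ones quoted in the text; omega = 0 gives the non-rotating case
# and omega = 0.2535 the slowly rotating one.
import numpy as np
from scipy.integrate import solve_ivp

w, G = -1.0, 1.0                       # EoS parameter and gravitational constant (geometrized units)
omega, theta = 0.0, np.pi / 2.0        # rotation strength and polar angle (equatorial plane)

def rhs(zeta, y):
    f, g, h, p = y                     # p denotes h'(zeta)
    # Eqs. (11a) and (11b) are linear in f'(zeta) and g'(zeta):
    #   g f' + (f - zeta) g'          = -2 f g / zeta
    #   zeta (f - zeta) f' + w g'     = -p zeta + omega^2 sin(theta) zeta^2
    M = np.array([[g, f - zeta],
                  [zeta * (f - zeta), w]])
    b = np.array([-2.0 * f * g / zeta,
                  -p * zeta + omega**2 * np.sin(theta) * zeta**2])
    fp, gp = np.linalg.solve(M, b)
    pp = (4.0 * np.pi * G * g * zeta - p) / zeta   # Eq. (11c): zeta h'' + h' = 4 pi G g zeta
    return [fp, gp, p, pp]

zeta0, zeta_max = 1.0e-3, 40.0
y0 = [0.5, 0.01, 0.0, 1.0]             # f, g, h, h' at zeta0
sol = solve_ivp(rhs, (zeta0, zeta_max), y0, method="LSODA",
                dense_output=True, rtol=1e-8, atol=1e-10)
f_shape, g_shape, h_shape = sol.y[0], sol.y[1], sol.y[2]
```

Since Eqs. (11a) and (11b) are linear in \(f^{\prime}\) and \(g^{\prime}\), they are solved as a \(2\times 2\) system at every \(\zeta\); this system becomes ill-conditioned wherever \(f(\zeta)\approx\zeta\), so tighter tolerances or a different solver may be needed for some initial conditions.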
We are interested in finding the solution of the ODE system as a function of the spatial and time coordinates. We transform our one-variable numerical solutions into two-variable functions; for this, we used the inverted form of Eq. (10). One can easily notice from the shape of the _ansatz_ that the solution will have a singularity at \(t=0\). Thus, we used the \(0.001\leq t\leq 25\) and \(0.001\leq r\leq 25\) domains to obtain the space- and time-dependent dynamical functions, \(u(r,t)\), \(\rho(r,t)\), and \(\Phi(r,t)\). ## 4 Results Here, we present the solutions of the self-gravitating non-relativistic dark fluid. Firstly, we give a detailed introduction to the global properties of the solutions in a non-rotating system. Secondly, we show the effect of slow rotation on the solutions. In addition, we compare the results from the two cases with each other and with the previous results in Ref. [13]. Note that the spherical symmetry of the system was kept for all cases.

Figure 1: Numerical solutions of the shape functions; the integration was started at \(\zeta_{0}=0.001\), and the initial conditions \(f(\zeta_{0})=0.5\), \(g(\zeta_{0})=0.01\), \(h(\zeta_{0})=0\), and \(h^{\prime}(\zeta_{0})=1\) were used. For better visibility, the function \(g(\zeta)\) was scaled up by a factor of \(200\). The values are given in geometrized units.

### Non-rotating system In the first case, we set the \(\omega\) parameter to zero and used the obtained numerical values of the similarity exponents (\(\alpha\), \(\beta\), \(\gamma\), and \(\delta\)) to obtain the exact ordinary differential equations. For the numerical integration, we used the same initial conditions, \(f(\zeta_{0})=0.5\) and \(g(\zeta_{0})=0.01\), for the velocity and density flow respectively, as applied in our previous paper. First, we used time and radial projections of the unknown functions for better understanding. Fig. 2 illustrates the spatial and time projections of the obtained velocity, density, and gravitational potential. These velocity flow and density flow results are consistent with our initial statement that these kinds of dark-fluid solutions can be used as a model to describe an exploding system (e.g., the Universe). We can see similar behavior for the radial velocity and the density: both decay quickly in time at all distances. Also, they have a real singularity at \(t=0\), due to the shape of the _ansatz_. However, the radial distributions show a different nature. The density increases steeply near the center of the explosion and becomes linear at large distances. In contrast, the velocity grows polynomially with the radial distance. Compared to the previous non-rotating, two-equation model presented in Ref. [13], we found a different radial velocity profile. Also, the similarity exponents are different: \(\alpha=0\), \(\beta=1\), and \(\gamma=-1\) in the two-equation model. This is most likely due to a new, smoother solution forming as a consequence of the second derivative appearing in the Poisson equation. We have also seen that the solution depicted in Fig. 2 is numerically stable on the specified initial and boundary condition range. It is more relevant to investigate the dynamics of the complete fluid in time and space to understand general trends or physical phenomena as a function of the initial conditions.
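As a continuation of the sketch above (again only an illustration, not the authors' code), the shape functions returned by the integrator can be mapped onto the \((r,t)\) domain quoted earlier through the ansatz, Eq. (9), with the exponents found above, \(\alpha=\gamma=1\) and \(\beta=\delta=0\), so that \(\zeta=r\):

```python
# A minimal sketch (reusing `sol` from the integration sketch above) of mapping the
# shape functions onto the (r, t) domain through the ansatz, Eq. (9), with the
# exponents found above: alpha = gamma = 1, beta = delta = 0, so that zeta = r.
import numpy as np

r = np.linspace(1.0e-3, 25.0, 400)
t = np.linspace(1.0e-3, 25.0, 400)

f_r, g_r, h_r = sol.sol(r)[:3]                   # shape functions evaluated at zeta = r
u   = f_r[:, None] / t[None, :]                  # u(r,t)   = t^-1 f(r)
rho = g_r[:, None] / t[None, :]                  # rho(r,t) = t^-1 g(r)
Phi = np.repeat(h_r[:, None], len(t), axis=1)    # Phi(r,t) = h(r), constant in time
```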
Due to this reason, we evaluated the related energy densities, which are the following \[\epsilon_{kin}(r,t)=\frac{1}{2}\rho(r,t)u^{2}(r,t),\qquad\Phi(r,t)=h(r),\qquad \epsilon_{tot}(r,t)=\epsilon_{kin}(r,t)+\Phi(r,t). \tag{12}\] Figure 2: Different radial (left) and time (right) projections of the velocity flow (\(1^{\rm st}\) row), density (\(2^{\rm nd}\) row), and gravitational potential (\(3^{\rm rd}\) row) for the non-rotating case, respectively. A detailed explanation is given in the main text. The domain range is given in geometrized unit. Fig. 3 illustrates that the kinetic energy density has a singularity at \(t=0\) as we have seen in the case of radial velocity. It has linearly enhancing maxima at larger distances and has a quick decay in time for all radial distances. As we mentioned above, the gravitational potential is stationary in time. Thereby, we can obtain the total energy density of the system. From the total energy density distribution, it is apparent that the short-time behavior of the system is predominated by the initial explosion, and the long-range structure is regulated by the gravitational potential. Figure 3: Numerical solutions of the velocity flow \(u(r,t)\), density flow \(\rho(r,t)\), and gravitational potential \(\Phi(r,t)\) as a function of the spatial and time coordinates in case of a non-rotating system. We also present the distribution of the total and kinetic energy density. For the numerical integration we used \(\zeta_{0}=0.001\), and the initial conditions were \(f(\zeta_{0})=0.5\), \(g(\zeta_{0})=0.01\), \(h(\zeta_{0})=0\), and \(h^{\prime}(\zeta_{0})=1\). ### Rotating system In this section, we analyzed the effect of slow rotation compared to the non-rotating case. Firstly, we studied the effect of the variation of the maximal angular velocity, \(\omega\) parameter. We have chosen the polar angle, \(\theta\) parameter at the equatorial, which gives the largest effect. As previously stated we assumed for further analysis that the spherical symmetry is not broken. According to that, we fixed that the gravitational force density is at least a magnitude larger than the centrifugal force density at every time and space (\(\|f_{grav}\|\gg\|f_{centr}\|\)). Numerical results showed us that an \(\omega\) range can be found where the constraint will be fulfilled if the previously specified initial condition set is valid. We demonstrate that the asymptotic behavior of the numerical solution has a significant \(\omega\) dependence on the acceptable (0 \(<\omega\)\(<\) 0.3) domain of parameters and initial conditions. Figure 4: The time and radial projections of the velocity flow (1\({}^{\rm st}\) row), density (2\({}^{\rm nd}\) row), and gravitational potential (3\({}^{\rm rd}\) row) respectively for the rotating system (\(\omega=0.2535\)). For the numerical integration we used \(\zeta_{0}=0.001\), and the initial conditions were \(f(\zeta_{0})=0.5\), \(g(\zeta_{0})=0.01\), \(h(\zeta_{0})=0\), and \(h^{\prime}(\zeta_{0})=1\). If we compare the results shown in Fig. 4 with the non-rotating case (Fig. 2), it is evident that slow and constant rotation does not affect the time and spatial distribution of the gravitational potential. Moreover, we can see that the radial density profile of the system is nearly uniform and identically to the previous case it decreases rapidly over time. Thus we can conclude, that the rotation accelerates the even distribution of the material in space and speeds up inflation. 
The singular behavior close to \(t=0\) is not affected by the rotation, as expected. However, a significant difference can be seen in the first graph (top left panel of Fig. 4). One can see that the radial profile of the velocity flow starts from zero at the origin and shows exponential growth at short range. Fig. 5 illustrates that the \(\omega\) value has a critical influence on the long-range asymptotic behavior of the time evolution of the velocity flow. Moreover, increasing the \(\omega\) value causes a significant modification of the radial profiles of both the velocity and density flow but leaves the time evolution unaltered. In the analysis of the obtained numerical solutions, we found similar behavior over the whole inspected range of initial values. An \(\omega<1\) can be found for every initial and boundary condition at which the long-range asymptotic structure changes; an example of this can be seen in Fig. 5. As in the previous case, we studied the properties of the relevant dynamical variables. The energy density associated with rotation and the total energy density are \[\epsilon_{rot}(r,t)=\frac{1}{2}\rho(r,t)\omega^{2}r,\qquad\epsilon_{tot}(r,t)= \Phi(r,t)+\epsilon_{kin}(r,t)+\epsilon_{rot}(r,t). \tag{13}\]

Figure 5: The maximal angular velocity \(\omega\) dependence of the space and time evolution. Different lines correspond to different angular velocity values, \(\omega\). The curves were evaluated at particular time (right) and radial (left) coordinates given on the vertical axis. A detailed explanation is given in the main text.

Figure 6: Numerical solutions of the velocity flow \(u(r,t)\), density flow \(\rho(r,t)\), and gravitational potential \(\Phi(r,t)\) as a function of the spatial and time coordinates for the rotating case. We also present the distribution of the total and kinetic energy density. For the numerical integration we used \(\zeta_{0}=0.001\), and the initial conditions were \(f(\zeta_{0})=0.5\), \(g(\zeta_{0})=0.01\), \(h(\zeta_{0})=0\), and \(h^{\prime}(\zeta_{0})=1\). We have used the \(\omega=0.2535\) parameter.

## 5 Discussion According to current scientific understanding, dark matter and dark energy make up about 95% of the total energy density of the observable Universe today. The dark fluid theory suggests that a single substance may explain both dark matter and dark energy. The behavior of the hypothetical dark fluid is believed to resemble that of cold dark matter on galactic scales while exhibiting similar characteristics to dark energy at larger scales [22]. Predictions can be obtained from our Sedov-von Neumann-Taylor blast wave-inspired non-relativistic dark fluid model on galactic and cosmological scales. A useful feature of the model is that the initial value problem of the reduced ordinary differential equation system is easier to handle than the boundary- and initial-condition problem of the original partial differential equations. To provide a reliable practical basis for our dark-fluid model, we test the theoretical results on two different astrophysical scales, demonstrating the similar nature of the solution across several orders of magnitude. ### Cosmological Scale The solution was developed based on cosmological observations, using the Hubble law to scale the expansion of the Universe. Our model includes various scaling mechanisms through the use of a Sedov-type self-similar _ansatz_, which allows us to describe different time-decay scenarios [23].
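The following diagnostic is not part of the paper's analysis, but it is a quick way to quantify how "Hubble-like" the reconstructed flow is: for a pure Hubble flow \(u=H(t)\,r\), the ratio \(u/r\) is independent of \(r\) at fixed \(t\), so its relative scatter over \(r\) measures the deviation from Hubble scaling.

```python
# Not taken from the paper: a quick check of how "Hubble-like" the reconstructed flow
# is.  For a pure Hubble flow u = H(t) r, the ratio u/r is independent of r at fixed t,
# so its relative scatter over r measures the deviation from Hubble scaling.
H_eff = u / r[:, None]                    # effective expansion rate u(r,t)/r
j = len(t) // 2                           # one time slice
flatness = H_eff[:, j].std() / H_eff[:, j].mean()
print(f"relative r-scatter of u/r at t = {t[j]:.2f}: {flatness:.3f}")
```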
In the case of the non-rotating system, we can conclude that the radial velocity profile of the solution provides Hubble-like expansion. However, the non-rotating model provides inflation-like behavior, \(u(r,t)>1\), on a long-range timescale, which cannot be physical (causal). We have also seen that the high initial velocity of the dark fluid will relax to a small, constant non-relativistic value at the long timescale (\(t>6\) billion years). Also, an interesting feature of the model is that the gravitation potential is constant in time. One notable aspect of the rotating model, it does not show inflation-like behavior in the expected time range, contrary to the non-rotating model. Simultaneously, we have found that the radial profile of the density will saturate and becomes close to flat at far distances from the initial point (see top right panel of Fig. 6). One may also set the initial condition according to that the Universe today is observed as flat Euclidean, with the density parameter \[\Omega=\rho_{i}/\rho_{c}\approx 1,\text{ where }\quad\rho_{c}=\frac{3H_{0}^{2}}{8\pi G} \tag{14}\] is the corresponding critical density with Hubble-parameter, \(H_{0}\). The flatness of the Universe is indicated by the recent measurements of WMAP [24]. The sum index stands for baryonic (\(B\)), dark energy (DE), and cold-dark matter (CDM) respectively. We can define the matter part of the \(\Omega_{M}=\Omega_{B}+\Omega_{CDM}\), and the full \(\Omega=\sum_{i}\Omega_{i}\) and \(i\in\{B,CDM,DE\}\)[25]. Then we assumed the following identity, \[\frac{\Omega_{M}}{\Omega}\sim\frac{E_{kin}}{E_{tot}}=0.26 \tag{15}\] Accordingly, we can determine the relevant time and radial coordinates from the obtained results (see Fig. 3) which corresponds to this specific energy ratio \((r=93\,\text{Gly},t=1.0\pm 0.1\,\text{Gy})\). Relevant radial distribution and time-evolution of the Universe can be seen in Fig. 7. In the absence of dark fluid, the Universe will continue to expand indefinitely, but at a gradually slowing rate that will eventually approach zero. This will cause an open topology universe. The ultimate fate of the Universe is that the temperature asymptotically approaches absolute zero, the so-called "Big Freeze". At the same time, it is important to mention, that our non-relativistic model weakness does not provide as precise results as Friedmann-equation-based models [22]. ### Galactic Scale As it was mentioned before, dark fluid is expected to behave as cold dark matter on galactic distance scales, besides the cosmological scale. Thus, let us probe our model to describe the relationship between the radial distance from the center of a disc galaxy and the orbital velocity of the matter. It is widely known that there is a discrepancy between the predicted rotation curves based on the centrally concentrated mass associated with observable luminous material and the actual rotation curves observed in galaxies. Cold-dark matter halo models are the main presumed solutions to describe this anomaly. One notable aspect of the rotating model is that it shows a similar radial velocity profile as it was already found in dark matter halo galactic models [26]. However, there are other successful models based on modified Newtonian gravity or thermodynamical considerations [27]. We compared the theoretical curves from our rotating non-relativistic self-gravitating dark fluid model with high-quality rotation curve data of spiral galaxies from THINGS [28] and SPARC [29] database. 
We have used three different galaxies (NGC 3917, NGC 3198 [31], NGC 2403 [32]) to demonstrate the cold-dark-matter characteristics of our dark fluid model on the galactic scale.

Figure 7: The radial and time distribution of the Universe on the cosmological scale is presented. Different lines correspond to different angular velocity values, \(\omega\), which are given in geometrized units. The curves were evaluated at particular time (right) and radial (left) coordinates given in the figure. A detailed explanation is given in the main text.

In Fig. 8, we used the same initial and boundary conditions as for the results presented in Fig. 4. One can see that in the long-distance range, the theoretical curves show a satisfying correlation with the observational data. However, close to the galactic center, the model deviates from the measurements. This arises from the fact that our non-relativistic model provides a different dark-fluid density distribution than the traditional models. In more detail, the density of the dark matter does not vanish on the given radial domain in our model. This follows inherently from the shape of the _ansatz_, since we studied a non-compact solution of the given hydrodynamical system. However, the obtained magnitude of the cold dark matter surface density agrees with the estimated results (\(\rho\sim 10^{9}\,M_{\odot}\,\mathrm{kpc}^{-2}\)). Moreover, our model has significant limitations: it does not show the fluctuations in the velocity profile caused by the spiral arms of the galaxy. For our analysis, we endeavored to select galaxies that are in different stages of galaxy evolution.

Figure 8: Velocity curves of the galaxies NGC 3917, NGC 3198, and NGC 2403 are depicted. The lines correspond to radial projections at different time slices, and the different markers correspond to observational data from the databases [28; 29]. The \(\omega\) value is given in geometrized units. A detailed explanation is given in the main text.

## 6 Conclusions and Outlook In this paper, we studied the behavior of self-similar time-dependent solutions of a coupled non-linear partial differential equation system describing a non-viscous, non-relativistic, and self-gravitating fluid (Euler-Poisson system). The reason for applying self-similar solutions is that they have proven to be a very efficient method to analyze various kinds of physical systems, especially the hydrodynamical description of systems that involve collapse and explosion. The analysis presented in this paper is an Euler-Poisson extension of our previous model [13]. We have found that Sedov-Taylor-type solutions exist, and the algebraic equation system obtained for the similarity exponents has only one unique solution. We have used the obtained solution to describe the behavior of the non-relativistic dark fluid on two different astrophysical distance scales. On the one hand, we studied the nature of the dark fluid on cosmological scales and presented the relevant kinematical and dynamical quantities. On the other hand, we analyzed what predictions can be obtained about the velocity and density distribution on the many orders of magnitude smaller galactic size scale. Although the model has certain limitations due to its classical nature, it provides relatively adequate results on both scales. It has the practical benefit that the calculation does not need high computing performance.
Therefore, it could be used to estimate the physical value of the initial- and boundary values, when more sophisticated theoretical or numerical simulations are used. Moreover, it can provide a reliable basis for comparison for 2- or 3-dimensional hydrodynamical simulations. Also, it is possible to improve this model with reasonable effort to describe even relativistic matter [33], Chaplygin gas [34], or two-fluid models. Original idea: I.F. B., Formal analysis, B.E.Sz.; Software, B.E.Sz.; Visualization, B.E.Sz.; Writing--original draft, B.E.Sz., Correction I.F.B, and G.G.B. All authors have read and agreed to the published version of the manuscript. Authors gratefully acknowledge the financial support by the Hungarian National Research, Development and Innovation Office (NKFIH) under Contracts No. OTKA K135515, No. NKFIH 2019-2.1.11-TET-2019-00078, and No. 2019-2.1.11-TET-2019-00050 and Wigner Scientific Computing Laboratory (WSCLAB, the former Wigner GPU Laboratory). Authors gratefully acknowledge the useful discussions with N. Barankai. One of the authors (IFB) offers this study in memory of an astronomer Gyorgy Paal (1934 - 1992) who taught him physics and sailing in the summer of 1990 at Lake Balaton. The authors declare no conflict of interest.
2305.11018
Sizing multimodal suspensions with differential dynamic microscopy
Differential dynamic microscopy (DDM) can be used to extract mean particle size from videos of suspensions. However, many suspensions have multimodal particle size distributions (PSDs), for which this is not a sufficient description. After clarifying how different particle sizes contribute to the signal in DDM, we show that standard DDM analysis can extract the mean sizes of two populations in a bimodal suspension given prior knowledge of the sample's bimodality. Further, the use of the CONTIN algorithm obviates the need for such prior knowledge. Finally, we show that by selectively analysing portions of the DDM images, we can size a trimodal suspension where the large particles would otherwise dominate the signal, again without prior knowledge of the trimodality.
Joe J Bradley, Vincent A Martinez, Jochen Arlt, John R Royer, Wilson C K Poon
2023-05-18T14:51:27Z
http://arxiv.org/abs/2305.11018v2
# Sizing multimodal suspensions with differential dynamic microscopy ###### Abstract Differential dynamic microscopy (DDM) can be used to extract mean particle size from videos of suspensions. However, many suspensions have multimodal particle size distributions (PSDs), for which this is not a sufficient description. After clarifying how different particle sizes contribute to the signal in DDM, we show that standard DDM analysis can extract the mean sizes of two populations in a bimodal suspension given prior knowledge of the sample's bimodality. Further, the use of the CONTIN algorithm obviates the need for such prior knowledge. Finally, we show that by selectively analysing portions of the DDM images, we can size a trimodal suspension where the large particles would otherwise dominate the signal, again without prior knowledge of the trimodality. + Footnote †: _School of Physics & Astronomy, The University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3RD, United Kingdom. E-mail: [email protected]_ + Footnote †: _School of Physics & Astronomy, The University of Edinburgh, Peter Guthrie Tait Road, Edinburgh EH9 3RD, United Kingdom. E-mail: [email protected]_ ## 1 Introduction Particle sizing is important across many industrial sectors. A modern text [1] lists seven categories of methods: microscopy, sieving, electrozoning, laser diffraction (= static light scattering, SLS), ultrasound extinction, sedimentation, and dynamic light scattering (DLS). Some of these measure particles one at a time (microscopy, electrozoning), others deal with collections of particles _en masse_. Many are optically based (various light microscopies, SLS, DLS). These methods are calibrated against quasi-monodisperse spherical particles, where the polydispersity, defined as the standard deviation of the particle size distribution (PSD) normalised by the mean, is typically \(\lesssim 10\%\), and can even be \(\lesssim 2\%\)[2]. The sizing of such particles poses few problems; reporting simply a mean diameter and a polydispersity generally suffices. While quasi-monodisperse spheres find use in research and specialised applications, most real-life suspensions are significantly polydisperse, often with strongly-peaked, multimodal PSDs. Examples include raw and UHT milk, with a bimodal mixture of large fat droplets and smaller casein micelles [3], sunflower tahini with a reported trimodal PSD [4], and chocolate, where the PSD shifts from trimodal to bimodal as refining proceeds [5]. Multimodal PSDs can result from aggregation, for example nanoparticles used for biomedical applications often develop a second population of large agglomerates when dispersed in a physiological buffer [6]. Reporting a mean and polydispersity for a multimodal suspension is almost meaningless; ideally, one wants to capture the full PSD. In practice, it is often difficult to detect multimodality in the first place, let alone obtain mean sizes for each population. While direct imaging is considered the 'gold standard' of sizing, the PSD must be built up particle by particle, accumulating statistics requires a large number of measured particles, \(N\), with the relative uncertainty dropping only weakly as \(N^{-\frac{1}{2}}\). Moreover, it is difficult to guarantee representative sampling and preparation (e.g. drying for SEM) can affect the particles. Scattering allows better statistical averaging, because the scattering volume typically contains many more particles than can be practically imaged. 
However, analysis requires inverting a Laplace transform, where the unknown PSD occurs under an integral sign, so that a unique solution does not exist and the problem is notoriously sensitive to noise [7]. Nevertheless, various scattering methods, especially SLS and DLS, are popular, with many available commercial instruments and sophisticated analysis software (e.g. CONTIN for DLS). Impressive answers can be obtained if some sample details are known. For example, SLS has been applied to a multimodal suspension with 5 populations varying in size over several orders of magnitude [8]. However the analysis requires accurate prior knowledge of the particles' refractive indices, which is not trivial to obtain. Differential dynamic microscopy (DDM) is a technique for high-throughput sizing in which the intermediate scattering function (ISF), familiar from DLS, is obtained from microscopy images without the need to resolve the particles [9]. Since DDM and DLS both access the ISF, there is significant overlap in data analysis. Yet DDM offers some distinct advantages, such as the ability to cope with significant turbidity [10]. Here we show that DDM is notably well-suited for sizing multimodal suspensions because it probes spatial fluctuations at very low wave vector, \(k\), even \(\lesssim 0.5\mu\)m\({}^{-1}\), by imaging large fields of view at low magnifications. Reaching equivalent scattering angles of \(\lesssim 2^{\circ}\) in DLS requires complex instrumentation [11, 12], and is seldom attempted. In SLS and DLS, the electric field scattered by a single homogeneous sphere of radius \(R\) at scattering vector \(k\) is given by [13] \[b(k)=\left[\frac{4}{3}\pi R^{3}\right]\Delta n(k)P(k), \tag{1}\] where \(\Delta n(k)\) is the difference in refractive index between the particle and the solvent, and \[P(kR)=3[\sin(kR)-(kR)\cos(kR)]/(kR)^{3}, \tag{2}\] is the form factor with \(P(kR\to 0)\to 1\), typically accessible experimentally only as the squared form factor \(P^{2}(kR)\), Fig. 1. \(P(kR)\) displays successive zeros, with the first at \(k_{0}R=4.493\). Two consequences follow from Eq. 2. First, in the dilute limit, the DLS signal scales as \(Nb^{2}(k)\) for \(N\) particles in the scattering volume [13], so at low \(k\) as \(NR^{6}\sim\phi R^{3}\), where \(\phi\) is the particle volume fraction. Secondly, particles with \(R\approx 4.493/k_{0}\) in a polydisperse sample contribute little signal at scattering angles around the minimum. This effect can be used to measure low polydispersities accurately in a multi-angle experiment [14], but may generate large errors in commercial single-angle instruments [15]. The DDM signal is also dependent on \(P(kR)\)[16]. However, its first minimum has little effect, because for all Brownian suspensions DDM can operate with \(kR\ll 1\) where \(P(kR)\to 1\). So, Safari _et al._[17] were able to use DDM to size a bidisperse suspension with a 1:20 particle size ratio and up to 3% by volume of the large particles, where in all but one case, DLS fails to size the minority (large) species. However, these authors explicitly input the bimodality of their suspensions to their DDM analysis. In this work, we demonstrate the use of DDM to size a bidisperse suspension with a significantly smaller size ratio of 1:4.6 _without_ assuming bimodality in the analysis, and probe the technique's efficacy when the number ratio of the species is systematically changed. 
Furthermore, we show how to extend the limits of applicability of DDM further by selecting regions of interest in our image sequences for analysis. The method is demonstrated by sizing a trimodal system in which the signal from the largest particles dominates, without assuming trimodality _a priori_. Below, we first present expressions for fitting DDM results to data from polydisperse suspensions, explaining how signal contribution scales with particle size. Next, we explain our experimental and data fitting methods. After validating our predicted signal vs. size scaling, we demonstrate the application of DDM to bi- and tri-modal dispersions, concluding with a recommended protocol for sizing multimodal suspensions with DDM. ## 2 DDM for Polydisperse Suspensions In the first DDM experiment [9], brightfield microscopy with partially coherent illumination was used. However, subsequent work has used much wider light-source apertures for less coherent illumination [18] or fluorescence imaging [19], where (unlike in SLS and DLS) coherence is not assumed or essential. Conceptually, this is because DDM accesses the ISF by directly correlating density fluctuations from real-space images, albeit in Fourier space. The key quantity in DDM is the differential image correlation function (DICF), \(g(k,\tau)\), which is the squared Fourier transform of the difference between an image at time \(\tau\) and a reference image at time zero. We have shown [16] that for \(N\) identical particles in the image, the DICF is related to the ISF, \(f(k,\tau)\), by \[g(k,\tau) = A(k)\left[1-f(k,\tau)\right]+B(k), \tag{3}\] \[A(k) = 2Na^{2}(k)S(k), \tag{4}\] where \(B(k)\) is the system's noise spectrum. The signal amplitude, \(A(k)\propto a^{2}(k)\), the contribution to the signal from a single particle, and \(\propto S(k)\), the particles' structure factor. In DDM, \(k\) is a Fourier component of density fluctuations and not a scattering vector.1 Footnote 1: To show that the signal measured at scattering vector \(k\) in DLS in fact characterises density fluctuations with that wave vector requires considerable analysis [13]. In a monodisperse suspension of non-interacting spherical particles of radius \(R\), the ISF is \(f(k,\tau)=\exp\left[-Dk^{2}\tau\right]\), with the diffusivity \(D=k_{B}T/6\pi\eta R\) in a suspending medium of viscosity \(\eta\) at temperature \(T\) (and \(k_{B}\) is Boltzmann's constant). So, fitting the measured \(g(k,\tau)\) to Eq. 3 returns \(D\) and therefore \(R\). Appendix A shows that these results generalise naturally to a suspension of polydisperse spheres, with the amplitude and the ISF now being suitably-weighted sums over the \(M\) particles species \(i=1\) to \(M\): \[A(k) = \sum_{i}^{M}A_{i}(k), \tag{5}\] \[f(k,\tau) = \sum_{i}^{M}C_{i}(k)f_{i}(k,\tau)\;\;\text{with}\] (6) \[C_{i}(k) = \frac{A_{i}(k)}{A(k)},\;\;\text{and}\] (7) \[f_{i}(k,\tau) = \exp\left[-D_{i}k^{2}\tau\right]\;\text{with}\;\;D_{i}=\frac{k_{ B}T}{6\pi\eta R_{i}}. \tag{8}\] To interpret results obtained by fitting these expressions to data, we need to understand how the population weights, \(\{C_{i}(k)\}\) in Eq. 6, scale with particle radius, \(R\). The literature has occasionally implied that the DDM signal scales with \(R\) more weakly than the \(NR^{6}\sim\phi R^{3}\) for (homodyne) DLS [20]. However, there is no explicit analytic or experimental treatment of this issue to date. The key is to realise that \(a(k)\) in Eq. 
4 is the two-dimensional (2D) Fourier Transform (FT) of \(a(r)\), the intensity pattern of the image of one particle centred at the origin of the image plane (with radial coordinate \(r\) only in the case of circular symmetry) [16]. For a homogeneously fluorescent particle that is much smaller than the depth of focus, \(a(r)\) should, to a good first approximation, be given by the 2D projection of a solid sphere (mathematically, a 3-ball, \(B_{3}\)) onto the equatorial plane, \(\mathcal{P}_{2}(B_{3})\), transmitted through the microscope's optics. The Projection-Slice Theorem states that the 2D FT of a projection of a 3D object is given by a slice (perpendicular to the projection) through the origin of the FT of the 3D object [21]. So, the FT of \(\mathcal{P}_{2}(B_{3})\) is \(\frac{4}{3}\pi R^{3}P(kR)\) with the \(P(kR)\) in Eq. 2, only now \(k\) is the magnitude of wave vectors in a 2D rather than 3D Fourier space.

Fig. 1: Theoretical squared form factor, \(P^{2}(kR)\), for a monodisperse sphere of radius \(R\) as a function of the scattering vector or Fourier component \(k\) non-dimensionalised by the radius, \(kR\), Eq. 2, with minima positions given. The red curve is the Guinier approximation.

In the dilute limit, where \(S(k)\to 1\), Eq. 3 becomes \[g(k,\tau)=\underbrace{2N\rho^{2}\left[\frac{4}{3}\pi R^{3}P(kR)\right]^{2}}_{A(k)}[1-f(k,\tau)]+B(k), \tag{9}\] with contrast density \(\rho\) (e.g., dye concentration in fluorescence), assumed here to be homogeneous and the same for all particles. In phase contrast imaging, the image is a projection of the optical path length, so can again be approximated by \(\mathcal{P}_{2}(B_{3})\). The bright-field image in the geometric limit is a shadowgraph which can be approximated by \(I_{0}-\beta\mathcal{P}_{2}(B_{3})\), where \(I_{0}\) and \(\beta\) are constants. In either case, Eq. 9 is recovered.4

Footnote 4: We note that this scaling is also consistent with a heterodyne scattering perspective [20, 21] in which the contribution of individual particles, \(a(k)\propto R^{3}\), comes from interference between the scattered and transmitted electric fields and \(A(k)\sim Na^{2}(k)\).

Quite generally, then, \[A(k)\sim NR^{6}P^{2}(kR)\sim\varphi R^{3}P^{2}(kR). \tag{10}\] Since \(P(kR)\to 1\) at the low \(k\) accessed in DDM, this predicts an \(NR^{6}\) or \(\varphi R^{3}\) scaling of signal with particle size, as found in DLS. Our findings readily generalise to arbitrary-shaped anisotropic particles when there are enough (independently oriented) particles to sample the orientational distribution, or when their rotational diffusion is fast compared to the relevant timescales in a DDM experiment. In this limit, a slice through the spherically-symmetric orientationally-averaged 3D form factor is the single particle contribution to \(a(k)\) in Eq. 4. The well-known Guinier approximation to the low-\(k\) form factor [22], Fig. 1, then gives \(a(k)\sim V_{p}e^{-k^{2}R_{g}^{2}/3}\), where \(R_{g}\) (\(=\sqrt{3R^{2}/5}\) for a sphere) is the particle's gyration radius. Now, the DDM signal scales as \(NV_{p}^{2}\sim\phi V_{p}\), which is the generalisation of Eq. 10. Eq. 9 does not take into account the finite depth of field in the \(z\) direction. To do so, note first that for bright-field imaging at the low numerical apertures typical in DDM, \(k_{z}\lesssim\text{NA}\times k\), so that the longitudinal dynamics are much slower than the in-plane dynamics. 
We can then take \(f(k,k_{z},\tau)\approx f(k,0,\tau)\).[23, 24] The effect of a finite depth of field on \(A(k)\) and limited lateral resolution can be included by convolving the real-space density with the optical point-spread function, or multiplying the density by the optical transfer function, OTF\((k,k_{z})\), in reciprocal space to obtain \[A(k)=2N\rho^{2}\left[\frac{4}{3}\pi R^{3}\right]^{2}\underbrace{\int|\text{ OTF}(k,k_{z})|^{2}\;P^{2}\left(\sqrt{k^{2}+k_{z}^{2}}\;R\right)dk_{z}}_{P^{2}_{\text{ eff}}(k,R)}. \tag{11}\] The averaging of \(P^{2}(k,k_{z})\) over \(k_{z}\) weighted by \(|\text{OTF}(k,k_{z})|^{2}\) gives a squared effective form factor, \(P^{2}_{\text{eff}}(k,R)\), so that \[A(k)\sim NR^{6}P^{2}_{\text{eff}}(k,R)\sim\varphi R^{3}P^{2}_{\text{eff}}(k,R), \tag{12}\] preserving the \(R^{6}\) scaling. Substitution into Eq. 6 gives \[f(k,\tau)\sim\sum_{i}^{M}\phi_{i}R_{i}^{3}P^{2}_{\text{eff}}(k,R_{i})f_{i}(k, \tau), \tag{13}\] where \(\phi_{i}\) is the volume fraction of species \(i\). If all species are small enough such that \(kR_{i}\ll 1\), then \(P\to 1\) for all species and \(P_{\text{eff}}(k,R_{i})=P_{\text{eff}}(k)\) becomes the square of the projection of OTF onto the \(k_{z}=0\) plane, dropping out of \(f(k,\tau)\) so there will be no form-factor minima effects. However, for larger particles the form factor \(P(kR_{i})\) can drop noticeably below unity over the range of \(k\) probed by DDM, with a corresponding drop in \(P_{\text{eff}}(k,R)\) and potential form-factor effects, including a departure from \(R^{6}\) scaling. ## 3 Materials and Methods ### Experimental We used polystyrene spheres, which have been routinely characterised using DDM.[23, 25, 9] Dispersions from Thermo Scientific 5000 series with sizes (diameters here and throughout) of 60 nm, 120 nm, 240 nm, 500 nm, 1.1 \(\mu\)m and 2.1 \(\mu\)m were diluted using Milli-Q water to give stock solutions of various concentrations, from which we prepared various bimodal or trimodal mixtures. Samples were loaded into 0.4\(\times\)4\(\times\)50 mm glass capillaries (Vitrocom Inc.) and sealed with Vasculine to prevent evaporation. Bright-field videos were captured using a Nikon Ti-E inverted microscope with a Hamamatsu Orca Flash 4.0 camera. We imaged far from the sides of the capillary and 100 \(\mu\)m from the base. For each measurement a series of five videos were captured immediately after loading to minimise sedimentation. Each video is 5000-6000 frames of 512\(\times\)512 pixels. Specific choices of frame rate and objective, detailed below, reflect these considerations: * DDM does not require resolvable particles and pixel \(\gtrsim\) particle size typically gives the best results: large pixels mean lower \(k\), minimising form factor effects. * Chosen to capture \(\lesssim\) 4 Gb 16-bit TIFF data including both short - and long-time plateaus of the ISF Small changes to the settings did not in general significantly impact results except in the extreme cases treated in Section 6. The DICF is extracted from videos using previously-described LabView software.[26] The uncertainty in the DICF is estimated as the standard error on the mean from the azimuthal averaging of \(k\). A more theoretical approach requires quantifying the variance of background image intensity;[27] but such rigour is not needed here and is likely too demanding for general application. ### Data fitting The extracted DICFs are fitted to Eq. 
3 with model \(f(k,\tau)\) to extract the diffusivities, \(\{D_{i}\}\); here we outline the case of bidispersity. To decide a suitable range of \(k\) for analysing each system, we carried out DDM experiments with the two individual populations of particles used to make a bidisperse sample, and used independent 1D fits to each \(k\) dataset to extract the \(k\)-dependent average diffusivities \(D_{1}(k)\) and \(D_{2}(k)\). The range of \(k\) values over which these are both flat to within noise is used for all subsequent data fitting with these particles and microscope settings. #### 3.2.1 Least Squares Global least-squares (LS) fits at all \(k\) within the chosen range are performed simultaneously using the Levenberg-Marquardt algorithm implemented in Scipy.[28] Other algorithms often failed to converge or returned biased diffusion coefficients in multimodal fits. We fitted \(g(k,\tau)\) with \(\{A(k)\}\) and \(\{B(k)\}\) as free parameters for each \(k\) and \(k\)-independent fit parameters in \(f(k,\tau)\) (e.g. diffusivities). Three models of the ISF were used: 1. \(f(k,\tau)=\exp(-Dk^{2}\tau)\) - for monodisperse diffusing particles. 2. \(\ln(f(k,\tau))=-\mu_{1}k^{2}\tau+\mu_{2}(k^{2}\tau)^{2}/2-\mu_{3}(k^{2}\tau)^{3} /6+...\) - the cumulant expansion [29, 30, 31] typically used to extract the mean diffusivity (\(\mu_{1}\)) and polydispersity (from \(\mu_{2}\)) in low-polydispersity monomodal samples. 3. \(f(k,\tau)=C_{1}\exp(-D_{1}k^{2}\tau)+(1-C_{1})\exp(-D_{2}k^{2}\tau)\) - for two monodisperse populations with diffusivities \(D_{1}\) and \(D_{2}\) contributing fractions \(C_{1}\) and \(1-C_{1}\) to the signal respectively. Figure 2a illustrates the information extracted by fitting these models to a simulated ISF from a bidisperse distribution of diffusivities. Model (1) finds an essentially meaningless 'average' that misses both populations. Model (2) suffers from the same problem as far as the mean value is concerned, but gives a credible description of the notional 'polydispersity'. Model (3) returns more or less correct average sizes and contributions for the two populations, but does not deal with the polydispersity within each. #### 3.2.2 Contin The CONTIN algorithm [32, 33] has long been used to extract the distribution of diffusivities from measured ISFs in DLS. It returns the amplitude of the different contributions to the composite ISF, \(\{C_{i}(k)\}\) in Eq. 6, for a finite number of user-determined bins, giving a normalised signal-weighted distribution of diffusivities, the 'Particle Diffusivity Distribution (PDD)', \(P(D)\), which is linked to the PSD by the Stokes-Einstein relation. The presence of noise in the data renders this inverse problem ill-posed. CONTIN deals with this by'regularisation', [7] i.e., balancing fit quality against parsimony by favouring a certain degree of'smoothness' in \(P(D)\). We investigated a variety of criteria for optimising this balance (via tuning \(\alpha\), the'regularisation parameter' in CONTIN), including the L-curve [34] and reduced-\(\chi^{2}\) statistic. However, Provencher's method of selecting \(\alpha\) by comparing the impact of regularisation and the noise in the data, which is implemented as part of CONTIN, [32, 33] was consistently found to work best. Figure 2b illustrates the result of this procedure in fitting the ISF from a bimodal distribution of diffusivities, inputting only the desired binning of the output histogram. 
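For concreteness, the sketch below (an illustration, not the authors' code) generates a noisy ISF from a bimodal, signal-weighted diffusivity distribution and fits it globally over \(k\) with the explicit bidisperse model (model 3) using SciPy; the peak positions, widths, weights, noise level, and \(k\)/\(\tau\) grids are illustrative assumptions.

```python
# A minimal sketch (an illustration, not the authors' code) of simulating a noisy ISF
# from a bimodal, signal-weighted diffusivity distribution and fitting it globally over
# k with the explicit bidisperse model (model 3).  Peak positions, widths, weights,
# the noise level, and the k / tau grids are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
k = np.linspace(0.5, 2.5, 20)            # um^-1
tau = np.logspace(-2, 2, 100)            # s
D_grid = np.linspace(0.01, 5.0, 500)     # um^2 s^-1

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

# Signal-weighted P(D): a slow peak (weight 0.25) and a fast peak (weight 0.75)
P = 0.25 * gauss(D_grid, 1.0, 0.25) + 0.75 * gauss(D_grid, 3.0, 0.4)
P /= np.trapz(P, D_grid)

# f(k, tau) = integral of P(D) exp(-D k^2 tau) dD, plus Gaussian noise
f_sim = np.array([[np.trapz(P * np.exp(-D_grid * kk**2 * tt), D_grid)
                   for tt in tau] for kk in k])
f_sim += rng.normal(0.0, 1e-5, f_sim.shape)

def residuals(params):
    C1, D1, D2 = params
    kt = np.outer(k**2, tau)                                  # shape (len(k), len(tau))
    f_fit = C1 * np.exp(-D1 * kt) + (1.0 - C1) * np.exp(-D2 * kt)
    return (f_fit - f_sim).ravel()

fit = least_squares(residuals, x0=[0.5, 0.5, 2.0], bounds=([0, 0, 0], [1, 10, 10]))
C1_fit, D1_fit, D2_fit = fit.x
```

Fitting the ISF directly, as here, corresponds to the idealised case where \(A(k)\) and \(B(k)\) are known; in practice they are additional free parameters of the DICF fit, as described above.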
CONTIN's regularisation selection works exceptionally well because the only noise applied to the simulated \(f(k,\tau)\) is Gaussian with a known amplitude (\(\sigma=10^{-5}\)), and is independent for each \(k\)-\(\tau\) pair. With real experimental noise, the selection of \(\alpha\) is generally more difficult. CONTIN is designed for linear problems, but extracting \(\{C_{i}\}\) from the DICF by fitting Eqs. 5-8 to Eq. 3 is non-linear, because \(A(k)\) and \(B(k)\) are unknown. We therefore must estimate these parameters before using CONTIN. One approach is to perform a least-squares fit of \(g(k,\tau)\) with an approximate model (e.g. a cumulant expansion) and use the returned \(A(k)\) and \(B(k)\) to extract an ISF to pass on to CONTIN. However, we found that this encoded the approximate model into the CONTIN results. Alternatively, since \(g(k,\tau\to 0)=B(k)\) and \(g(k,\tau\rightarrow\infty)=A(k)+B(k)\), the long- and short-time 'plateau values' of \(g(k,\tau)\) can in principle give \(A(k)\) and \(B(k)\). [25] Under practical experimental conditions, however, it is often challenging to access one or the other of these limits. For us, the long-time plateau \(g(k,\tau\rightarrow\infty)\) is typically accessible whilst the short-time plateau \(g(k,\tau\to 0)\) is difficult to reach even at the highest frame rates, and errors in estimating \(B(k)\) can significantly impact results. [27] Instead, we extract \(B(k)\) by fitting the first 10-15 time points of \(g(k,\tau)\) to a second order polynomial of the form \[g(k,\tau)\approx B(k)+\beta_{1}(k)\tau-\beta_{2}(k)\tau^{2} \tag{14}\] for each \(k\). This form can be justified by substituting \(f(k,\tau)=\int P(D)\exp(-Dk^{2}\tau)\;dD\), the continuum version of Eq. 6, into Eq. 3 and Taylor expanding around \(\tau=0\), making use of the fact that \(P(D)\) is normalised. For completeness, we find \(\beta_{1}(k)=k^{2}A(k)\int P(D)D\;dD\) and \(\beta_{2}(k)=\frac{1}{2}k^{4}A(k)\int P(D)D^{2}\;dD\). The fitted value of \(B(k)\) can then be subtracted from the average of the final 10-15 data points to obtain \(A(k)\). With this, the ISF can be extracted from \(g(k,\tau)\) and passed to CONTIN with an uncertainty estimate based on propagation of errors in \(g(k,\tau)\), the standard error of the data points averaged for \(A(k)\), and the polynomial fit uncertainties for \(B(k)\). ## 4 Results: scaling of DDM Signal with particle size To verify Eq. 12, we performed DDM experiments on quasi-monodisperse suspensions with a range of radii, \(R\). A sample of each suspension from Section 3.1 was diluted to a mass fraction \(\psi=10^{-5}\). Five bright-field videos of each were captured at 200 fps using a 20\(\times\)/0.5 objective without binning, giving 325 nm pixels. Using phase-contrast illumination produced equivalent results. A least squares 3rd order cumulant fit of the DICF from each video gives \(A(k)\), \(B(k)\), and average diffusion coefficient. Identical microscope settings ensured that changes in \(A(k)\) are solely due to particle size, and there is no measurable systematic trend in average intensity with \(R\) so turbidity is negligible in all cases. Each \(A(k)\) was normalised by that of the 60 nm particles, for which \(P(kR)\approx 1\) for all \(k\). This removes the significant \(k\) dependence of the OTF. The range \(1.0\mu\mathrm{m}^{-1}\leq k\leq 2.5\mu\mathrm{m}^{-1}\) was used for all Fig. 
2: Extracted particle diffusivity distributions, \(P(D)\), from a simulated ISF based on a defined bimodal \(P(D)\) (red curves, \(\mu_{1}=1\), \(\mu_{2}=3\), \(\sigma_{1}=0.25\), \(\sigma_{2}=0.4\), \(C_{1}=0.25\)). ISF generated with Gaussian noise at each \(f(k,\tau)\); \(\sigma=10^{-5}\). (a) Graphical representation of output from three different least squares fits; monodisperse (blue), bidisperse (orange), and a cumulant expansion (green). (b) Output from a CONTIN fit. videos to remove the effect of any additional \(k\) dependence. To isolate the power-law dependence on particle size, we remove the the form factor contribution to \(A(k)\) by dividing by the squared form factor for a sphere, \(P^{2}(kR)\) (Eq. 2). For all sizes, the measured diffusivity agrees with the Stokes-Einstein value, Fig. 3a. We can therefore size particles over two orders of magnitude of \(R\) without changing experimental settings, even though \(A(k)<B(k)\) for the smaller particles. Figure 3b shows that for \(R\lesssim 0.55\,\mu\)m, \(A(k)\propto R^{3}\) at constant \(\psi\) (and therefore \(\phi\)), verifying Eq. 12. Since \(N\propto\psi/R^{3}\) and we have previously confirmed experimentally [35] that \(A(k)\propto N\), this is equivalent to a signal of \(R^{6}\) per particle, matching DLS scaling and our theoretical prediction. For the largest particles, \(A(k)\) increases with \(R\) slower than \(R^{3}\), even after correcting for form factor effect, Fig. 3b. Such deviations are not unexpected, because beyond a certain point the convolution of the point spread function with the particle form factor, Eq. 11, is no longer sufficient to describe the particle's appearance. ## 5 Results: bidisperse systems The simplest multimodal suspension is bimodal, with milk [36] being an everyday example. We mixed \(10^{-4}\) mass fraction dispersions of \(240\,\mathrm{nm}\) and \(1.1\,\mu\mathrm{m}\) particles (size ratio \(\approx\) 1:4.6) to produce bidisperse mixtures in which the small particles should contribute between 1% and 99% of the signal to the ISF according to Eq. 12; Table 1. Videos of each sample and of the parent populations were captured at 100 fps, with a 10\(\times\)/0.3 objective and \(1.5\times\) extra magnification (pixel size \(433\,\mathrm{nm}\)). ### Least Squares Fits Fitting model 3 in Section 3.2.1 to our data, which assumes bidispersity, we extract the mean diffusion coefficient and the relative contribution of each population to the ISF, Fig. 4. Comparison with values obtained from fitting a monodisperse model to the unmixed samples, Fig. 4a, shows that the method works well provided that the 'low-signal component' [1] contributes at least \(\approx 2\%\). We found little quantitative difference in taking \(C_{1}\) in model 3 to be constant or allowing it to vary with \(k\), confirming minimal form factor effects. Practically, allowing \(C_{1}\) to vary increased processing time and occasionally caused issues with convergence. The spread of the five measurements of each sample shows that \begin{table} \begin{tabular}{l l} \hline Large-particle mass fraction & Expected small-particle ISF contribution \\ \hline 70\% & 1\% \\ 50\% & 2\% \\ 25\% & 5\% \\ 10\% & 13\% \\ 5\% & 25\% \\ 2.5\% & 40\% \\ 19\% & 63\% \\ 0.5\% & 77\% \\ 0.1\% & 95\% \\ 0.02\% & 99\% \\ \hline \end{tabular} \end{table} Table 1: \(240\,\mathrm{nm}\) and \(1.1\,\mu\mathrm{m}\) particle mixtures used in section 5. 
Figure 4: Results of DDM analysis of \(240\,\mathrm{nm}/1.1\,\mu\mathrm{m}\) sphere mixtures with different compositions, showing five measurements at each composition. Red crosses indicate results of least-squares fits to an explicit bimodal PDD, blue triangles are extracted from CONTIN fits. CONTIN results are shifted slightly along the x-axis for clarity, and are only plotted where each population is expected to contribute \(\gtrsim 5\%\) of the signal. a) Diffusion coefficients (points) compared to average values for the monomodal suspensions (dotted lines). b) Signal fraction from large particles (points), and theoretical expectations for DDM, Eq. 12 (solid line) and DLS at various angles (dashed lines). Figure 3: DDM results for polystyrene spheres of different sizes. (a) Extracted diffusion coefficients as a function of manufacturer provided radius. Dashed line shows Stokes-Einstein prediction (b) Average normalised (see main text) \(A(k)\) against particle radius. Green triangles indicate average \(A(k)\) as extracted from videos. Black crosses are the same data corrected for form factor effects. The dashed line has slope 3. In both plots there are 5 points for each size, which often overlap. fit uncertainties shown by the error bars are underestimated, although this is not indicated by the fit statistics (reduced-\(\chi^{2}\approx 1\)). The uncertainties are comparable across different fitting algorithms, including Minuit's MINOS error estimation,[37] which accounts for correlations between fit parameters. The underestimate could be due to correlations in \(g(k,\tau)\). We did not further investigate these uncertainties because the final error bar in the fitted diffusivities (and therefore average sizes) is clearly defined by variability between measurements. Note that using DLS to obtain the correct diffusivities for the two populations would only be possible if the scattering angle, \(\theta\), is optimised to avoid the form factor minima of each. To highlight this, we plot the theoretical fractional contribution of large particles to the DLS signal at different \(\theta\) for DLS using a 532 nm laser and polystyrene spheres (refractive index = 1.59), Fig. 4(b), with \(\theta=12.8^{\circ}\) and \(173^{\circ}\) being typical of some popular commercial devices. Note that these curves assume that each population is monodisperse; any polydispersity would cause significant shifts. By contrast, there is a unique theoretical prediction for DDM, which is calculated only using Eq. 12 from the mean sizes of the two populations, with the latter being extracted from the same experiment, Fig. 4. Since we remain at low \(k\) far from the form factor minimum, polydispersity of the individual populations can be neglected in calculating this curve. Thus, our protocol can give direct information on the composition of the sample. If the size ratio of our bidisperse suspension is reduced from 1:4 to 1:2, fitting DDM data to model 3 yields a significantly biased diffusivity for the smaller particles even when they contribute as much as 40% of the signal (Appendix D.1). At this size ratio, the corresponding timescales for decorrelation are too close for them to be separated cleanly. There is some indication of local minima in the \(\chi^{2}\) minimisation, suggesting that alternative approaches may improve results. 
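The bidisperse least-squares fit used in this subsection (model 3, i.e. a two-exponential ISF substituted into Eq. 3) can be set up along the following lines. This is a schematic sketch on synthetic data with illustrative parameter values and names, not the analysis code used for the results above; re-running it with the two diffusivities a factor of about 2 apart rather than about 4 qualitatively reproduces the difficulty of separating close decorrelation timescales noted in the previous paragraph.

```python
import numpy as np
from scipy.optimize import curve_fit

def dicf_bidisperse(tau, A, B, C1, D1, D2, k=1.5):
    """g(k,tau) = A(k)[1 - f(k,tau)] + B(k) with a two-population ISF,
    f = C1 exp(-D1 k^2 tau) + (1 - C1) exp(-D2 k^2 tau)."""
    f = C1 * np.exp(-D1 * k**2 * tau) + (1 - C1) * np.exp(-D2 * k**2 * tau)
    return A * (1 - f) + B

# Synthetic DICF at a single k with 1% multiplicative noise
rng = np.random.default_rng(0)
tau = np.arange(1, 1001) / 100.0                      # lag times in s (100 fps)
true = dict(A=1.0, B=0.3, C1=0.8, D1=2.0, D2=0.45)    # D in um^2/s
g = dicf_bidisperse(tau, **true) * (1 + 0.01 * rng.standard_normal(tau.size))

p0 = [1.0, 0.1, 0.5, 1.0, 0.1]
bounds = ([0, 0, 0, 0, 0], [np.inf, np.inf, 1, np.inf, np.inf])
popt, pcov = curve_fit(dicf_bidisperse, tau, g, p0=p0, bounds=bounds)
for name, val, err in zip(["A", "B", "C1", "D1", "D2"], popt, np.sqrt(np.diag(pcov))):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
```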
However, we also observed this bias in analysis of simulated bimodal ISFs when the peaks in \(P(D)\) begin to overlap, so this may represent a more general limitation that is not unique to DDM. DDM therefore cannot be solely relied upon to size bidisperse samples with such low size ratios. ### CONTIN Analysis Least-squares fitting delivered the correct mean diffusivities of the two populations and their number ratio by assuming bidispersity. Alternatively, the cumulant model (model 2 in Section 3.2.1) can be fitted to the data without this assumption to obtain a single mean and polydispersity (see Appendix B), with no indication of bidispersity or poor fit quality. To do better, we turn to CONTIN. CONTIN delivers \(P(D)\), the particle diffusivity distribution (PDD) histogram on a predefined grid of 60 linearly spaced bins in the interval \(0.01\mu\mathrm{m}^{2}\mathrm{s}^{-1}\leq D\leq 5\mu\mathrm{m}^{2}\mathrm{s}^{-1}\). Figure 13 (Appendix D.3) shows the result for each sample in Table 1. Figure 5 shows the PDDs from the third video of each mixture in which the large particles contribute 0.1%, 1%, 10%, and 25% of the particle mass (or 5%, 37%, 87%, and 95% of the signal). This analysis convincingly returns a bimodal distribution of diffusivities provided that the contribution of low-signal component to the signal remains \(\gtrsim 5\%\), comparable but slightly more stringent than for least-squares fitting. This is because too small a contribution to the signal from either species will be removed as 'noise' by the CONTIN regularisation algorithm, whilst least squares will always return two sizes - fitting noise if necessary. Fitting the weighted sum of two Gaussian distributions to the returned PDDs for each video with \(\geq 5\%\) minority signal yields the mean diffusivity and relative signal contribution of each population, Fig. 4. The variation in these properties is comparable to the equivalent least-squares values, and the more stringent signal contribution requirements are visible as the signal reaches \(\approx 5\%\). These fits also return a polydispersity; but there are significant run-to-run variations in the fitted PDD at each composition, Fig. 13, because the regularisation parameter \(\alpha\) is highly noise sensitive. However, CONTIN fits the data to an integral of the PDD, so that there is _a priori_ reason to surmise that the area of each peak may be far less noise sensitive than either the peak width or height. The peak area is a measure of the (weighted) number of particles, Eq. 12. Figure 4b validates this surmise. So, a CONTIN analysis is able to deliver the sizes and relative number of the two populations in our 1:4.6 bidisperse suspension. We also tested CONTIN analysis for a bidisperse suspension in which the two populations differ in size by only a factor of 2. The method again returns a bimodal diffusivity distribution with essentially the same means as least-squares fits (including the aforementioned bias) whenever the low-signal component contributes \(\gtrsim 5\%\) of the signal, Fig. 11 (Appendix D.1). ## 6 Spatial aspects of DDM The \(NR^{6}\) scaling of signal in DLS and DDM means that even a low concentration of large particles will dominate the signal and render it difficult, if not impossible, to detect smaller species. Thus, for example, in our 1:4.6 bidisperse suspension, we need at least 75% mass fraction of the smaller species to contribute at least 5% of the DDM signal, Table 1, for this population to show up in the PDD from a CONTIN analysis. 
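Returning to the CONTIN outputs above: the population means and relative signal contributions plotted in Fig. 4 come from fitting a weighted sum of two Gaussians to each returned PDD. A minimal version of that post-processing step, on a synthetic PDD and with illustrative parameter values, is sketched below; the peak areas then serve as the measure of each population's signal contribution.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(D, a1, mu1, s1, a2, mu2, s2):
    g1 = a1 * np.exp(-0.5 * ((D - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((D - mu2) / s2) ** 2)
    return g1 + g2

# Synthetic PDD on the same kind of grid as in the text (60 linear bins)
D = np.linspace(0.01, 5.0, 60)
pdd = two_gaussians(D, 0.20, 0.45, 0.08, 0.05, 2.0, 0.30)

p0 = [0.1, 0.5, 0.1, 0.1, 2.0, 0.5]
popt, _ = curve_fit(two_gaussians, D, pdd, p0=p0)
a1, mu1, s1, a2, mu2, s2 = popt

# Peak areas give the relative signal contribution of each population
area1 = a1 * abs(s1) * np.sqrt(2 * np.pi)
area2 = a2 * abs(s2) * np.sqrt(2 * np.pi)
print(f"population 1: D = {mu1:.2f} um^2/s, signal fraction = {area1 / (area1 + area2):.2f}")
print(f"population 2: D = {mu2:.2f} um^2/s, signal fraction = {area2 / (area1 + area2):.2f}")
```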
Such considerations are important, e.g., when sizing biomedical nanoparticles, where buffers at physiological ionicity often lead to aggregation. The presence of micron-sized aggregates leads to highly distorted PSDs[38] or even irreproducible results when DLS was used to size nanoparticles.[6] Figure 5: CONTIN results for various 240 nm/1.1 \(\mu\)m sphere mixtures (see legend) with expected signal contributions from large particles of 5%, 37%, 87%, and 95% respectively. Purple vertical lines show average diffusivities from least-squares fits to the monodisperse suspensions. We next show how to use DDM to size one or more populations of small particles in the presence of a numerically-minor population of large particles that dominate the signal. The key is to make use of the spatial information encoded in the images collected in a DDM experiment. The numerical minority of the largest particles means that they are relatively sparse in the images. So, it should be possible to analyse selectively only those portions of the collected images from which these particles are essentially absent, Fig. 6. This can be accomplished either by combining dilution and control of magnification or by using a spatially-resolved analysis. We demonstrate these two approaches using a trimodal stock, Table 2. First, we measured a sequence of samples obtained by successively diluting the stock by a factor of 3. The idea is to identify, if possible, a window of concentration in which the field of view typically does not include any of the largest particles, but the signal level from the smaller populations is still measurable. Videos of each dilution were captured at 400 fps using both 10\(\times\)/0.3 and 60\(\times\)/0.7 objectives without pixel binning, for pixel sizes of 650 nm and 108 nm respectively. For 512\(\times\)512 pixel images (the maximum square images possible at this frame rate with our camera) this corresponds to 2D imaging areas of \(1\times 10^{5}\)\(\mu\)m\({}^{2}\) and \(3\times 10^{3}\)\(\mu\)m\({}^{2}\) respectively. The exact volume imaged is difficult to estimate since the depth of field is strongly \(k\) dependent;[10, 23] an estimate based on geometric optics[39] would be 9 \(\mu\)m and 1 \(\mu\)m for the 10\(\times\) and 60\(\times\) objective respectively. By design, the number of modes found by the analysis should vary as dilution progresses, with the largest population disappearing to reveal smaller particles. With no _a prior_ fixed number of modes to input to least-squares fitting, we used CONTIN, with the input being a grid of 60 logarithmically spaced bins for \(10^{-2}\mu\)m\({}^{2}\)s\({}^{-1}\leq D\leq 10^{2}\mu\)m\({}^{2}\)s\({}^{-1}\). To avoid logarithmically scaled bin heights, the quadrature weight of each bin is set to 1, so that the sum of bin heights describes the contribution from each population rather than the bin area.[32, 33] Figure 7 shows the fitted \(P(D)\) from each repeat using the two different objectives with the stock suspension and three successive dilutions. With the larger field of view (10\(\times\) magnification) the large particles dominate the signal at all dilutions. At the higher (60\(\times\)) magnification, there is still no convincing evidence of the two smaller populations until we reach 3\({}^{2}\)-fold dilution, and their signals remaining robust at 3\({}^{3}\)-fold dilution. Again, there is significant variability in peak shape from run to run, but the overall picture is clear. 
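Stripped to its core, the CONTIN-style inversion used here amounts to a regularised, non-negative least-squares fit of the ISF to a grid of exponential decays. CONTIN proper does considerably more (automatic selection of the regularisation strength, a smoothness rather than ridge penalty, quadrature weights, and error weighting), so the sketch below, with a fixed illustrative α and synthetic data, is only meant to convey the structure of the inversion.

```python
import numpy as np
from scipy.optimize import nnls

k = 1.5                                    # um^-1
tau = np.arange(1, 1001) / 400.0           # lag times in s (400 fps)
D_grid = np.geomspace(1e-2, 1e2, 60)       # 60 log-spaced diffusivity bins, um^2/s

# Kernel: f(k, tau) = sum_j P_j exp(-D_j k^2 tau)
K = np.exp(-np.outer(tau, D_grid) * k**2)

# Synthetic trimodal ISF plus a little noise
P_true = np.zeros_like(D_grid)
for D0, w in [(0.4, 0.5), (1.8, 0.3), (8.0, 0.2)]:
    P_true[np.argmin(np.abs(D_grid - D0))] = w
f = K @ P_true + 1e-3 * np.random.default_rng(1).standard_normal(tau.size)

# Regularised non-negative inversion: min ||K P - f||^2 + alpha^2 ||P||^2 with P >= 0
alpha = 0.05
A_aug = np.vstack([K, alpha * np.eye(D_grid.size)])
b_aug = np.concatenate([f, np.zeros(D_grid.size)])
P_fit, _ = nnls(A_aug, b_aug)
print(P_fit.round(3))
```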
Further dilution reduces the signal to the extent that peaks appear and disappear in the 5 repeats. A disadvantage of the dilution method is that the optimal concentration window is rather narrow, and so can be easily missed in a real-life application. More robustly, one may eliminate the contribution from the largest particles by selecting an appropriate region of interest (ROI) for analysis from the original video. Figure 8 compares the PDDs obtained from one of the 60\(\times\) magnification videos of the stock suspension when we analysed the full video (512\(\times\)512 pixels), and when we analysed a smaller ROI (128\(\times\)128 pixels) chosen to exclude all large particles. Not surprisingly, the former PDDs shows only the largest particles, while the latter shows the two smaller populations. Obviously, success depends on selecting and optimising an ROI. Figure 12 in Appendix D.2 reports the PDD obtained as the size of the ROI is progressively reduced for 60\(\times\) magnification videos of the stock solution, again showing five runs at each stage. That the two smaller populations show up strongly after reducing the ROI by a factor of 4\({}^{2}\) is consistent with diluting the stock by 3\({}^{2}\)-3\({}^{3}\) times to give optimal performance, Fig. 7. The full data set, Fig. 12, illustrates the superiority of this method compared to dilution. Here, the user varies the ROI size and position in real time while analysing a single data set until correct sizing is achieved; dilution requires multiple experiments in which the user must 'hit' the right dilution window and sample position by chance. As we already noted, the peak areas in the CONTIN output con \begin{table} \begin{tabular}{l l l l} \hline Particle Diameter & 60 nm & 240 nm & 1.1 \(\mu\)m \\ \hline Weight Fraction & \(10^{-4}\) & \(10^{-6}\) & \(10^{-6}\) \\ Signal Contribution & 2\% & 1\% & 97\% \\ Number Density (per mm\({}^{3}\)) & \(8\times 10^{8}\) & \(1\times 10^{5}\) & \(2\times 10^{3}\) \\ \hline \end{tabular} \end{table} Table 2: Trimodal system composition for Section 6. Figure 6: Schematic showing how by selecting a suitable region of interest, we can enhance the DDM signal from smaller particles by removing the contribution from large particles. Figure 7: CONTIN fits to videos of the trimodal system defined in Table 2 and dilutions. Each pair of graphs shows the result with 10\(\times\) magnification (top) and 60\(\times\) magnification (bottom) for the labelled dilution. Vertical dotted lines show expected peak positions. tains compositional information via the relative values of \(A(k)\). Following the procedure described in Appendix C, we extracted relative volume fractions, finding \(98.12\pm 0.12\%\), \(1.01\pm 0.10\%\), and \(0.87\pm 0.11\%\) for the \(60\,\mathrm{nm}\), \(240\,\mathrm{nm}\), \(1.1\,\mathrm{\SIUnitSymbolMicro m}\) populations, in excellent agreement with the known \(10^{-4}:10^{-6}:10^{-6}\), Table 2. Conceptually, our technique is similar to centrifuging out large particles prior to DLS. However, our methods need lower quantities of suspension and do not require physical processing, which could be relevant for sparse or delicate samples. Note also that our ROI selection of may be compared with the use of spatially-resolved DDM to verify a theorem in active matter physics [40]. 
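The ROI approach changes only which pixels enter the standard DDM estimator, so it costs nothing beyond re-running the analysis on a cropped image stack. For completeness, a bare-bones version of that estimator, computing the DICF for a chosen ROI by Fourier transforming frame differences and averaging azimuthally, is sketched below on synthetic data; it is a generic DDM calculation, not the analysis code used for the measurements above.

```python
import numpy as np

def ddm_dicf(frames, lags, roi=None, n_bins=50):
    """g(k, tau) = < |FFT[I(t + tau) - I(t)]|^2 >_t, azimuthally averaged.
    frames: (n_t, ny, nx) array; roi: (y0, y1, x0, x1) crop applied to every frame."""
    if roi is not None:
        y0, y1, x0, x1 = roi
        frames = frames[:, y0:y1, x0:x1]
    ny, nx = frames.shape[1:]
    ky = np.fft.fftfreq(ny) * 2 * np.pi            # k in rad/pixel
    kx = np.fft.fftfreq(nx) * 2 * np.pi
    k_mag = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    k_edges = np.linspace(0, k_mag.max(), n_bins + 1)
    g = []
    for lag in lags:
        diff = np.fft.fft2(frames[lag:] - frames[:-lag], axes=(1, 2))
        power = np.mean(np.abs(diff) ** 2, axis=0)
        g.append([power[(k_mag >= lo) & (k_mag < hi)].mean()
                  for lo, hi in zip(k_edges[:-1], k_edges[1:])])
    return k_edges[:-1], np.array(g)

# Toy usage: a random "video" and a 128x128 ROI in one corner of the frame
frames = np.random.default_rng(2).random((50, 512, 512)).astype(np.float32)
k, g = ddm_dicf(frames, lags=[1, 2, 5], roi=(0, 128, 0, 128))
print(g.shape)       # (number of lags, number of k bins)
```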
## 7 Summary and conclusions Our results show that DDM is a facile and robust method for sizing suspensions with multimodal PSDs, but must be coupled with a suitable method for deducing diffusivity distributions from measured ISFs. Specifically, we have demonstrated the use of CONTIN, which is already familiar from long use in DLS. In future work, more advanced algorithms designed for DLS analysis should be explored for potential improvements in resolution and performance [41, 42, 43, 44]. In addition, for accurate uncertainty estimates in fitted parameters, correlations between \(g(k,\tau)\) points could be included and a Bayesian fitting algorithm may be advantageous as an alternative to least-squares fits. Our findings already suggest a protocol for the DDM sizing of a multimodal suspension. One starts by visually inspecting images of the sample, increasing the magnification from a low value until the first particles become visible. At this point, when the largest particles should be comparable to pixel size, record a set of videos and back out the ISF. A CONTIN analysis may already reveal multimodality, or only show a single large population. Regardless, one would then dilute the suspension (and/or increase the magnification) until the signal from the largest particles disappears to reveal smaller populations. If no signal remains, indicated by a time-independent DICF, one may be reasonably confident any small particles are at a number density comparable to or lower than that of the large particles. We conclude that DDM can fill an important gap between low-throughput electron microscopy and high-throughput DLS. While DLS can access shorter timescales and is likely more sensitive [20], form factor effects make multimodal systems challenging. In contrast, access to real space images and low-\(k\) information makes DDM uniquely suited for sizing multimodal suspensions, which are ubiquitous in applications. ## Author Contributions **Bradley**: Investigation, Methodology, Formal Analysis, Software, Visualization, Writing - original draft **Martinez**: Conceptualisation, Methodology, Investigation, Supervision, Formal Analysis **Arlt**: Conceptualisation, Formal analysis, Methodology, Software, Supervision **Royer**: Supervision **Poon**: Conceptualisation, Formal Analysis, Funding Acquisition, Supervision, Writing - review and editing ## Conflicts of interest There are no conflicts to declare. ## Data Availability Data relevant to this publication is available at [https://doi.org/10.7488/ds/3851](https://doi.org/10.7488/ds/3851). ## Acknowledgements JJB was funded by the EPSRC SOFI2 CDT (EP/S023631/1). Part funding also came from ERC PoC award GA 882559 NoChaPFI.
2306.08019
Perfect Plane-Wave to Surface-Wave Coupler Enabled Teleporting Conformal Metasurfaces
A technique for the design of conformal metasurfaces with two spatially disconnected space wave ports connected by a surface wave is presented. The passive and lossless metasurface absorbs the incident plane wave at port 1, converts it perfectly into a surface wave which transports the energy along an arbitrarily shaped/curved metasurface to port 2, then reradiates the captured power as a radiated field with control over its amplitude and phase. Since the incident field is seen to disappear at the input port and reappear at a spatially dislocated port as a new formed beam, the field can be said to have teleported. The metasurface consists of a single, conformal, spatially variant, impedance sheet supported by a conformal grounded dielectric substrate of the same shape. It is modeled using integral equations. The impedances of the sheet are optimized using the adjoint variable method to achieve the perfect teleporting operation from a passive and lossless metasurface. Possible applications include channel optimization for cellular networks, inexpensive power harvesting, sensing, around-the-corner radar, and cloaking.
Jordan Budhu
2023-06-13T14:48:18Z
http://arxiv.org/abs/2306.08019v2
# Perfect Plane-Wave to Surface-Wave Coupler Enabled Teleporting Conformal Metasurfaces ###### Abstract A technique for the design of conformal metasurfaces with two spatially disconnected space wave ports connected by a surface wave is presented. The passive and lossless metasurface absorbs the incident plane wave at port 1, converts it perfectly into a surface wave which transports the energy along an arbitrarily shaped/curved metasurface to port 2, then reradiates the captured power as a radiated field with control over its amplitude and phase. Since the incident field is seen to disappear at the input port and reappear at a spatially dislocated port as a new formed beam, the field can be said to have teleported. The metasurface consists of a single, conformal, spatially variant, impedance sheet supported by a conformal grounded dielectric substrate of the same shape. It is modeled using integral equations. The impedances of the sheet are optimized using the adjoint variable method to achieve the perfect teleporting operation from a passive and lossless metasurface. Possible applications include channel optimization for cellular networks, inexpensive power harvesting, sensing, around-the-corner radar, and cloaking. Conformal, Metasurface, Grating coupler ## I Introduction The design of metasurfaces to create tunnel-like connections through space (with a finite travel time from port to port) connecting two space wave ports at distant locations is addressed in this paper (see Fig. 1). The incident plane wave field is absorbed at port 1, perfectly converted into a surface wave which connects the two ports and transfers power between them, and reradiated from port 2 located at a distant location. The reradiated field from port 2 is designed with arbitrary control over its phase _and_ amplitude in a completely passive and lossless way utilizing all of the power contained in the incident field over port 1. As the metasurface transfers all of the available power in the incident wave to the reradiated wave, the operation is said to be perfect. The enabling technology for the presented designs is perfect plane-wave to surface-wave couplers. Although these devices have been demonstrated before for planar [1, 2, 3, 4] and cylindrical surfaces [5, 6, 7], perfect coupling over an arbitrarily shaped non-canonical conformal surface has not been shown. Furthermore, complex-valued field control over non-canonical conformal surfaces using passive and lossless metasurfaces has also not been shown. In this paper, perfect plane-wave to surface-wave couplers enable teleportation of incident beams to spatially dislocated ports along any desired shape surface and with complex-valued field control of the reradiated beam from a completely passive and lossless metasurface. The metasurface itself is a textured interface, modeled as a spatially variant homogenized purely reactive impedance sheet supported by a grounded dielectric substrate [8]. The metasurface is capable of complex-valued radiated field control and seamless conversion between guided and unguided modes in a lossless or perfect manner. It is modeled using integral equations [9, 10, 11], the integral equations are solved using the method of moments technique [12], and the reactances of the impedance sheet optimized using the adjoint variable method [13, 11, 14]. For an overview of the design procedure, see [11, 15]. A customized integral equation for the design of the conformal cases in this paper can be found in the supplementary material of [16]. 
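Before the designs are presented, the global power bookkeeping behind the word "perfect" can be made concrete with a few lines of arithmetic. For the 3λ0-wide ports used later in Section II, the cosine-tapered output aperture amplitude is scaled so that its integrated power density matches that of the unit-amplitude incident plane wave, approximating the power leaving the broadside output aperture by |E|²/2η0. The snippet below performs only this bookkeeping with illustrative variable names; it is not part of the design code.

```python
import numpy as np

eta0 = 376.73                 # ohms, free-space impedance
lam0 = 3e8 / 10e9             # wavelength at 10 GHz
E0 = 1.0                      # V/m, incident plane-wave amplitude
w_in = w_out = 3 * lam0       # port widths (each port spans 3 wavelengths)

# Power per unit length (2-D problem) entering the input port
P_inc = E0**2 / (2 * eta0) * w_in
print(f"P_inc = {P_inc * 1e3:.2f} mW/m")      # ~0.12 mW/m, the value quoted in Section II

# Cosine-tapered output aperture |E(x)| = Emax cos(pi x / w_out) with uniform phase
x = np.linspace(-w_out / 2, w_out / 2, 2001)
taper = np.cos(np.pi * x / w_out)
taper_int = np.sum(taper**2) * (x[1] - x[0])  # integral of the squared taper
Emax = np.sqrt(P_inc * 2 * eta0 / taper_int)
print(f"Emax = {Emax:.3f} V/m")               # sqrt(2)*E0 when the two ports have equal width
```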
The metasurface can be made to conform to any shape such as the corner of a building (see Fig. 7 for preview) or a general non-canonical curvilinear surface (see Fig. 13 for preview) for example. We will first present a planar design to demonstrate the perfect teleportation and understand its operation. Subsequently, the same functionality from conformal geometries will be shown. Similar planar teleporting metasurfaces have appeared in recent scientific works. In [2], a planar teleporting metasurface was designed by juxtaposing three separate metasurfaces, two space wave to surface wave couplers separated by a metasurface supporting a pure surface wave. The overall metasurface system laterally shifts a plane wave incident at an angle of 30\({}^{\circ}\) on the left-hand side of the metasurface to a transmitted wave emanating from the right-hand side of the metasurface at an angle of \(-7.2^{\circ}\) with respect to the normal. Due to the analytical design procedure, the metasurface junctions scatter and reduce the efficiency. Also, the coupling metasurfaces do not perfectly convert the incident fields to surface wave fields. The authors report an efficiency of only Fig. 1: Teleporting metasurface geometry. Note, the unit cells are shown enlarged for clarity. The actual metasurface contains 200 unit cells each \(\lambda_{0}/20\) wide. 10%, and hence the teleportation cannot be deemed perfect. Furthermore, the approach cannot control both the phase and amplitude of the transmitted field and hence does not have the capability of complex-valued field control. The metasurfaces in the referenced work are also not conformal. In [17], a planar teleporting metasurface is designed using the principles of \(\mathcal{PT}\) symmetry. A reactive layer is sandwiched between an absorbing lossy Salisbury screen layer matched to free space and an active layer with a negative impedance also matched to free space. An incident plane wave is absorbed nearly completely by the lossy layer, while the inductive perforated layer allows the remaining small amount of power to couple to the active layer where it is resonantly amplified to recreate or teleport the incident plane wave to the opposite side. Although this device requires active and lossy components, the loss and gain is balanced according to \(\mathcal{PT}\) symmetry and hence represents an overall lossless system. Nonetheless, the structure is planar, requires active layers which complicates fabrication and cannot achieve beamforming. This paper is organized as follows. In section II, we present a planar example. Next, in section III, two conformal examples will be presented. The first contains planar coupling regions and a conformal surface wave region. The second contains both conformal coupling regions and conformal surface wave regions. Some concluding remarks are provided in section IV. An \(e^{j\omega t}\) time convention is assumed and suppressed throughout the paper. ## II Teleporting Metasurface Design and Analysis We first present a planar teleporting metasurface to understand the teleportation function as it pertains to metasurfaces. The metasurface geometry is shown in Fig. 1. The electromagnetics problem is 2-dimensional (out-of-plane wavenumber is zero) and hence the geometry is invariant in the \(z\)-direction. 
The patterned metallic cladding, described by a spatially variant homogenized sheet impedance \(\eta_{s}(x)\), is supported by a grounded dielectric substrate of thickness \(d=1.27\,\mathrm{mm}\) (\(50\mathrm{mil}\)) and complex relative permittivity \(\epsilon_{rc}=e_{r}(1-jtan\delta)=2.2-j0.002\). The impedance sheet is broken into \(200\) unit cells of width \(\lambda_{0}/20\) each at \(f=10\mathrm{GHz}\). The metasurface is therefore \(w=200(\lambda_{0}/20)=10\lambda_{0}\) wide along the \(x\)-axis. The design of the teleporting metasurface begins by specifying the desired total field tangential to the metasurface. The incident field is assumed a normally incident plane wave illuminating only the right-hand portion of the metasurface between \(\lambda_{0}\leq x\leq\lambda_{0}\) as shown in Fig. 2a. The reradiated (scattered) field is defined to have both a cosine tapered amplitude and uniform phase (complex-valued field control) exiting from only the left-hand portion between \(-4\lambda_{0}\leq x\leq-\lambda_{0}\). Critically, the absolute level of the amplitude in V/m of the scattered field is chosen to conserve power globally meaning the total power in the incident field, \(P_{inc}=|E_{0}|^{2}/2\eta_{0}=0.12\,\mathrm{mW/m}\) for a unit strength plane wave, is equal to the total power in the scattered field. This definition will lead to perfect teleportation, i.e., all the power is transferred from the incident field to the scattered field. ### Local Active/Lossy Design The metasurface design algorithm [11] starts from a local active/lossy design used as a seed for the non-local passive/lossless design. The local active/lossy design is obtained from the solution of the governing integral equations given the desired total field, \(E^{tot}\), associated with the wavefront transformation. By solving the integral equation, the surface current density, \(J_{s}\), on the metasurface is obtained, thereby allowing for the direct calculation of the metasurface impedances, \(\eta_{s}\), following from the boundary condition \(\eta_{s}=E^{tot}/J_{s}\) (see Fig. 2b). As expected, the metasurface is lossy over the incident field region, and contains gain in the scattered field region. Furthermore, the loss and gain is balanced. A transmission line model can be used to understand the result. Modelling the panel as a transmission line terminated in a shunt impedance representing the metasurface in parallel with an Figure 2: (a) Specification of incident and scattered field amplitudes at the metasurface plane. NOTE: the input/output ports have been swapped with respect to Fig. 1. Metasurface sheet impedances of (b) Initial local active/lossy metasurface design. (c) Subsequent non-local passive and lossless design. (d) Zoomed in view within the input port region of the non-local passive and lossless design shown superimposed with the sheet impedance modulation function \(\eta_{s}=-j53\Omega\left[1+0.1415\sin\left(\frac{2\pi x}{p}\right)\right]\) where \(p=\lambda_{0}/2.58\). 
inductance representing the thin grounded dielectric substrate, the sheet impedance can be calculated as \[\eta_{t,inc}=-\frac{\eta_{g}\eta_{d}\tan\beta d\left(1+\Gamma\right)}{\eta_{d}\tan \beta d\left(\Gamma-1\right)-j\eta_{0}\left(1+\Gamma\right)}=27.54-j98\Omega \tag{1}\] where \(\eta_{s,inc}\) is the sheet impedance within the illuminated portion of the metasurface, \(\Gamma\) is the reflection coefficient looking into the parallel load, \(\beta\) is the wavenumber in the dielectric region, and \(\eta_{0}\) and \(\eta_{d}\) are the intrinsic impedances of the free space and dielectric regions, respectively. For perfect absorption, the reflection coefficient should be zero. The resulting sheet impedance in (1) matches the numerically obtained value in Fig. 2b. In order for power conservation, the power absorbed in the lit region must exit the output region, and hence the sheet impedance in the scattered field region can be described as \(\eta_{s,sca}=-27.54-j98\Omega\). Note, since the shape of the amplitude differs between the two port regions, the sheet impedance tapers are different at the ends of their respective regions. Thus, the desired functionality of teleportation can be achieved with a local balanced active/lossy metasurface described by the sheet impedances in Fig. 2b. In this case, the beam truly teleports as the incident energy is not transported to the output beam but rather the output beam is created through resonant amplification given some small diffractive coupling. Its balanced loss and gain operating principle is similar to the balanced loss and gain of the \(\mathcal{PT}\) symmetric teleporting structure in [17]. In both cases, although the incident energy itself does not teleport, a small diffractive coupling is necessary to excite the gain medium to resonantly create the teleported beam. ### _Non-local Passive/Lossless Design_ A purely passive/lossless metasurface eases fabrication and avoids unnecessary complexity. To that end, the local active/lossy metasurface is used as a seed design to obtain a non-local passive/lossless design with the same performance. Since the power is balanced, a surface wave can carry the power from the lossy region to the active region. In this case, the metasurface tunnels the energy to the output port rather than teleports it. However, from an outside perspective, the operation is the same. The integral equation solver is coupled with an adjoint variable optimizer to obtain the non-local design [11]. The non-local metasurface sheet impedance is shown in Fig. 2c and the near fields computed from the non-local design are shown in Fig. 3. Three distinct regions are evident, a spatially modulated input and output port connected by a nearly constant surface wave region. Sharp discontinuities in the sheet impedance of Fig. 2c excite a number of auxiliary surface waves (different from the tunnel surface wave connecting the two ports) in the input and output port regions responsible for distributing power transversally within the port region facilitating passivity and losslessness [11, 18]. These perturbations also aid in obtaining a seamless transition region between the ports and the connecting surface wave region increasing the overall port-to-port power transfer efficiency. The surface waves can be visualized in Fig. 4a. Fig. 4a shows the amplitude of the plane wave spectrum of the scattered electric field evaluated on the metasurface. The tunnel surface wave responsible for transporting power between the ports is Fig. 
4: (a) Amplitude spectrum of the scattered electric field at the metasurface plane. (b) Zoomed in view of Fig. 3a within the tunnel region to show tunnel surface wave wavelength. (c) Zoomed in view of Fig. 3a within the input port region to show port surface wave wavelength. (d) Dispersion curve relating the surface wavenumber and wavelength to the homogenized sheet reactance. Fig. 3: (a) Real part of the total electric field. (b) Zoomed in to show surface wave connecting the input and output ports. evident at \(\beta_{sw,t}=-7.95k_{0}\). This wavenumber is in agreement with the sheet impedance of \(\eta_{s}=-j22\Omega\) in the tunnel region in Fig. 2c since a sheet of this impedance supports a surface wave of wavenumber \(\beta_{sw,t}=-7.95k_{0}\). This can be verified by viewing the dispersion curves plotted in Fig. 4d. In Fig. 4d, a plot of the surface wavenumber and wavelength versus the sheet reactance of the metasurface is shown. The plot is obtained using the Transverse Resonance Technique [11]. A reactive sheet impedance of \(\eta_{s}=-j22\Omega\) is seen to support a surface wave of wavenumber \(\beta_{sw,t}=-7.95k_{0}\). The surface wave wavelength can also be verified to agree with the curves of Fig. 4d. In Fig. 4b, a zoomed in view of the surface wave in the tunnel region is shown. The measured wavelength agrees with the dispersion curves. The amplitude spectrum in Fig. 4a also shows another peak at \(\beta_{sw,p}=-2.58k_{0}\), which is the surface wave generated from the incident plane wave within the input port region. Figure 4d shows this surface wavenumber is associated with a sheet impedance of \(\eta_{s}=-j53\Omega\) in agreement with the average of the sheet impedances shown in Fig. 2c within the input port region. The sheet impedance modulation within the input port region can be understood by noting that for broadside radiation of the \(n=-1\) harmonic from a surface wave of wavenumber \(\beta_{sw,p}=-2.58k_{0}\), the period of the modulation should be \(k_{\mathrm{cm}}=\beta_{sw,p}-2\pi/p\Rightarrow 0=\beta_{sw,p}-2\pi/p\) or \(p=2\pi/\beta_{sw,p}=\lambda_{0}/2.58\). Shown in Fig. 2d, a sinusoidal sheet impedance modulation function with this period is fit to the non-local passive/lossless metasurface sheet reactances. As can be seen, the modulation period corresponding to the \(n=-1\) harmonic for broadside radiation fits the optimized sheet reactances well. The perturbations of the reactances around this analytic result leads to the perfect coupling. No other spatial harmonics fall within the light cone. A zoomed in view of the generated surface wave within the input port region is shown in Fig. 4c. The measured surface wave wavelength is also in agreement with the dispersion curves in Fig. 4d. Finally, a remark on the spectrum limits. The spectrum has a cut-off at \(\beta_{sw}=10k_{0}\) which is the highest wavenumber possible as the onset of a stop-band at \(\beta_{sw}=\pi/d=0.1k_{0}\) for the chosen unit cell discretization of \(d=\lambda/20\) occurs at this wavenumber. This corresponds to a maximum sheet impedance of \(-j20\Omega\) according to Fig. 4d. For this reason, hard limits of \(-j20\Omega\) on the impedances during the optimization phase were set, and is why the tunnel sheet impedance is approximately \(-j20\Omega\). The remaining evanescent spectrum is due to the sharp perturbations in the sheet impedances of Fig. 2c. 
These surface waves are responsible for redistributing power transversally within the port regions and at their transitions with the surface wave region in order to achieve passivity and losslessness. The power in the surface wave can be seen to grow approximately linearly in agreement with the conclusions in [3] in Fig. 5a, although here the spectrum (Fig. 5a) contains many spatial harmonics rather than the single harmonic considered in [3] and the metasurface is strongly non-local. The figure shows the \(\chi\)-component of the Poynting vector, \(S_{x}^{sca}=-(1/2)Re[E_{x}^{sca}H_{y}^{sca}]\). \(H_{y}^{sca}\) was obtained by taking the inverse Fourier transform of \(\tilde{E}_{x}^{sca}k_{x}/\eta_{0}k_{0}\), where \(\tilde{E}_{x}^{sca}\) is the electric field spectrum at the plane of the metasurface (the amplitude of \(\tilde{E}_{x}^{sca}\) is shown in Fig. 4a). The power density in the surface wave is shown to increase from zero within the input port region approximately linearly as more of the power in the plane wave is absorbed, then become constant through the tunnel region as the power is carried to the output port region, and finally decay approximately linearly in the output port region to zero as the Figure 5: (a) \(x\)-component of the Poynting vector at the plane of the metasurface. (b) Line cut of near electric scattered field amplitude at a height of one wavelength above the metasurface. Figure 6: COMSOL Multiphysics simulation results. (a) \(x\)-component of the Poynting vector at the plane of the metasurface. (b) Real part of the total electric field. (c) Line cut of near electric scattered field amplitude at a height of one wavelength above the metasurface. (d) Amplitude spectrum of the scattered electric field at the metasurface plane. power is shed into the scattered beam. The oscillations in the power density profile occur due to the interference between the similarly polarized incident and surface wave fields [16]. Next, to show the metasurface perfectly converts the incident plane wave at port 1 to the complex-valued scattered field at port 2, the near electric field was calculated along a horizontal line one wavelength above the metasurface. In Fig. 5b, the stipulated scattered field amplitude (replicated from Fig. 2a), the directly calculated (from the induced surface currents) scattered near field amplitude, and the backprojected far fields are all shown compared. It is evident that the non-local metasurface perfectly creates the stipulated near field amplitude, and hence transfers all power in the incident plane wave to the output scattered field. Integrating the power contained in the near fields along a horizontal line one wavelength above the metasurface yields \(P_{sca,stip}=0.12\)mW/m and \(P_{sca,n\prime}=0.1198\)mW/m giving a port-to-port transfer efficiency of 99%. Lastly, to provide an independent verification of the teleporting metasurface, the design was imported into COMSOL Multiphysics and a full-wave simulation performed. The results are compared to the MoM results in Fig. 6. As can be seen, the independent full-wave verification corroborates our results. ## III Conformal Teleporting Metasurfaces By incorporating conformal geometry modelling capabilities into the integral equation/moment method algorithm [16], teleporting metasurfaces connecting two distant non-colinear ports in space can be accomplished. 
These types of teleporting metasurfaces can be useful for channel optimization in urban environments where the window-pane sized metasurface conforms to the corner of a building for example (see Fig. 7). ### _Conformal Metasurface for Communications Channel Optimization_ In Fig. 8, the geometry of a conformal teleporting metasurface which routes the surface wave around a \(90^{\circ}\) bend is shown. The metasurface is parameterized by a superquadric function with \(p=10\), \[\begin{split} x(u,v)&=\frac{v}{\sqrt[]{\frac{\cos u }{a}}+\left(\frac{\sin u}{b}\right)^{p}}\cos u\\ y(u,v)&=\frac{v}{\sqrt[]{\frac{\cos u}{a}}+\left( \frac{\sin u}{b}\right)^{p}}\sin u\end{split} \tag{2}\] The parameterization is also shown graphically in Fig. 9. The parameters \(a\) and \(b\) control the aperture length along the \(x\)-axis and \(y\)-axis, respectively, and the parameter \(d\) controls the substrate thickness. The parameter \(p\) controls the metasurface shape and radius of curvature at the bend. For \(p=2\), for example, (2) defines a quadrant of a circular annulus in the \(xy\)-plane. As \(p\rightarrow\infty\), the parameterization approaches a quadrant of a square ring with thickness \(d\). When \(v=1\), the superquadric has the largest radius (the curve \(g\) in Fig. 9). The impedance sheet will be placed along this arc. When \(v=1-d/a\), the superquadric has the smallest radius (the curve \(e\) in Fig. 9). This is where the perfectly conducting ground plane will be placed. \(\forall v\) between these two values, the space between is filled (the dielectric material of the substrate will fill this area). For the parameters in (2), the greatest radius of curvature at the \(90^{\circ}\) bend point is \(R=0.5874_{0}\)[16]. The metasurface has length \(a=54_{0}\) along the \(x\)-axis, \(b=54_{0}\) along the \(y\)-axis, and thickness \(d=1.27\)mm (50mil). It is constructed from the same three layer stack: a patterned metallic cladding represented as a spatially variant homogenized impedance sheet, a dielectric spacer, and a ground plane. The incident plane wave has its \(\vec{k}\) vector oriented along the \(-y\)-axis and illuminates the portion of the metasurface between \(-4\lambda_{0}\leq x\leq-\lambda_{0}\). The plane wave will be absorbed at this space wave port and converted into a surface wave. The surface wave will travel around the bend Fig. 8: Conformal teleporting metasurface geometry. Fig. 7: (a) An urban environment. (b) Simulation results of a conformal teleporting metasurface which routes plane waves around corners of buildings. delivering the power to port 2 defined along \(\lambda_{0}\leq y\leq 4\lambda_{0}\), where it will be formed into a shaped beam corresponding to an aperture field with uniform amplitude and phase. Figure 10a shows the metasurface sheet impedances for both the local active/lossy metasurface design and the non-local passive/lossless metasurface design. Figure 10b shows the near field amplitude taken along a contour following the metasurface and one wavelength above the metasurface. As can be seen, the non-local passive/lossless metasurface performs identically to the local active/lossy design. Finally, the real part of the total near electric field is shown in Fig. 7b. As in the planar case, the metasurface is performing the function of perfect teleportation, only in this case, the beam is seen to teleport around the corner of a building. 
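A few lines of code reproduce the contour geometry of Eq. (2). The sketch below assumes the standard superquadric (p-norm) form for the radial term and uses the parameter values quoted above (a = b = 5λ0, d = 1.27 mm, p = 10); it simply generates the impedance-sheet (v = 1) and ground-plane (v = 1 - d/a) contours and is not the solver's discretisation code.

```python
import numpy as np

lam0 = 3e8 / 10e9            # wavelength at 10 GHz
a = b = 5 * lam0             # aperture lengths along x and y
d = 1.27e-3                  # substrate thickness (50 mil)
p = 10                       # superquadric exponent

def superquadric(u, v, a, b, p):
    """Eq. (2): quadrant superquadric contour, 0 <= u <= pi/2."""
    r = v / ((np.cos(u) / a) ** p + (np.sin(u) / b) ** p) ** (1.0 / p)
    return r * np.cos(u), r * np.sin(u)

u = np.linspace(0, np.pi / 2, 801)
x_sheet, y_sheet = superquadric(u, 1.0, a, b, p)            # impedance sheet (outer contour)
x_gnd, y_gnd = superquadric(u, 1.0 - d / a, a, b, p)        # ground plane (inner contour)

print(x_sheet[0] / lam0, y_sheet[-1] / lam0)                # both 5, i.e. legs of length a = b
arc = np.sum(np.hypot(np.diff(x_sheet), np.diff(y_sheet)))
print(f"sheet arc length = {arc / lam0:.2f} wavelengths")
```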
### _Sinusoidally Modulated Exponential Metasurface Coupler_ The final example is a perfect conformal teleporting metasurface where the coupling regions are not planar. The geometry and its parameterization are shown in Fig. 11a and in Fig. 12, respectively [16]. The parameterization can be described as a sinusoidally modulated exponential \[\begin{split} x(u,v)=u&-w\,/\,2\leq u\leq w\,/\,2\\ y(u,v)=ce^{\omega}+p\sin bu+v&-d\leq v\leq 0\end{split} \tag{3}\] The parameter \(w\) controls the aperture length along the \(x\)-axis, and the parameter \(d\) controls the substrate thickness. The parameters \(c\) and \(a\) controls the amplitude and the growth rate of the exponential function, which acts as a fundamental term of which the sinusoid is added to. The parameters \(p\) and \(b\) control the amplitude and period of the sinusoidal term. The impedance sheet will be placed along the curve resulting from \(v=0\), The perfectly conducting ground plane will be placed along the curve at \(v=-d\). \(\forall v\) between these two values, the space between is filled (the dielectric material of the substrate will fill this area). The metasurface width, as projected onto the \(x\)-axis, is \(w=10\lambda_{0}\). The incident field is assumed a normally incident plane wave illuminating only the left-hand portion of the metasurface between \(-A\lambda_{0}\leq x\leq-1\lambda_{0}\). Defining the scattered field as \(E_{2}^{sca}=e^{-hy\varphi}\), for \(\lambda_{0}\leq x\leq 4\lambda_{0}\), and solving the governing integral equation, results in the active/lossy Fig. 11: (a) Conformal metasurface geometry. (b) Metasurface sheet impedances of initial local active/lossy (Ac/Ly) metasurface design, and subsequent non-local passive and lossless (Pa/L). (c) Line cut of near electric scattered field amplitude at a height of one wavelength above the metasurface. Fig. 10: (a) Metasurface sheet impedances of initial local active/lossy (Ac/Ly) metasurface design, and subsequent non-local passive and lossless (Pa/L) design vs. parameter \(u\). (b) Line cut of near electric scattered field amplitude at a height of one wavelength above the metasurface. Fig. 9: Parameterization of conformal metasurface. design impedances shown in Fig. (b)b. The corresponding passive/lossless design's reactances after optimization are also shown in Fig. (b)b. Finally, the simulation results of the real part of the total near electric field for the excited passive/lossless design is shown in Fig. 13. Perfect teleportation is observed, as well as perfect coupling of a normally incident plane wave to a surface wave over a conformal surface. A COMSOL Multiphysics full-wave verification was also performed for this conformal metasurface design. The results are shown in Fig. 14. The figure shows some imperfect coupling and/or impedance matching between the port region and the tunnel region as some scattered electric field is present over the input port region. Nonetheless, the full-wave results again corroborate our results. ## V Conclusion Finally, note the primary purpose of this paper is to show that perfect teleportation is possible and what the sheet impedances look like. Support for dielectric materials is currently being added to the unit cell design process required to realize these metasurfaces outlined in [19]. Once this is complete, follow-on work involves translating the optimized impedance sheets for all designs in this paper to patterned metallic claddings. 
## Acknowledgment The author would like to acknowledge contributions by Professor Anthony Grbic and Dr. Luke Szymanski from the University of Michigan on previous related works.
2306.01983
Mitigating Backdoor Attack Via Prerequisite Transformation
In recent years, with the successful application of DNNs in fields such as NLP and CV, their security has also received widespread attention. The authors of BadNets proposed the backdoor attack: the attacker implants a backdoor into the model by poisoning the training samples. A backdoored model shows no abnormal behavior on a clean validation set, but inputs carrying the trigger are misclassified into the attacker's designated category, or randomly into a category different from the ground truth. This attack seriously threatens real-world applications of DNNs such as autonomous driving and object detection. This article proposes a new method to defend against backdoor attacks. We refer to the features in the area covered by the trigger as trigger features, and the remaining areas as normal features. We introduce prerequisite calculation conditions during training; these conditions have little impact on either normal features or trigger features, and still allow a standard backdoor model to be trained. On a validation set D'val processed with the same prerequisite conditions, a model trained under these conditions performs the same as an ordinary backdoor model. However, on a validation set Dval without the prerequisite conditions, the validation accuracy decreases only slightly (7%~12%), while the attack success rate (ASR) decreases from 90% to about 8%. We call this method Prerequisite Transformation (PT).
Han Gao
2023-06-03T02:33:38Z
http://arxiv.org/abs/2306.01983v1
# Mitigating Backdoor Attack Via Prerequisite Transformation ###### Abstract In recent years, with the successful application of DNN in fields such as NLP and CV, its security has also received widespread attention. (Author) proposed the method of backdoor attack in Badnet. Switch implanted backdoor into the model by poisoning the training samples. The model with backdoor did not exhibit any abnormalities on the normal validation sample set, but in the input with trigger, they were mistakenly classified as the attacker's designated category or randomly classified as a different category from the ground truth, This attack method seriously threatens the normal application of DNN in real life, such as autonomous driving, object detection, etc.This article proposes a new method to combat backdoor attacks. We refer to the features in the area covered by the trigger as trigger features, and the remaining areas as normal features. By introducing prerequisite calculation conditions during the training process, these conditions have little impact on normal features and trigger features, and can complete the training of a standard backdoor model. The model trained under these prerequisite calculation conditions can, In the verification set \(\text{D}^{\prime}_{\text{val}}\) with the same premise calculation conditions, the performance is consistent with that of the ordinary backdoor model. However, in the verification set \(\text{D}_{\text{val}}\) without the premise calculation conditions, the verification accuracy decreases very little (7%~12%), while the attack success rate (ASR) decreases from 90% to about 8%.Author call this method Prerequisite Transformation(PT). ## 1 Introduction: In real life, users often use models or data provided by third parties to achieve their goals when training models due to insufficient computing power or insufficient data sets. This provides an opportunity for attackers to attack by changing the training data to train or provide a model with a backdoor, To reduce the accuracy of the model or force it to misclassify with or without targets. Backdoor attacks were first proposed in a paper by XX in XX year. By adding triggers to the training data, the attacked model is trained on this data set denoted by \(\text{D}_{\text{poisoned}}\) to generate a model \(\text{F}_{\text{trigger}}\) (\(\text{F}_{\text{t}}\)) with backdoor. This backdoor model will perform well on a normal data set,, it will misclassified to the category specified by the attacker or to a different category than Ground Truth,while on inputs with backdoor triggers. The representation of a Trigger is a certain patch in the image, such as the white patch in the bottom right corner of Fig1. The backdoor attack has a significant property, which is also the essence of the neural network's strong learning ability about the feature of the trigger location, that is, it is caused by over fitting. We call the feature of the trigger location as trigger feature, while the area not covered by the trigger is called normal feature. This paper proposes an idea based on this - to add some prerequisites during the model learning process, which have a small impact on the model's learning of normal features, while the triggering features have a small impact on the training
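For concreteness, the BadNets-style poisoning described above, stamping a small white patch in the bottom-right corner of a subset of training images and relabelling those images to the attacker's designated category, can be sketched as follows. This is a generic illustration of trigger injection with made-up image sizes, patch size, and poison rate; it is not the training pipeline used in this work.

```python
import numpy as np

def poison_dataset(images, labels, target_class, poison_rate=0.1,
                   patch_size=3, patch_value=1.0, seed=0):
    """Stamp a white square trigger in the bottom-right corner of a random
    subset of images and relabel them to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch_size:, -patch_size:] = patch_value    # trigger patch
    labels[idx] = target_class                               # attacker's label
    return images, labels, idx

# Toy usage on a fake grayscale dataset (pixel values in [0, 1])
X = np.random.default_rng(1).random((1000, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_class=0)
print(len(poisoned_idx), X_poisoned[poisoned_idx[0], -3:, -3:])
```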
2305.09657
Newad: A register map automation tool for Verilog
Large scale scientific instrumentation-and-control FPGA gateware designs have numerous run-time settable parameters. These can be used either for user-level control or by automated processes (e.g., calibration). The number of such parameters in a single design can reach on the order of 1000, and keeps evolving as the gateware and its functionality evolves. One must keep track of which module the registers belong to, where the registers need to be decoded, and how to express the properties (or even semantics) of the register to the next level of user or software. Note, the registers maybe embedded anywhere throughout the module hierarchy. Purely manual handling of these tasks by HDL developers is considered burdensome and error-prone at this scale. Typically these registers are writable via an on-chip bus, vaguely VME-like, that is controlled by an on-chip or off-chip CPU. There have been several attempts in the community to address this task at different levels. However, we have found no tool that is able to generate a register map, generate decoders and encoders with minimal overhead to the developer. So, here we present a tool that scours native HDL source files and looks for specific language-supported attributes and automatically generates a register map and bus decoders, respecting multiple clock domains, and presents a JSON file to the network that maps register names to addresses.
Vamsi K Vytla, Larry Doolittle
2023-05-16T17:56:51Z
http://arxiv.org/abs/2305.09657v1
# Newad: A register map automation tool for Verilog ###### Abstract Large scale scientific instrumentation-and-control FPGA gateware designs have numerous run-time settable parameters. These can be used either for user-level control or by automated processes (e.g., calibration). The number of such parameters in a single design can reach on the order of 1000, and keeps evolving as the gateware and its functionality evolves. One must keep track of which module the registers belong to, where the registers need to be decoded, and how to express the properties (or even semantics) of the register to the next level of user or software. Note, the registers maybe embedded anywhere throughout the module hierarchy. Purely manual handling of these tasks by HDL developers is considered burdensome and error-prone at this scale. Typically these registers are writable via an on-chip bus, wugly VMFL-like, that is controlled by an on-chip or off-chip CPU. There have been several attempts in the community to address this task at different levels. However, we have found no tool that is able to generate a register map, generate decoders and encoders with minimal overhead to the developer. So, here we present a tool that scours native HDL source files and looks for specific language-supported attributes and automatically generates a register map and bus decoders, respecting multiple clock domains, and presents a JSON file to the network that maps register names to addresses. Verilog; Verilog HDL; VHDL; Register map; Automation; Address map; Code generation + Footnote †: _Copyright:_ + Footnote †: _Copyright:_ simple but clean and allow for cleanly encoding lots of register related information. To leverage the usage of attributes, and successfully parse them, it made sense to use an existing Verilog parser. In order to accomplish this, we looked at iverilog and Yosys parsers. Attribute parsing wasn't sufficiently implemented in either of those open source parsers, and it took a few contributions from the authors to add the necessary features. After using both, we stuck with Yosys, for both technical and licensing reasons. Project Details #### Reisters Registers maybe marked as "automatic" using an attribute in any module of the design hierarchy. Registers are marked automatic at module definition site, as shown in the code sample below. A developer may also notify newad to generate additional register logic, such as a write strobe for the register, a read strobe, a signal that holds the write value only for single cycle, etc. ``` moduleprng( inputclk, output[31:0]rnda, output[31:0]rndb, (*external*) input[0:0]run, (*external,signal_type="plus-we"*) input[31:0]iva, inputiva_we,//special trailing_we ); ``` #### Module instances If a module has any registers described as "automatic", it is expected that signals/wires for the register need to be routed through its module instantiation site. When the developer marks the instantiation site with "automatic", newad then generates a macro for that instantiation site. The macro contains wires for the automatic registers of to the module being instantiated. (*lb_automatic*) prngpng(. clk(clk),. rnda(rnda),. rndb(rndb) 'AUTOMATIC_prng ); #### Verilog header files newad is run as a pre-compile step. Once newad is run the macros are populated inside generated header files. The developer is expected to include these header files. This strictly keeps all generated code away from source. 
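To make the flow concrete, the listing below gives a deliberately simplified, self-contained illustration of the idea: scan a Verilog source for ports carrying an external attribute, assign base addresses, and emit a JSON register map of the kind shown next. newad itself relies on a real parser (Yosys) rather than regular expressions, and its address-assignment policy differs, so treat the pattern, the default base address, and the access rules here as illustrative only.

```python
import json
import re

ATTR_PORT = re.compile(
    r"\(\*\s*external[^)]*\*\)\s*"      # (* external ... *) attribute
    r"(input|output)\s*"                # port direction
    r"(?:\[(\d+)\s*:\s*(\d+)\]\s*)?"    # optional [msb:lsb] range
    r"(\w+)"                            # port name
)
MODULE = re.compile(r"\bmodule\s+(\w+)")

def build_register_map(verilog_source, base_addr=7200):
    module = MODULE.search(verilog_source).group(1)
    regmap, addr = {}, base_addr
    for direction, msb, lsb, name in ATTR_PORT.findall(verilog_source):
        width = abs(int(msb) - int(lsb)) + 1 if msb else 1
        regmap[f"{module}_{name}"] = {
            "access": "rw" if direction == "input" else "r",
            "addr_width": 0,
            "base_addr": addr,
            "data_width": width,
            "sign": "unsigned",
        }
        addr += 1
    return regmap

src = """
module prng(
  input clk,
  output [31:0] rnda,
  (* external *) input [0:0] run,
  (* external, signal_type="plus-we" *) input [31:0] iva,
  input iva_we
);
"""
print(json.dumps(build_register_map(src), indent=2))
```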
We are still considering a scheme where newad generates a single Verilog file with all modules it has parsed and the generated code. Continuing with the example above, for the sake of simplicity let's say the prng module was included in a top-level file, known as station. The following automatically generated header files can be included inside station.v. First we will look at station_auto_vh which is a headerfile expected to be included with station.v. ``` //station_auto_vh //parse_vfile_yosysstation.v.. //module=prnginstance=prnggvar=Nonegcnt=None //parse_vfile_yosys:station.v./prng.v 'define AUTOMATIC_prng.run(prng_run),.iva_we(prng_iva_we),\.iva(prng_iva),\.. ``` Listing.. Second, addr_map_station.vh is a generated file that strictly includes the address map that was generated by newad. ``` //addr_map_station.vh //parse_vfile_yosysstation.v.. //prng_iwabw:0,base_addr:7203 //prng_iwa(lb_addr['LB_HI:0]==7203) //prng_runbw:0,base_addr:7205 'define HIT_prng_run(lb_addr['LB_HI:0]==7205) 'A JSON file is generated as an API for a top-level software to be able to access the register info. ``` "prng_iwa":{ "access":"rw", "addr_width":0, "base_addr:7203, "data_width":32, "description":"", "sign":"unsigned" }, "prng_run":{ "access":"rw", "addr_width":0, "base_addr":7205, "data_width":1, "description":", "sign":"unsigned" }.. ``` #### Decoder generation Once newad is notified of a top-level Verilog file, and recursively searches paths for the modules utilized in the design, newad builds a tree with per-module register information. It can then generate a bus decoder upon programmer's command. Generated code for the decoder is embedded in a macro which is utilized at the intended module. ## Conclusion We believe large, complex projects need a robust scheme to manage their register space. Languages like Verilog, VHDL, and bSystem Verilog that can't cleanly express such semantics natively demand a tool like Cheby or newad. newad initially emerged out of necessity and evolved into something that is actively supporting several gateware projects. We have seen at least one modern HDL language such as (n)-migen (a Python based DSL for describing gateware), where such register map generation is fully embedded into the language inside a library. We intend to release newad as a stand-alone package for register map automation. Currently, it is embedded inside our framework as a build tool here. We believe such register map automation could be done for VHDL as well. However, VHDL doesn't play very well with header files. newad is an HDL-developer-centric approach to register space management. It focuses on HDL readability and maintainability, and a single-source-of-truth for generating register maps and their documentation.
2302.13870
Effects of the self-propulsion parity on the efficiency of a fuel-consuming active heat engine
We propose a thermodynamically consistent, analytically tractable model of steady-state active heat engines driven by both temperature difference and a constant chemical driving. While the engine follows the dynamics of the Active Ornstein-Uhlenbeck Particle, its self-propulsion stems from the mechanochemical coupling with the fuel consumption dynamics, allowing for both even- and odd-parity self-propulsion forces. Using the standard methods of stochastic thermodynamics, we show that the entropy production of the engine satisfies the conventional Clausius relation, based on which we define the efficiency of the model that is bounded from above by the second law of thermodynamics. Using this framework, we obtain exact expressions for the efficiency at maximum power. The results show that the engine performance has a nonmonotonic dependence on the magnitude of the chemical driving, and that the even-parity (odd-parity) engines perform better when the size of the engine is smaller (larger) than the persistence length of the active particle. We also discuss the existence of a tighter upper bound on the efficiency of the odd-parity engines stemming from the detailed structure of the entropy production.
Yongjae Oh, Yongjoo Baek
2023-02-27T15:20:36Z
http://arxiv.org/abs/2302.13870v2
# Effects of the self-propulsion parity on the efficiency ###### Abstract We propose a thermodynamically consistent, analytically tractable model of steady-state active heat engines driven by both temperature difference and a constant chemical driving. While the engine follows the dynamics of the Active Ornstein-Uhlenbeck Particle, its self-propulsion stems from the mechanochemical coupling with the fuel consumption dynamics, allowing for both even- and odd-parity self-propulsion forces. Using the standard methods of stochastic thermodynamics, we show that the entropy production of the engine satisfies the conventional Clausius relation, based on which we define the efficiency of the model that is bounded from above by the second law of thermodynamics. Using this framework, we obtain exact expressions for the efficiency at maximum power. The results show that the engine performance has a nonmonotonic dependence on the magnitude of the chemical driving, and that the even-parity (odd-parity) engines perform better when the size of the engine is smaller (larger) than the persistence length of the active particle. We also discuss the existence of a tighter upper bound on the efficiency of the odd-parity engines stemming from the detailed structure of the entropy production. ## I Introduction Formulation of the macroscopic irreversibility in terms of the entropy production and its application to the upper bound on the efficiency of heat engines was a cornerstone in the development of thermodynamics. The Clausius relation, which relates energy exchanges with thermal reservoirs to the change of entropy, provides a systematic way to describe the fundamental limitations of how efficient an engine can be, namely the Carnot efficiency, which is the maximum efficiency attainable by a quasistatic process. But the original Clausius relation is applicable only to quasistatic processes, during which systems stay close to equilibrium. Since various natural and artificial engines operate far from equilibrium to achieve finite power, modern thermodynamics has focused on developing a systematic framework for describing the irreversibility of such systems. More recently, with the development of technologies for observing and controlling nanoscale systems, microscopic engines subject to nonnegligible thermal and athermal fluctuations have been constructed. The development of stochastic thermodynamics [1] proved to be a crucial step towards describing such system. The theory provides a systematic method for deriving a host of inequalities describing the irreversibility of a broad range of systems, including the systems driven by finite-time protocols with nonnegligible microscopic fluctuations. An important challenge of this field is to describe the performance of engines composed of a single active particle. An active particle maintains its direction of motion by converting its stored energy into a propulsion force determined by its internal degrees of freedom [2; 3; 4]. Examples can be found in various natural and artificial systems, such as flocks of birds, school of fish, swimming bacteria, Janus particles, and colloidal rollers. They have been extensively studied for their novel far-from-equilibrium collective phenomena such as flocking, phase separation [5; 6; 7], current rectification [8], and formation of topological defects [9; 10] But recent studies have also investigated engines composed of such active particles, namely _active heat engines_[11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. 
Active heat engines are distinct from the ordinary heat engines (_passive heat engines_) in that they do not require temperature difference to operate. Positive work can be extracted from such engines even in isothermal environments by constant protocols imposing a nonequilibrium steady state [12] or by cyclic protocols involving other control parameters [16]. Even periodic manipulation of the potential alone is enough for persistent work extraction [13]. This makes active heat engines an ideal candidate for designing nanomachines or micromachines operating in living systems, whose temperature does not vary much. It is natural to ask how to define efficiency of such engines. According to the standard methods of stochastic thermodynamics, for isothermal active heat engines, the _active work_, _i.e._, the work done by the self-propulsion force, yields an upper bound on the extractable work. Thus, the ratio between the extracted work and the active work, bounded from above by 1, was used as the definition of efficiency in [12; 13; 20]. In case the fuel consumption is tightly coupled to the motion of the active particle, the active work is equivalent to the _chemical work_, as was the case in [12]. We also note that the standard definition of efficiency for _molecular motors_ in the literature [21; 22; 23] is the ratio between the extracted work and the chemical work, whose upper bound is also 1. Meanwhile, active heat engines operating between different temperatures have also been extensively stud ied [14; 17; 18; 19; 24]. Interest in such engines was sparked by the experiment of an _active Stirling engine_, which used swimming bacteria confined in a laser trap to extract work via cyclic protocols [14]. The study reported that the "efficiency" of the engine, defined as the ratio between the extracted work \(W_{\text{out}}\) and the "heat" absorbed by the system from the hot reservoir, defined as \(\Delta E-W_{\text{out}}\) for the change of internal energy \(\Delta E\) during the process, can surpass the Carnot efficiency ("super-Carnot behavior"). This behavior was attributed to the non-Gaussian statistics of the swimming bacteria [14] even in the harmonic optical potential. A theoretical work by [15], which revisited the experiment from the perspective of a steady-state engine simultaneously coupled to two reservoirs, attributed the super-Carnot behavior to the finite correlation time incurred by the swimming bacteria [15]. The same definition of efficiency was also used in [17; 18; 19] for active heat engines operating between different temperatures. While the definition of efficiency used in those studies is a straightforward generalization of the definition used for the efficiency of the Carnot engine, they lack any upper bound stemming from the second law of thermodynamics. This is because they do not distinguish between different components of the "heat" \(\Delta E-W_{\text{out}}\), which is actually a mixture of the proper heat from the reservoir and the chemical work done on the particle. These distinct types of energy flows may contribute differently to the irreversibility of the system, which should be clarified by explicitly modeling the dynamics of chemical degrees of freedom and applying the methods of stochastic thermodynamics. Also related is the issue of quantifying how far from equilibrium active particles are. 
The dynamics of active particles are typically modeled at a phenomenological level, introducing a nonequilibrium driving that breaks the fluctuation-dissipation theorem at the particle level. A notable example is the Active Ornstein-Uhlenbeck Particle (AOUP) [25; 26], which is kept out of equilibrium by making the time scale of friction (assumed to be instantaneous) different from the correlation time of the athermal noise (assumed to be finite). Lacking a full energetic picture of how such nonequilibrium driving arises, the irreversibility of such apparent dynamics [27] is disconnected from the energy flows [28]. While the notion of apparent irreversibility is useful for characterizing whether an effective equilibrium description is possible for the _dynamics_ of active particles, it does not describe the _thermodynamics_ of how much energy is dissipated to maintain certain structures formed by active particles. Notably, the lack of a clear energetic picture of how self-propulsion arises has led to some controversy about the irreversibility of active particles, especially regarding whether the self-propulsion force should change sign under time reversal [29; 30; 31]. Now, the consensus is that both positive and negative signs (called _even_ and _odd_ parities, respectively) are equally possible, with the suitable parity to be determined by the detailed picture of how the self-propulsion arises [32; 33; 28; 34]. For these reasons, many studies have focused on constructing _thermodynamically consistent_ descriptions of active particles [35; 36; 37; 38; 4], which aim for models that are detailed enough to relate the irreversibility of the stochastic dynamics to the energy flows, as done by the Clausius relation in the conventional thermodynamics. Those models introduce a constant chemical driving as the origin of self-propulsion and provide a coherent picture of how such mechanochemical coupling affects the dynamics of both mechanical and chemical degrees of freedom. However, application of this approach to a concrete description of the performance of an active heat engine is still lacking. In this study, employing the _active dimer_ framework used in [38], we construct a thermodynamically consistent, analytically tractable model of a fuel-consuming active heat engine, whose dynamics follows the AOUP with the self-propulsion stemming from a constant chemical driving. Both even-parity and odd-parity self-propulsion are considered, with minimal descriptions of the mechanochemical coupling for each situation, which allows us to properly distinguish between the heat and the chemical work. Then, applying the standard methods of stochastic thermodynamics, we derive the Clausius relations between the entropy production of the engine and the heat flows, which yields a definition of the engine efficiency properly bounded from above by the second law of thermodynamics. This allows us to systematically assess the performance of the active heat engine for self-propulsion force of both parities. Figure 1: An illustration of two different types of AOUPs with even-parity and odd-parity self-propulsion forces. The behavior of each self-propulsion force under time reversal is shown inside the grey box. The rest of the paper is organized as follows. First, we briefly point to the main results of this paper in Sec. II. Then, in Sec. III, we present minimal thermodynamically consistent models of a single AOUP driven by the constant chemical driving for both even-parity and odd-parity self-propulsion. 
We also clarify the energetics of the models and relate them to the entropy productions by the methods of stochastic thermodynamics. Based on these, in Sec. IV, we propose a model for fuel-consuming active heat engines, which are simultaneously coupled to two reservoirs at different temperatures and operate in the steady state. The apparent dynamics of the engine is equivalent to the one studied in [15], but we assess the performance of the engine using a thermodynamically consistent definition of efficiency derived by stochastic thermodynamics. Then, in Sec. V, we apply our definition of efficiency to derive exact expressions for the efficiency at maximum power (EMP). We compare the EMPs of passive engines and active engines of both parities, deducing some design principles for active heat engines from the results. In Sec. VI, we show that the engines driven by the odd-parity propulsion force have a tighter upper bound on their efficiencies than given by the second law of thermodynamic. Finally, we summarize our findings and discuss possible future investigations in Sec. VII. ## II Main results Before going into detail, we briefly point to the main findings of this study. Applying the theoretical approach described in [38], we consider two different thermodynamically consistent models of the AOUP that propels itself by consuming some chemical fuel. The first model features an even-parity self-propulsion force that does not change sign under time reversal. It can be regarded as describing an _active dimer_ described by Eq. (11). Meanwhile, the second model features an odd-parity self-propulsion force that changes sign under time reversal, which is described by Eq. (16). See Fig. 1 for schematic illustrations of these two models. Applying the mechanochemical coupling used in these models, we study the efficiency of the fuel-driven active heat engine described by Eq. (32). Using the standard methods of stochastic thermodynamics, we show that the entropy production of this engine always satisfies the Clausius relation stated in Eq. (39), whose lower bound naturally leads to the expression for the thermodynamically consistent engine efficiency shown in Eq. (43). This efficiency differs from the apparent engine efficiency considered in the previous studies [14; 15], stated in Eq. (44). When the maximum power is achieved, the apparent EMP \(\eta_{\rm appr}^{*}\) is always higher for the active engine (\(\Delta\mu_{1}>0\)) than for its passive counterpart (\(\Delta\mu_{1}=0\)), see Eq. (50) and Fig. 2(a). Meanwhile, using the definition of the engine efficiency we propose, the EMP of the active engine is greater than the passive counterpart only when the chemical driving \(\Delta\mu_{1}\) is sufficiently strong, see Eq. (51) and Fig. 2(b). Finally, the even-parity (odd-parity) active engine achieves a higher EMP when the size scale of the engine (determined by the parameter \(c\)) is small (large) enough, see Eq. (55) and Fig. 2(c). ## III Chemically driven AOUP The AOUP is one of the simplest models of the active particle dynamics. It assumes that the self-propulsion force of the active particle behaves like a noise whose autocorrelation decays exponentially in time, breaking the fluctuation-dissipation theorem (FDT). In one dimension, the dynamics of the AOUP is described by the following equation of motion: \[\dot{X}=-\frac{1}{\Gamma}\,V^{\prime}(X)+v+\xi_{X}. 
\tag{1}\] Here \(X\) denotes the position of the AOUP, \(\Gamma\) the friction coefficient, \(V(X)\) the external potential, \(v\) the self-propulsion, and \(\xi_{X}\) the thermal noise. The variables \(v\) and \(\xi_{X}\) are both Gaussian noises whose statistics satisfy \[\left\langle\xi_{X}(t)\right\rangle =0, \left\langle\xi_{X}(t)\xi_{X}(t^{\prime})\right\rangle =\frac{2T}{\Gamma}\,\delta(t-t^{\prime}), \tag{2a}\] \[\left\langle v(t)\right\rangle =0, \left\langle v(t)v(t^{\prime})\right\rangle =\frac{D_{\mathrm{a}}}{\tau}\mathrm{e}^{-|t-t^{\prime}|/\tau}, \tag{2b}\] where \(T\) is the temperature, and \(D_{\mathrm{a}}\) is the active contribution to the particle's diffusion coefficient. These relations indicate that \(\xi_{X}\) is a _white_ noise, while \(v\) is a _colored_ noise with a characteristic time scale \(\tau\). The disagreement between this noise time scale \(\tau\) and the instantaneous friction force \(-\Gamma\dot{X}\) implied by Eq. (1) leads to the breaking of the FDT, thereby ensuring that the self-propulsion \(v\) drives the particle out of equilibrium. However, the dynamics of \(v\) is modeled only at the phenomenological level, so it is unclear from which dissipation forces the nonequilibrium driving of the system originates from. In this section, we introduce a thermodynamically consistent, yet simple model of the AOUP which incorporates the constant chemical driving as the origin of the self-propulsion \(v\). Towards this aim, we first present a general recipe for a system of Langevin equations that reaches equilibrium. Then we apply the recipe to construct the desired model for a single AOUP, whose energetics can be clearly identified. ### Recipe for an equilibrating Langevin system Our goal is to first construct a system of Langevin equations that reach equilibrium if there is no external driving, and then to add the external driving to keep the system _active_. Keeping this in mind, we consider an overdamped system described by the state vector \(\mathbf{q}\), whose corresponding free energy is given by \(F(\mathbf{q})\). In equilibrium, the system must satisfy the following conditions: (i) the steady-state distribution \(p_{\mathrm{s}}\) must follow the Gibbs measure \(p_{\mathrm{s}}\propto\mathrm{e}^{-F(\mathbf{q})/T}\), where \(T\) is the temperature; (ii) the _irreversible_ probability current \(\mathbf{j}^{\mathrm{irr}}(\mathbf{q})\equiv\frac{1}{2}[\mathbf{J}(\mathbf{q}) +\mathcal{E}\mathbf{J}(\mathcal{E}\mathbf{q})]\) must vanish in the steady state. Here \(\mathcal{E}=\mathrm{diag}(\epsilon_{1},\dots,\epsilon_{N})\) is the time-reversal operator with \(\epsilon_{i}=+1\) (\(-1\)) if the \(i\)-th coordinate corresponds to an even-parity (odd-parity) variable. These conditions ensure that the system satisfies the _detailed balance_ (DB) and thus becomes fully time-reversal symmetric in the steady state. 
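As a brief aside before this construction is carried out, the apparent AOUP dynamics of Eqs. (1) and (2) is easy to simulate directly. The sketch below is only an illustration with arbitrarily chosen parameters and a harmonic potential \(V(X)=KX^{2}/2\), neither of which is prescribed by the text: it generates the self-propulsion \(v\) as an Ornstein-Uhlenbeck process, the same construction adopted later in Eq. (5), and checks that its stationary autocorrelation matches Eq. (2b).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (arbitrary choices for the demonstration).
Gamma, T, K = 1.0, 1.0, 1.0     # friction, temperature, trap stiffness of V(X) = K X^2 / 2
tau, D_a = 2.0, 0.5             # persistence time and active diffusivity of Eq. (2b)
dt, n_steps = 1e-3, 1_000_000

X, v = 0.0, 0.0
vs = np.empty(n_steps)

for i in range(n_steps):
    # Ornstein-Uhlenbeck self-propulsion: <v(t)v(t')> -> (D_a/tau) exp(-|t-t'|/tau).
    v += -v / tau * dt + np.sqrt(2.0 * D_a * dt) / tau * rng.standard_normal()
    # Euler-Maruyama step of Eq. (1) with thermal white noise of strength 2T/Gamma.
    X += (-K * X / Gamma + v) * dt + np.sqrt(2.0 * T / Gamma * dt) * rng.standard_normal()
    vs[i] = v

vs = vs[n_steps // 10:]         # discard the initial transient
for lag_t in (0.0, tau, 2.0 * tau):
    lag = int(round(lag_t / dt))
    c_sim = np.mean(vs[: len(vs) - lag] * vs[lag:]) if lag else np.mean(vs * vs)
    c_thy = D_a / tau * np.exp(-lag_t / tau)
    print(f"lag {lag_t:4.1f}:  simulated {c_sim:.4f}   Eq. (2b) {c_thy:.4f}")
```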
All the conditions listed above are satisfied by a system of Langevin equations (for \(i=1,\dots,N\)) \[\dot{q}_{i}=\sum_{j}\left[-(\Gamma_{ij}+R_{ij})\frac{\partial F} {\partial q_{j}}+T\frac{\partial}{\partial q_{j}}(\Gamma_{ij}+R_{ij})\right]+ \xi_{i},\] \[\left\langle\xi_{i}(t)\right\rangle=0, \left\langle\xi_{i}(t)\,\xi_{j}(t^{\prime})\right\rangle=2T\Gamma_{ij}\, \delta(t-t^{\prime}), \tag{3}\] provided that the _dissipative_ response coefficients \(\Gamma_{ij}\) and the _reactive_ response coefficients \(R_{ij}\) satisfy the _Onsager reciprocal relations_[39; 40] \[\Gamma_{ij}(\mathbf{q})=\epsilon_{i}\epsilon_{j}\Gamma_{ij}( \mathcal{E}\mathbf{q})=\Gamma_{ji}(\mathbf{q}), \tag{4a}\] \[R_{ij}(\mathbf{q})=-\epsilon_{i}\epsilon_{j}R_{ij}(\mathcal{E} \mathbf{q})=-R_{ji}(\mathbf{q}). \tag{4b}\] We note that the flux \(\dot{q}_{i}\) and the dissipative response \(-\Gamma_{ij}\partial F/\partial q_{j}\) (reactive response \(-R_{ij}\partial F/\partial q_{j}\)) must have the opposite signs (same sign) under time reversal. The justification of this recipe is discussed in Appendix A. ### Modeling the Fuel-Driven AOUP How do we apply the above recipe to construct a thermodynamically consistent model with the AOUP dynamics? We start by assuming that the self-propulsion \(v\) is determined by the internal structure of the AOUP, which in itself follows the Ornstein-Uhlenbeck process \[\dot{v}=-\frac{1}{\tau}v+\xi_{v},\] \[\left\langle\xi_{v}(t)\right\rangle=0,\quad\left\langle\xi_{v}(t )\xi_{v}(t^{\prime})\right\rangle=\frac{2D_{\mathrm{a}}}{\tau^{2}}\delta(t-t^{ \prime}). \tag{5}\] One can easily show that \(v\) satisfying the above equations exhibits the exponentially decaying autocorrelation shown in Eq. (2b) in the steady state. But we are yet to decide which internal state determines \(v\). We propose two different scenarios. #### ii.2.1 Even-parity scenario We regard the AOUP as a dimer composed of two different species of monomers, see the dimer consisting of a black and a red particle in Fig. 1(a). For the moment, we disregard the nonequilibrium driving on the dimer. Then we may write the free energy of the system composed of the dimer and the chemical fuel as follows: \[F(X,x,n)=V(X)+\frac{1}{2}kx^{2}+f(n). \tag{6}\] Here \(x\) is the displacement of the red dimer from the dimer center, \(k\) is the spring constant of the harmonic potential binding the monomers together, and \(f(n)\) is the fuel contribution to the free energy, \(n\) denoting the fuel concentration. Note that \(X\), \(x\), and \(n\) all have even parities; thus, all fluxes (forces) are bound to have odd (even) parities, which means that the system can only have dissipative response coefficients. Based on these considerations and the recipe shown above, we propose the following system of Langevin equations that reaches equilibrium: \[\dot{X} =-\frac{1}{\Gamma}V^{\prime}(X)+\frac{\zeta x}{\Gamma}f^{\prime}(n) +\xi_{X}, \tag{7a}\] \[\dot{x} =-\frac{k}{\gamma}x+\xi_{x},\] (7b) \[\dot{n} =-\Gamma_{nn}\,f^{\prime}(n)+\frac{\zeta x}{\Gamma}V^{\prime}(X) +\xi_{n}. \tag{7c}\] These originate from the choice of the response coefficients \(\Gamma_{XX}=1/\Gamma\), \(\Gamma_{Xn}=\Gamma_{nX}=-\zeta x/\Gamma\), \(\Gamma_{nn}=1/\gamma\), where \(\gamma\) and \(\zeta\) are positive coefficients, with the other response coefficients being zero. 
These also imply that the noise components \(\xi_{X}\), \(\xi_{x}\), and \(\xi_{n}\) all have zero means, and their correlations are given by \[\left\langle\xi_{X}(t)\xi_{X}(t^{\prime})\right\rangle =\frac{2T}{\Gamma}\,\delta(t-t^{\prime}), \tag{8a}\] \[\left\langle\xi_{x}(t)\xi_{x}(t^{\prime})\right\rangle =\frac{2T}{\gamma}\,\delta(t-t^{\prime}),\] (8b) \[\left\langle\xi_{X}(t)\xi_{n}(t^{\prime})\right\rangle =-2T\,\frac{\zeta x}{\Gamma}\,\delta(t-t^{\prime}),\] (8c) \[\left\langle\xi_{n}(t)\xi_{n}(t^{\prime})\right\rangle =2T\,\Gamma_{nn}\,\delta(t-t^{\prime}), \tag{8d}\] while the other correlations are all zero. We note that the noise correlation matrix \(\mathbb{M}\), whose elements are given by \(M_{ij}(t-t^{\prime})\equiv\left\langle\xi_{i}(t)\xi_{j}(t^{\prime})\right\rangle\), must be positive semi-definite. This implies \[\frac{\Gamma_{nn}}{\Gamma}-\left(\frac{\zeta x}{\Gamma}\right)^{2}\geq 0, \quad\therefore\Gamma_{nn}\geq\frac{\zeta^{2}x^{2}}{\Gamma}. \tag{9}\] We choose \[\Gamma_{nn}=\frac{\zeta^{2}x^{2}}{\Gamma}, \tag{10}\] so that we minimize the dissipation associated with the fuel dynamics while guaranteeing the inequality (9). The consequence of this choice will be discussed again shortly. With all the response coefficients thus fixed, Eqs. (7), (8), and (10) describe the equilibrium dynamics of a dimer coupled to the fuel at temperature \(T\). Now, to make the apparent dynamics of \(X\) equivalent to the AOUP shown in Eqs. (1) and (2), we fix \(f^{\prime}(n)=\Delta\mu\), which amounts to applying a constant chemical driving to the system. Also, Eqs. (8c), (8d), and (10) imply that the noise \(\xi_{n}\) can be rewritten as \(\xi_{n}(t)=-\zeta x\xi_{X}(t)\). Taking all of these into account, the dynamics shown in Eq. (7) changes to \[\dot{X} =-\frac{1}{\Gamma}V^{\prime}(X)+\frac{\zeta x}{\Gamma}\Delta\mu+ \xi_{X}, \tag{11a}\] \[\dot{x} =-\frac{k}{\gamma}x+\xi_{x},\] (11b) \[\dot{n} =-\zeta x\dot{X}. \tag{11c}\] A comparison of this dynamics with Eqs. (1), (2), and (5) shows that \(X\) follows the AOUP dynamics with \[v=\frac{\zeta x\Delta\mu}{\Gamma},\quad\tau=\frac{\gamma}{k}, \quad D_{\rm a}=\left(\frac{\zeta\Delta\mu}{\Gamma k}\right)^{2}\gamma T. \tag{12}\] Multiplying the friction coefficient \(\Gamma\) to both sides of Eq. (11a), it is clear that \(\zeta x\Delta\mu\) is the self-propulsion force acting on the AOUP. Thus, in this scenario, the parity of the self-propulsion is the same as that of \(x\), which is an even-parity variable. Moreover, Eq. (11c) shows that the fuel consumption is tightly coupled to the motion of the AOUP. This is the consequence of the choice of \(\Gamma_{nn}\) made in Eq. (10); in other words, the physical meaning of Eq. (10) is that the consumed fuel thoroughly contributes to the particle dynamics without any wasteful background reactions. #### ii.2.2 Odd-parity scenario Now we consider case where the two monomers of the dimer are of the same type. The self-propulsion of this dimer does not come from the positions of the monomers, but from the rotation of the screw-like device attached between the monomers, see the dimer with a red screw in the middle shown in Fig. 1(b). Denoting by \(p\) the angular momentum of the screw, we may write the free energy of the system composed of the dimer and the chemical fuel as follows: \[F(X,p,n)=V(X)+\frac{p^{2}}{2m}+f(n). \tag{13}\] Here \(m\) is the moment of inertia of the screw. 
In this scenario, our goal is to build a model where the self-propulsion changes sign under time reversal as does the flux \(\dot{X}\). Such self-propulsion is bound to be a reactive response originating from \(p\), which is the only odd-parity dynamical variable of the system. Based on these considerations and the recipe shown above, we propose the following system of Langevin equations that reaches equilibrium: \[\dot{X} =-\frac{1}{\Gamma}V^{\prime}(X)+\frac{\zeta^{\prime}p}{\Gamma}f^{ \prime}(n)+\xi_{X}, \tag{14a}\] \[\dot{p} =-\frac{\gamma^{\prime}}{m}p+\xi_{p},\] (14b) \[\dot{n} =-\Gamma_{nn}\,f^{\prime}(n)-\frac{\zeta^{\prime}p}{\Gamma}V^{ \prime}(X)+\xi_{c}, \tag{14c}\] These originate from the choice of the response coefficients \(\Gamma_{XX}=1/\Gamma\), \(R_{Xn}=-R_{nX}=-\zeta^{\prime}p/\Gamma\), \(\Gamma_{pp}=\gamma^{\prime}\), where \(\gamma^{\prime}\) and \(\zeta^{\prime}\) are positive coefficients, with the other response coefficients being zero. These also imply that the noise components \(\xi_{X}\), \(\xi_{p}\), and \(\xi_{n}\) all have zero means, and their correlations are given by \[\left\langle\xi_{X}(t)\xi_{X}(t^{\prime})\right\rangle =\frac{2T}{\Gamma}\,\delta(t-t^{\prime}), \tag{15a}\] \[\left\langle\xi_{p}(t)\xi_{p}(t^{\prime})\right\rangle =2T\,\gamma^{\prime}\,\delta(t-t^{\prime}),\] (15b) \[\left\langle\xi_{n}(t)\xi_{n}(t^{\prime})\right\rangle =2T\,\Gamma_{nn}\,\delta(t-t^{\prime}), \tag{15c}\] while the other correlations are all zero. Again, the noise correlation matrix \(\mathbb{M}\) must be positive semi-definite, which in this case requires \(\Gamma_{nn}\geq 0\). As done in the even-parity scenario, we choose \(\Gamma_{nn}\) so that the dissipation associated with the fuel dynamics is minimized. Thus we set \(\Gamma_{nn}=0\), _i.e._, \(\xi_{n}=0\). Then, as in the previous case, we introduce a constant chemical driving by fixing \(f^{\prime}(n)=\Delta\mu\). These change the dynamics shown in Eq. (14) to \[\dot{X} =-\frac{1}{\Gamma}V^{\prime}(X)+\frac{\zeta^{\prime}p}{\Gamma} \Delta\mu+\xi_{X}, \tag{16a}\] \[\dot{p} =-\frac{\gamma^{\prime}}{m}p+\xi_{p},\] (16b) \[\dot{n} =-\frac{\zeta^{\prime}p}{\Gamma}\left(-\Gamma\dot{X}+\zeta^{ \prime}p\Delta\mu+\Gamma\xi_{X}\right), \tag{16c}\] A comparison of this dynamics with Eqs. (1), (2), and (5) shows that \(X\) follows the AOUP dynamics with \[v=\frac{\zeta^{\prime}p\Delta\mu}{\Gamma},\quad\tau=\frac{m}{\gamma^{\prime}},\quad D_{\rm a}=\left(\frac{\zeta^{\prime}\Delta\mu m}{\Gamma}\right)^{2} \frac{T}{\gamma^{\prime}}. \tag{17}\] Multiplying \(\Gamma\) to both sides of Eq. (16a), one can clearly see that \(\zeta^{\prime}p\Delta\mu\), an odd-parity term, plays the role of the self-propulsion force. We also note that Eq. (16c) only contains the fuel consumption associated with the screw rotation, without any background reaction that goes on even when \(p=0\). In this sense, the choice \(\Gamma_{nn}=0\) ensures the tight coupling between the fuel dynamics and the self-propulsion, as was done in the even-parity scenario. ### Energetics of the AOUP Now that we have fully modeled the dynamics of the chemically driven AOUP, we turn to the energetic interpretation of the model for each scenario. #### ii.3.1 Even-parity scenario So far, to derive Langevin equations with the proper mechanochemical coupling that ensures equilibration in the absence of driving, we have treated the fuel concentration \(n\) as a dynamical variable of the system. 
However, with the chemical driving \(\Delta\mu\) now fixed at a constant value, we regard the fuel supply as an external particle reservoir whose intensive properties do not change over time. In this viewpoint, now \(X\) and \(x\) are the only dynamical variables of the system, whose energy can be written as \[E=V(X)+\frac{1}{2}kx^{2}. \tag{18}\] Differentiating both sides with respect to time, we obtain \[\dot{E} =V^{\prime}(X)\circ\dot{X}+kx\circ\dot{x}\] \[=\Gamma(-\dot{X}+\xi_{X})\circ\dot{X}+\gamma(-\dot{x}+\xi_{x}) \circ\dot{x}+\zeta x\Delta\mu\,\dot{X}, \tag{19}\] where \(\circ\) denotes the Stratonovich product [41], and the second equality is derived using Eqs. (11a) and (11b). Among the three terms on the rhs of the second equality, the last term is readily identified as the rate of _chemical work_ \[\dot{W}_{\rm chem}\equiv-\Delta\mu\,\dot{n}=\zeta x\Delta\mu\,\dot{X}, \tag{20}\] where the second equality is found using Eq. (11c). Then, by the _first law of thermodynamics_, the rate of heat absorbed by the AOUP is \[\dot{Q} =\dot{E}-\dot{W}_{\rm chem}\] \[=\Gamma(-\dot{X}+\xi_{X})\circ\dot{X}+\gamma(-\dot{x}+\xi_{x}) \circ\dot{x}. \tag{21}\] We note that this \(\dot{Q}\) can be interpreted as the rate of work done on the AOUP by the reservoir forces \(\Gamma(-\dot{X}+\xi_{X})\) and \(\gamma(-\dot{x}+\xi_{x})\), which is in agreement with the standard microscopic definition of heat used in stochastic thermodynamics [42; 1; 43]. How do the work and heat identified above contribute to the dissipation of the system? To address this question, let us examine the conditional probability of the infinitesimal path \[\mathcal{P}[X+\dot{X}\,dt,x+\dot{x}\,dt,t+dt|X,x,t]\] \[\quad\sim\exp\Bigg{[}-\frac{\Gamma\,dt}{4T}\,\left(\dot{X}+\frac {1}{\Gamma}V^{\prime}(X)-\frac{\zeta x\Delta\mu}{\Gamma}\right)^{2}\] \[\qquad\qquad-\frac{\gamma\,dt}{4T}\,\left(\dot{x}+\frac{kx}{\gamma }\right)^{2}+\frac{dt}{2}\big{(}V^{\prime\prime}(X)+k\big{)}\Bigg{]}, \tag{22}\] where (\(X\), \(x\)) in the above expression is to have the midpoint value between (\(X,x\)) and (\(X+\dot{X}dt\), \(x+\dot{x}dt\)) [44]. The conditional probability of the backward infinitesimal path can then be written as \[\mathcal{P}[X,x,t+dt|X+\dot{X}\,dt,x+\dot{x}\,dt,t]\] \[\quad\sim\exp\Bigg{[}-\frac{\Gamma\,dt}{4T}\,\left(-\dot{X}+ \frac{1}{\Gamma}V^{\prime}(X)-\frac{\zeta x\Delta\mu}{\Gamma}\right)^{2}\] \[\qquad\qquad-\frac{\gamma\,dt}{4T}\,\left(-\dot{x}+\frac{kx}{ \gamma}\right)^{2}+\frac{dt}{2}\big{(}V^{\prime\prime}(X)+k\big{)}\Bigg{]}. \tag{23}\] According to the standard formalism of stochastic thermodynamics [1], the environmental entropy production (EP) associated with the infinitesimal path is given by \[dS_{\rm env} \equiv\ln\frac{\mathcal{P}[X+\dot{X}\,dt,x+\dot{x}\,dt,t+dt|X,x,t]}{ \mathcal{P}[X,x,t+dt|X+\dot{X}\,dt,x+\dot{x}\,dt,t]}\] \[=-\frac{dt}{T}\left[\Gamma(-\dot{X}+\xi_{X})\circ\dot{X}+\gamma(- \dot{x}+\xi_{x})\circ\dot{x}\right]\] \[=-\frac{dQ}{T}, \tag{24}\] where the last equality is obtained by comparison with Eq. (21). Thus, the heat identified in our model satisfies the Clausius relation for the EP. #### iii.2.2 Odd-parity scenario Now we turn to the energetics of the odd-parity scenario. Again, we regard the fuel supply as an external particle reservoir, so the energy of the system can be written as \[E=V(X)+\frac{p^{2}}{2m}. 
\tag{25}\] Differentiating both sides with respect to time, we obtain \[\dot{E} =V^{\prime}(X)\circ\dot{X}+\frac{p}{m}\circ\dot{p}\] \[=\left[-\Gamma\left(\dot{X}-\frac{\zeta^{\prime}p\Delta\mu}{ \Gamma}\right)+\Gamma\xi_{X}\right]\circ\left(\dot{X}-\frac{\zeta^{\prime}p \Delta\mu}{\Gamma}\right)\] \[\quad+\left(-\frac{\gamma^{\prime}}{m}p+\xi_{p}\right)\circ\frac {p}{m}\] \[\quad+\left[-\Gamma\left(\dot{X}-\frac{\zeta^{\prime}p\Delta\mu} {\Gamma}\right)+\Gamma\xi_{X}\right]\frac{\zeta^{\prime}p\Delta\mu}{\Gamma}, \tag{26}\] where the second equality is obtained by using Eqs. (16a) and (16b). In a manner similar to the even-parity scenario, we identify the rates of chemical work \(W_{\rm chem}\) and heat \(Q\) absorbed by the AOUP as \[\dot{W}_{\rm chem} \equiv-\Delta\mu\,\dot{n}\] \[=\left[-\Gamma\left(\dot{X}-\frac{\zeta^{\prime}p\Delta\mu}{ \Gamma}\right)+\Gamma\xi_{X}\right]\frac{\zeta^{\prime}p\Delta\mu}{\Gamma}, \tag{27}\] \[\dot{Q} =\left[-\Gamma\left(\dot{X}-\frac{\zeta^{\prime}p\Delta\mu}{ \Gamma}\right)+\Gamma\xi_{X}\right]\circ\left(\dot{X}-\frac{\zeta^{\prime}p \Delta\mu}{\Gamma}\right)\] \[\quad+\left(-\frac{\gamma^{\prime}}{m}p+\xi_{p}\right)\circ\frac {p}{m}, \tag{28}\] so that the first law of thermodynamics \(\dot{E}=\dot{Q}+\dot{W}_{\rm chem}\) is satisfied. How can we interpret the expressions obtained above? Since the self-propulsion force \(\zeta^{\prime}p\Delta\mu\) changes sign under time reversal, the motion of the AOUP driven only by the force at velocity \(\zeta^{\prime}p\Delta\mu/\Gamma\) is in itself not an irreversible phenomenon, meaning such motion happens without dissipating any energy. Thus, the energy dissipation comes only from the _excess_ velocity of the AOUP, \(\dot{X}-\zeta^{\prime}p\Delta\mu/\Gamma\); the frictional force, for the same reason, should be \(\Gamma(-\dot{X}+\zeta^{\prime}p\Delta\mu/\Gamma)\). This, together with the thermal force \(\Gamma\xi_{X}\), forms the dissipative force applied by the thermal reservoir on the AOUP. Thus, Eq. (28) is a natural expression for the rate of energy dissipation at the reservoir. We note that similar expressions for heat in the presence of odd-parity self-propulsion were also proposed in [32; 36]. Using stochastic thermodynamics, we can more explicitly check that \(\dot{Q}\) identified in Eq. (28) quantifies the rate of energy dissipation. Towards this end, we examine the conditional probability of the infinitesimal path \[\mathcal{P}[X+\dot{X}\,dt,p+\dot{p}\,dt,t+dt|X,p,t]\] \[\sim\exp\Bigg{[}-\frac{\Gamma\,dt}{4T}\,\left(\dot{X}+\frac{1}{ \Gamma}V^{\prime}(X)-\frac{\zeta^{\prime}p\Delta\mu}{\Gamma}\right)^{2}\] \[\qquad-\frac{dt}{4\gamma^{\prime}T}\,\left(\dot{p}+\frac{\gamma^ {\prime}p}{m}\right)^{2}+\frac{dt}{2}\!\left(V^{\prime\prime}(X)+\frac{1}{m} \right)\Bigg{]}, \tag{29}\] where \((X,\,p)\) in the above expression is to have the midpoint value between \((X,p)\) and \((X+\dot{X}dt,\,p+\dot{p}dt)\). The conditional probability of the backward infinitesimal path can then be written as \[\mathcal{P}[X,-p,t+dt|X+\dot{X}\,dt,-p-\dot{p}\,dt,t]\] \[\sim\exp\Bigg{[}-\frac{\Gamma\,dt}{4T}\,\left(-\dot{X}+\frac{1}{ \Gamma}V^{\prime}(X)+\frac{\zeta^{\prime}p\Delta\mu}{\Gamma}\right)^{2}\] \[\qquad-\frac{dt}{4\gamma^{\prime}T}\,\left(\dot{p}-\frac{\gamma^ {\prime}p}{m}\right)^{2}+\frac{dt}{2}\!\left(V^{\prime\prime}(X)+\frac{1}{m} \right)\Bigg{]}, \tag{30}\] where the odd parity of \(p\) has been taken into account. 
Now, the environmental EP associated with the infinitesimal path is obtained as \[dS_{\rm env}\equiv\ln\frac{\mathcal{P}[X+\dot{X}\,dt,p+\dot{p} \,dt,t+dt|X,p,t]}{\mathcal{P}[X,-p,t+dt|X+\dot{X}\,dt,-p-\dot{p}\,dt,t]}\] \[=-\frac{dt}{T}\left[-\Gamma\left(\dot{X}-\frac{\zeta^{\prime}p \Delta\mu}{\Gamma}\right)+\Gamma\xi_{X}\right]\circ\left(\dot{X}-\frac{\zeta ^{\prime}p\Delta\mu}{\Gamma}\right)\] \[\qquad-\frac{dt}{T}\left[\left(-\frac{\gamma^{\prime}}{m}p+\xi_ {p}\right)\circ\frac{p}{m}\right]=-\frac{dQ}{T}, \tag{31}\] where the last equality, _i.e._, the Clausius relation, comes from Eq. (28). This confirms that \(\dot{Q}\) identified in Eq. (28) indeed quantifies the rate of energy dissipation. ## IV Fuel-driven active heat engine So far, we have introduced a thermodynamically consistent model of a single fuel-consuming active particle following the AOUP. In order to describe how such a particle performs as a heat engine, we need to build a model which couples the particle to multiple thermal reservoirs. For this purpose, in this section, (i) we propose a fuel-consuming active heat engine operating between a pair of heat baths at different temperatures, (ii) clarify its energetics, and (iii) identify the engine efficiency bounded from above by the second law of thermodynamics. ### Modeling the fuel-driven active heat engine Our inspiration for the model comes from the _Brownian gyrator_, first proposed in [45] and implemented in [46], which is a minimal model of a microscopic heat engine simultaneously coupled to two thermal reservoirs. Provided that the reservoirs are kept at different temperatures, the engine operates in a nonequilibrium steady state without any time-dependent protocol. A closely related variant, the linear Brownian engine, has also been studied and was shown to exhibit the Curzon-Ahlborn efficiency as the EMP, although it is not endoreversible [47]. Then the model was generalized to a heat engine coupled to two different active baths (or an active bath and an ordinary heat bath), with discussions of when the engine efficiency, defined at an apparent level, surpasses the Carnot efficiency [15]. Employing a similar framework, we place our fuel-consuming engine in a two-dimensional space \((X_{1},\,X_{2})\). For each direction, the engine follows the fuel-consuming AOUP dynamics proposed in Sec. III, _i.e._, Eqs. (11) or (16). The constants characterizing each part of the dynamics, including the temperatures \(T_{1}\geq T_{2}\), may differ from each other; however, for simplicity, we assume that both \(X_{1}\) and \(X_{2}\) are constrained by the same harmonic potential \(V(X_{i})=(K/2)X_{i}^{2}\), and that both directions have the same friction coefficient \(\Gamma\). To couple the dynamics of the two coordinates and extract work from the engine, we also apply a nonconservative force field \(\mathbf{f}^{\text{nc}}=(\lambda_{1}X_{2},\,\lambda_{2}X_{1})\)[47; 15]. 
Then, for the case where all variables have the even parity, the mechanical degrees of freedom \(\mathbf{r}\equiv(X_{1},\,X_{2},\,x_{1},\,x_{2})\) obey the multivariate Langevin equation \[\dot{\mathbf{r}}=-\mathbb{K}\,\mathbf{r}+\xi \tag{32}\] with the \(4\times 4\) matrix \[\mathbb{K}=\begin{bmatrix}K/\Gamma&-\lambda_{1}/\Gamma&-\zeta_{1}\Delta\mu_{1 }/\Gamma&0\\ -\lambda_{2}/\Gamma&K/\Gamma&0&-\zeta_{2}\Delta\mu_{2}/\Gamma\\ 0&0&1/\tau_{1}&0\\ 0&0&0&1/\tau_{2}\end{bmatrix} \tag{33}\] and the noise satisfying \(\langle\xi(t)\rangle=0\) and \(\langle\xi(t)\xi^{\mathsf{T}}(t^{\prime})\rangle=2\mathbb{D}\,\delta(t-t^{ \prime})\), where \[\mathbb{D}\equiv\text{diag}\left(\frac{T_{1}}{\Gamma},\frac{T_{2}}{\Gamma}, \frac{T_{1}}{\gamma_{1}},\frac{T_{2}}{\gamma_{2}}\right). \tag{34}\] To change the dynamics of \(X_{i}\) to that of an odd-parity AOUP, one can simply replace the variables and coefficients like \(x_{i}\to p_{i}\), \(k_{i}\rightarrow\frac{1}{m_{i}}\), \(\gamma_{i}\rightarrow\frac{1}{\gamma_{i}}\) and \(\zeta_{i}\rightarrow\zeta_{i}^{\prime}\). Meanwhile, the dynamics of the chemical degrees of freedom \(c_{1}\) and \(c_{2}\) have the same form as Eqs. (11c) or (16c) (after adding suitable subscript indices) depending on the parity of self-propulsion. Provided that \(\mathbf{f}_{\text{nc}}\) satisfies \(\lambda_{1}\lambda_{2}<K^{2}\), the mechanical degrees of freedom are stable and converge to a unique steady state that can be analytically obtained. Since Eq. (32) simply defines a multi-dimensional Ornstein-Uhlenbeck process, \(\mathbf{r}\) exhibits Gaussian statistics with zero mean in the steady state, so calculating its second moments fully determines the distribution. Calculations of those second moments are detailed in Appendix B.1. We also note that, when \(\mathbf{r}\) attains the steady state, the fuel concentrations \(c_{1}\) and \(c_{2}\) keep changing at constant rates, which can be calculated from the steady-state solution. ### Energetics of the fuel-driven active heat engine Following the logic discussed in Sec. III.3, we can identify the work and the heat components of the energy flows generated by the active heat engine we have defined above. First of all, the work \(W_{\text{out},i}\) extracted by the \(i\)-th component of the external force \(\mathbf{f}_{\text{nc}}\) satisfies \[\dot{W}_{\text{out},1}=-\lambda_{1}X_{2}\dot{X}_{1},\quad\dot{W}_{\text{out},2 }=-\lambda_{2}X_{1}\dot{X}_{2}, \tag{35}\] with the total extracted work given by \(W_{\text{out}}=W_{\text{out},1}+W_{\text{out},2}\). Meanwhile, the chemical work done along the \(i\)-th component can be written as \[\dot{W}_{\text{chem},i}=-\Delta\mu_{i}\,\dot{n}_{i}\quad\text{for $i=1$, $2$}, \tag{36}\] with the total chemical work \(W_{\text{chem}}=W_{\text{chem},1}+W_{\text{chem},2}\). Now, using the above and the first law of thermodynamics, the rate of heat absorbed by the engine through the \(i\)-th component is obtained as \[\dot{Q}_{i}=\dot{E}_{i}+\dot{W}_{\text{out},i}-\dot{W}_{\text{chem},i}, \tag{37}\] where \(E_{i}\) is the mechanical energy associated with the \(i\)-th component. These imply that \(\dot{W}_{\text{chem},i}\) and \(\dot{Q}_{i}\) are respectively given by equations of the same form as Eqs. (20) and (21) for the even-parity case and Eqs. (27) and (28) for the odd-parity case, with matching subscript indices added. Then the rate of total heat absorbed by the engine is finally obtained as \(\dot{Q}=\dot{Q}_{1}+\dot{Q}_{2}\). 
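Since Eq. (32) is a linear (multivariate Ornstein-Uhlenbeck) process, its Gaussian steady state is fixed by the stationary covariance matrix, and one standard way to obtain it numerically is to solve the corresponding Lyapunov equation. The sketch below does this for the even-parity engine; the parameter values are arbitrary illustrations rather than the ones used in the paper's figures, and the expression used for the mean extracted power simply averages Eq. (35) over the drift of Eq. (32), which is legitimate here because the noise matrix in Eq. (34) is diagonal.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative even-parity engine parameters (not the paper's figure values).
Gamma, K = 1.0, 1.0
T1, T2 = 2.0, 1.0
tau1, tau2 = 5.0, 5.0
gamma1, gamma2 = 1.0, 1.0
zeta1, zeta2, dmu1, dmu2 = 1.0, 1.0, 1.0, 0.0
lam1, lam2 = 0.6, 0.4                 # stability requires lam1 * lam2 < K**2

# Drift matrix of Eq. (33) and diffusion matrix of Eq. (34); state r = (X1, X2, x1, x2).
Kmat = np.array([
    [K / Gamma, -lam1 / Gamma, -zeta1 * dmu1 / Gamma, 0.0],
    [-lam2 / Gamma, K / Gamma, 0.0, -zeta2 * dmu2 / Gamma],
    [0.0, 0.0, 1.0 / tau1, 0.0],
    [0.0, 0.0, 0.0, 1.0 / tau2],
])
D = np.diag([T1 / Gamma, T2 / Gamma, T1 / gamma1, T2 / gamma2])

# Stationary covariance Sigma solves  Kmat @ Sigma + Sigma @ Kmat.T = 2 D.
Sigma = solve_continuous_lyapunov(Kmat, 2.0 * D)

# Mean extracted power, Eq. (35); with diagonal noise, <r_j dr_i/dt> = -(Kmat @ Sigma)[i, j].
KS = Kmat @ Sigma
P_out = lam1 * KS[0, 1] + lam2 * KS[1, 0]

print("stationary covariance matrix:\n", np.round(Sigma, 4))
print("mean extracted power <dW_out/dt> =", round(float(P_out), 4))
```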
The balance between all energy flows in the active heat engine identified above are schematically illustrated in Fig. 3. ### Thermodynamically consistent engine efficiency Now we discuss how the energy flows identified above are bounded by the second law of thermodynamics. As was done in Sec. III.3, we can use the stochastic thermodynamics to relate the energy flows to the EP. More explicitly, in a manner analogous to Eqs. (24) and (31), we identify the rate of environmental EP \[dS_{\rm env}\equiv\ln\frac{\mathcal{P}[\mathbf{r}+\dot{\mathbf{r}}\,dt,\,t+dt| \mathbf{r},\,t]}{\mathcal{P}[\mathcal{E}\mathbf{r},\,t+dt|\mathcal{E}(\mathbf{ r}+\dot{\mathbf{r}}\,dt),\,t]}. \tag{38}\] Then, after some algebra analogous to Eqs. (22), (23), (24), (29), (30), and (31), we can easily obtain the Clausius formula \[\dot{S}_{\rm env}=-\frac{1}{T_{1}}\dot{Q}_{1}-\frac{1}{T_{2}}\dot{Q}_{2}. \tag{39}\] The rate of total EP is obtained by adding the rate of change of the Shannon entropy of the engine [1]. But, as long as we focus on the steady state, the average Shannon entropy of the engine stays constant. In that case, the total EP is on average equal to the environmental EP. Then, the Integral Fluctuation Theorem (IFT) implies the second-law inequality \(\langle\dot{S}_{\rm env}\rangle\geq 0\), where \(\langle\cdot\rangle\) denotes the average with respect to the steady-state distribution. Using Eqs. (37) and (39) in this inequality, we obtain \[\left\langle-\frac{\dot{Q}_{1}}{T_{1}}-\frac{\dot{Q}_{2}}{T_{2}} \right\rangle=\left\langle-\frac{\dot{Q}_{1}}{T_{1}}-\frac{\dot{W}_{\rm out} -\dot{Q}_{1}-\dot{W}_{\rm chem}}{T_{2}}\right\rangle\] \[\quad=-\frac{1}{T_{2}}\Big{[}\langle\dot{W}_{\rm out}\rangle- \eta_{\rm C}\langle\dot{Q}_{1}\rangle-\langle\dot{W}_{\rm chem}\rangle\Big{]} \geq 0, \tag{40}\] where \[\eta_{\rm C}\equiv 1-\frac{T_{2}}{T_{1}} \tag{41}\] is the Carnot efficiency. We note that \[\eta_{\rm C}\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem}\rangle=\langle \dot{W}_{\rm out}\rangle+T_{2}\langle\dot{S}_{\rm env}\rangle\geq\langle\dot {W}_{\rm out}\rangle, \tag{42}\] so as long as the engine does operate as an engine (\(\langle\dot{W}_{\rm out}\rangle>0\)), we have \(\eta_{\rm C}\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem}\rangle>0\)[48]. Using this fact in Eq. (40), we identify a measure of engine performance bounded from above by the second law of thermodynamics: \[\tilde{\eta}\equiv\frac{\langle\dot{W}_{\rm out}\rangle}{\eta_{\rm C} \langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem}\rangle}\leq 1. \tag{43}\] This measure, henceforth simply referred to as _efficiency_, naturally incorporates both heat injection and fuel consumption components of energy flows. Thus, this quantity is properly grounded upon the entire picture of how the engine converts the supplied energy into useful work. We stress that the upper bound on the efficiency \(\tilde{\eta}\) in Eq. (43) has been derived using only the laws of thermodynamics and two assumptions: (i) the engine extracts positive work in the steady state and (ii) the entropy is produced via contact with a pair of thermal reservoirs and satisfies the Clausius relation. Given these, the inequality is valid regardless of the details of the engine. Moreover, if the engine extracts work in a periodic state driven by a cyclic protocol, we can simply replace the heat and work rates in \(\tilde{\eta}\) with corresponding quantities accumulated over a single period and still get the same inequality. 
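The averages entering Eq. (43) can also be estimated directly from a simulated trajectory, accumulating the Stratonovich work increments of Eqs. (35), (36), and (20) and obtaining the heat from the first law, Eq. (37). The sketch below does so for the even-parity engine with the same illustrative parameters as in the previous sketch; the midpoint Euler-Maruyama discretization and the parameter values are our own choices, and if a parameter set gave \(\langle\dot{W}_{\rm out}\rangle<0\) the device would not operate as an engine and \(\tilde{\eta}\) would not be a meaningful efficiency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative even-parity parameters as in the Lyapunov sketch above.
Gamma, K = 1.0, 1.0
T1, T2 = 2.0, 1.0
tau1, tau2, gamma1, gamma2 = 5.0, 5.0, 1.0, 1.0
zeta1, zeta2, dmu1, dmu2 = 1.0, 1.0, 1.0, 0.0
lam1, lam2 = 0.6, 0.4
k1, k2 = gamma1 / tau1, gamma2 / tau2            # tau_i = gamma_i / k_i

dt, n_steps = 1e-3, 4_000_000
X1 = X2 = x1 = x2 = 0.0
W_out = W_chem = W_out1 = W_chem1 = 0.0


def E1(X, x):
    """Mechanical energy stored in component 1."""
    return 0.5 * K * X**2 + 0.5 * k1 * x**2


E1_start = E1(X1, x1)
for _ in range(n_steps):
    n1, n2, n3, n4 = rng.standard_normal(4)
    dX1 = (-K * X1 + lam1 * X2 + zeta1 * dmu1 * x1) / Gamma * dt + np.sqrt(2 * T1 * dt / Gamma) * n1
    dX2 = (-K * X2 + lam2 * X1 + zeta2 * dmu2 * x2) / Gamma * dt + np.sqrt(2 * T2 * dt / Gamma) * n2
    dx1 = -x1 / tau1 * dt + np.sqrt(2 * T1 * dt / gamma1) * n3
    dx2 = -x2 / tau2 * dt + np.sqrt(2 * T2 * dt / gamma2) * n4

    # Stratonovich (midpoint) increments: extracted work, Eq. (35); chemical work, Eqs. (36), (20).
    dWo1 = -lam1 * (X2 + 0.5 * dX2) * dX1
    dWo2 = -lam2 * (X1 + 0.5 * dX1) * dX2
    dWc1 = zeta1 * dmu1 * (x1 + 0.5 * dx1) * dX1
    dWc2 = zeta2 * dmu2 * (x2 + 0.5 * dx2) * dX2
    W_out += dWo1 + dWo2
    W_chem += dWc1 + dWc2
    W_out1 += dWo1
    W_chem1 += dWc1
    X1, X2, x1, x2 = X1 + dX1, X2 + dX2, x1 + dx1, x2 + dx2

t_tot = n_steps * dt
Q1 = (E1(X1, x1) - E1_start) + W_out1 - W_chem1      # first law, Eq. (37)
eta_C = 1.0 - T2 / T1
eta_tilde = W_out / (eta_C * Q1 + W_chem)            # Eq. (43); the time factors cancel

print(f"<dW_out/dt>  ~ {W_out / t_tot: .4f}")
print(f"<dQ_1/dt>    ~ {Q1 / t_tot: .4f}")
print(f"<dW_chem/dt> ~ {W_chem / t_tot: .4f}")
print(f"eta_tilde    ~ {eta_tilde: .3f}   (bounded by 1 when <dW_out/dt> > 0)")
```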
Thus the above definition of \(\tilde{\eta}\) is generally applicable to a broad range of engines. Some comparisons of \(\tilde{\eta}\) with the discussions of efficiency in the literature are in order. When there is no chemical driving (\(\Delta\mu=0\)), Eq. (43) simply reduces to the Carnot upper bound on the efficiency of a heat engine. On the other hand, if the temperatures of the two baths are equal (\(\eta_{\rm C}=0\)), then \(\tilde{\eta}=\langle\dot{W}_{\rm out}\rangle/\langle\dot{W}_{\rm chem}\rangle\leq 1\). This definition of efficiency has been used for isothermal active engines [12], including molecular motors [21; 22; 23]. Thus \(\tilde{\eta}\) defined in Eq. (43) interpolates between the definitions of efficiency for the ordinary heat engines and the molecular motors. More recently, a similar measure of efficiency has been proposed in [24], which in place of \(W_{\rm chem}\) uses an information-theoretical quantity describing the change of the probability distribution of the engine due to the nonzero \(\Delta\mu\). When most of \(W_{\rm chem}\) is wasted on the background chemical reactions irrelevant to the dynamics of the engine, their definition of the efficiency, which focuses on the part of \(W_{\rm chem}\) that affects the engine dynamics, is a more useful measure of the engine's performance. But since our models assume tight coupling between the fuel consumption and the self-propulsion of the engine, as far as such engines are concerned, our definition of \(\tilde{\eta}\) is similarly useful.

Figure 3: The energetics of the fuel-driven active heat engine.

### Thermodynamics of the apparent efficiency

Now that we have a clear energetic picture of the active heat engine, it is natural to ask how the previously defined notion of the apparent efficiency [14; 15] exceeds the Carnot efficiency \(\eta_{\rm C}\) without breaking the second law. We observe that, using our framework, the apparent efficiency discussed in the previous literature can be recast in a thermodynamically consistent manner as follows: \[\eta_{\rm appr}\equiv\frac{\langle\dot{W}_{\rm out}\rangle}{\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem,1}\rangle}. \tag{44}\] We note that this quantity is always equal to \(1-\lambda_{2}/\lambda_{1}\) in our model, as derived in [15]. In this definition, the denominator corresponds to the rate of energy injection from both thermal and fuel reservoirs to the position coordinate \(X_{1}\). This definition might be a practical choice when we are unable to distinguish the thermal and the chemical parts of the injected energy. For example, in [14; 15], the AOUP arises from the active bath without any accessible information about the fuel dynamics, so \(\dot{W}_{\rm chem,1}\) cannot be separated from \(\dot{Q}_{1}\). An upper bound on \(\eta_{\rm appr}\) is easily obtained using the inequality stated in Eq. (42): \[\eta_{\rm appr}\leq\eta_{\rm C}+\frac{(1-\eta_{\rm C})\langle\dot{W}_{\rm chem,1}\rangle+\langle\dot{W}_{\rm chem,2}\rangle}{\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem,1}\rangle}. \tag{45}\] This inequality clearly shows that positive chemical works (\(\langle\dot{W}_{\rm chem,i}\rangle>0\)) extend the thermodynamically allowed range of \(\eta_{\rm appr}\) beyond the Carnot efficiency. Meanwhile, after some manipulations, \(\eta_{\rm appr}>\eta_{\rm C}\) leads to \[-\frac{\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem,1}\rangle}{T_{1}}-\frac{\langle\dot{Q}_{2}\rangle+\langle\dot{W}_{\rm chem,2}\rangle}{T_{2}}<0.
\tag{46}\] If the chemical works are all zero, this condition violates the second law of thermodynamics expressed in Eq. (40); the apparent super-Carnot behavior \(\eta_{\rm appr}>\eta_{\rm C}\) requires the presence of positive chemical works. ## V Efficiency at maximum power Optimizing the design an engine is of theoretical and practical interest, but the aim of optimization should first be clarified. In this regard, the notion of EMP has been studied extensively due to the following reasons. First, it quantifies the efficiency achieved by an engine when it is most "useful". Second, some universal results regarding the EMP have been reported for a broad range of ordinary heat engines, especially the Curzon-Ahlborn efficiency \(\eta_{\rm CA}\equiv 1-\sqrt{T_{2}/T_{1}}\)[47; 49; 50; 51]. For the convenience of analysis, we change the external force parameters \(\lambda_{1}\) and \(\lambda_{2}\) to \(r\equiv\lambda_{1}/\lambda_{2}\) and \(c\equiv\lambda_{1}\lambda_{2}\). Then, for reasons to be clarified below, we search for \(r\) maximizing the power for a fixed value of \(c\). If only one of the two variables \(X_{1}\) and \(X_{2}\) is driven by the fuel, the condition for the maximum power can be analytically obtained with ease. In this section, we only present the results and compare the EMPs achieved by the passive heat engine and the active heat engine with even and odd-parity self-propulsion. For detailed derivations, see Appendix B.2. ### Case \(\Delta\mu_{1}>0\), \(\Delta\mu_{2}=0\) #### v.1.1 Active vs. passive heat engines We first consider the case with \(\Delta\mu_{1}>0\) and \(\Delta\mu_{2}=0\). For a fixed value of \(c\), the power is maximized at \(r=r^{*}\) with \[r^{*}=\frac{a_{1}}{\sqrt{1-\eta_{\rm C}}}, \tag{47}\] where \[a_{1}\equiv\sqrt{1+\frac{\Gamma\tau_{1}^{2}\,\zeta_{1}^{2}\,\Delta\mu_{1}^{2}} {\gamma_{1}[(\Gamma+K\tau_{1})^{2}-\tau_{1}^{2}c]}} \tag{48}\] for the even-parity case, and the corresponding expression for the odd-parity case can be obtained by the mapping \(\gamma_{1}\to 1/\gamma_{1}^{\prime}\) and \(\zeta_{1}\to\zeta_{1}^{\prime}\). Then the value of the maximum power (MP) is \[P^{*}=\left.\langle\dot{W}_{\rm out}\rangle\right|_{r=r^{*}}=\frac{T_{2}\,c}{ 2\Gamma K}\Big{(}\frac{a_{1}}{\sqrt{1-\eta_{\rm C}}}-1\Big{)}^{2}. \tag{49}\] Since \(\eta_{\rm appr}=1-\lambda_{2}/\lambda_{1}=1-1/r\), the apparent EMP is given by \[\eta_{\rm appr}^{*}=1-\frac{1}{a_{1}}\sqrt{1-\eta_{\rm C}}, \tag{50}\] which reduces to the Curzon-Ahlborn efficiency in the passive limit \(\Delta\mu_{1}\to 0\), which corresponds to \(a_{1}\to 1\). We observe that both the MP and the apparent EMP monotonically increase as functions of \(a_{1}\), which in turn monotonically increases with \(\Delta\mu_{1}\). Thus, both the MP and the apparent EMP of the active heat engine are larger than their passive counterparts. As shown in Fig. 2(a), this apparent EMP can even surpass the Carnot efficiency \(\eta_{\rm C}\). Meanwhile, the thermodynamically consistent EMP is obtained as \[\tilde{\eta}^{*}=\left[\frac{\eta_{\rm C}}{1-\frac{\sqrt{1-\eta_{\rm C}}}{a_{ 1}}}+b_{1}\frac{a_{1}^{2}-1}{\big{(}\frac{a_{1}}{\sqrt{1-\eta_{\rm C}}}-1 \big{)}^{2}}\right]^{-1}, \tag{51}\] where \[b_{1}=b_{\rm even,1}\equiv\frac{2K(\Gamma+K\tau_{1})}{\tau_{1}c} \tag{52}\] for the even-parity engine and \[b_{1}=b_{\rm odd,1}\equiv\frac{2K[\Gamma K+(K^{2}-c)\tau_{1}]}{\Gamma c} \tag{53}\] for the odd-parity engine. Taking \(\Delta\mu_{1}\to 0\) (_i.e._, \(a_{1}\to 1\)) in Eq. 
(51), we obtain \[\tilde{\eta}^{*}=\frac{1-\sqrt{1-\eta_{\rm C}}}{\eta_{\rm C}}=\frac{\eta_{\rm CA }}{\eta_{\rm C}}, \tag{54}\] which yields the Curzon-Ahlborn efficiency \(\eta_{\rm CA}\) in agreement with [15]. Now we discuss when the EMP of the active heat engine surpasses that of the passive counterpart. Comparing the passive EMP \(\eta_{\rm CA}/\eta_{\rm C}\) obtained above with \(\tilde{\eta}^{*}\), we obtain an inequality involving a quadratic polynomial of \(a_{1}\). Noting that \(a_{1}\geq 1\) and that \(b_{1}\geq 2\) due to the stability condition \(c<K^{2}\), there are three possible scenarios. First, when \(\eta_{\rm C}\leq\frac{b_{1}(b_{1}-2)}{(b_{1}-1)^{2}}\) (small temperature difference), the active EMP cannot surpass the passive EMP for any value of \(\Delta\mu_{1}\), see the even-parity case with \(T_{1}=2.0\) in Fig. 4(a) and the odd-parity case with \(T_{1}=15\) in Fig. 4(b). Second, when \(\frac{b_{1}(b_{1}-2)}{(b_{1}-1)^{2}}<\eta_{\rm C}<\frac{2b_{1}}{\sqrt{b_{1}^{2} +1+b_{1}}}\) (intermediate temperature difference), the active EMP is smaller than the passive EMP for small but positive \(\Delta\mu_{1}\). However, the former eventually surpasses the latter as \(\Delta\mu_{1}\) becomes larger, see the even-parity case with \(T_{1}=7.5\) in Fig. 4(a) and the odd-parity case with \(T_{1}=35\) in Fig. 4(b). Third, when \(\eta_{\rm C}\geq\frac{2b_{1}}{\sqrt{b_{1}^{2}+1+b_{1}}}\) (large temperature difference), the active EMP is larger than the passive EMP for any positive \(\Delta\mu_{1}\), see the even-parity case with \(T_{1}=30\) in Fig. 4(a) and the odd-parity case with \(T_{1}=95\) in Fig. 4(b). As clearly shown in Fig. 4, the active EMP \(\tilde{\eta}^{*}\) can exhibit a nonmonotonic dependence on the chemical driving \(\Delta\mu_{1}\). This is an intriguing feature which illustrates that the behavior of a far-from-equilibrium system can be vastly different from a nonequilibrium system in the linear response regime. #### iv.2.2 Even-parity vs. odd-parity engines From Eqs. (51), (52), and (53), we obtain \[\frac{1}{\tilde{\eta}^{*}_{\rm even}}-\frac{1}{\tilde{\eta}^{*}_{ \rm odd}} = \left(\text{Positive constant}\right) \tag{55}\] \[\times\left[\Gamma^{2}-(K^{2}-c)\tau_{1}^{2}\right],\] so the sign of the lhs is solely determined by the dimensionless parameter \[\alpha\equiv\frac{\Gamma}{\tau_{1}\sqrt{K^{2}-c}}. \tag{56}\] The even-parity (odd-parity) engine achieves the higher EMP when \(\alpha<1\) (\(\alpha>1\)). This is illustrated in Fig. 2(c) as the parameter \(c\) is varied. What is the physical significance of \(\alpha\)? Examining the exponential decays of the two-time correlation functions, we identify the three relaxation time scales: \[\tau_{+}\equiv\frac{\Gamma}{K+\sqrt{c}},\quad\tau_{-}\equiv\frac{\Gamma}{K- \sqrt{c}},\quad\tau_{1}. \tag{57}\] Here \(\tau_{+}\) and \(\tau_{-}\) indicate the relaxation time scale of the AOUP within the spatial domain of the engine, while \(\tau_{1}\) is the persistence time scale of the orientation of self-propulsion. Thus, given the typical velocity \(v\) of the AOUP, the above time scales can be converted to the length scales \(l_{\pm}\equiv v\tau_{\pm}\) reflecting the size of the engine and the length scale \(l_{1}\equiv v\tau_{1}\) corresponding to the persistence length. This shows that maximizing the power for a fixed \(c\) amounts to optimizing the engine for a given size. 
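Before returning to the physical interpretation of \(\alpha\), note that the EMP expressions above are closed-form and easy to tabulate. The sketch below evaluates the apparent EMP of Eq. (50) and the EMP of Eq. (51) for the even-parity engine as the chemical driving \(\Delta\mu_{1}\) grows, and prints the passive reference values \(\eta_{\rm CA}\) and \(\eta_{\rm CA}/\eta_{\rm C}\). The values \(c=3.5\), \(\tau_{1}=5\), \(T_{2}=1\), and \(T_{1}=7.5\) echo the setting of Fig. 4(a), while \(K\), \(\Gamma\), \(\gamma_{1}\), and \(\zeta_{1}\) are arbitrary assumptions of ours, so the numbers are not expected to reproduce the figure.

```python
import numpy as np

# Illustrative parameters; c, tau1, T2, T1 follow Fig. 4(a), the rest are assumed.
Gamma, K, c = 1.0, 2.0, 3.5          # requires c < K**2 for stability
tau1, gamma1, zeta1 = 5.0, 1.0, 1.0
T1, T2 = 7.5, 1.0
eta_C = 1.0 - T2 / T1
eta_CA = 1.0 - np.sqrt(T2 / T1)


def a1(dmu1):
    # Eq. (48), even-parity case.
    return np.sqrt(1.0 + Gamma * tau1**2 * zeta1**2 * dmu1**2
                   / (gamma1 * ((Gamma + K * tau1)**2 - tau1**2 * c)))


b1_even = 2.0 * K * (Gamma + K * tau1) / (tau1 * c)          # Eq. (52)


def emp(dmu1):
    a = a1(dmu1)
    eta_appr = 1.0 - np.sqrt(1.0 - eta_C) / a                # Eq. (50)
    denom = (eta_C / (1.0 - np.sqrt(1.0 - eta_C) / a)
             + b1_even * (a**2 - 1.0) / (a / np.sqrt(1.0 - eta_C) - 1.0)**2)
    return eta_appr, 1.0 / denom                             # Eq. (51)


print(f"passive limit: eta_CA = {eta_CA:.3f}, eta_CA/eta_C = {eta_CA/eta_C:.3f}")
for dmu1 in (0.0, 0.5, 1.0, 2.0, 5.0, 10.0):
    eta_appr, eta_tilde = emp(dmu1)
    print(f"dmu1 = {dmu1:5.1f}:  apparent EMP = {eta_appr:.3f},  EMP (Eq. 51) = {eta_tilde:.3f}")
```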
In terms of these length scales, we can then write \[\alpha=\frac{\sqrt{l_{+}l_{-}}}{l_{1}}=\frac{\left(\text{Length scale of the engine}\right)}{\left(\text{Persistence length}\right)}, \tag{58}\] which means that \(\alpha\) quantifies the relative size of the engine with respect to the persistence length of the AOUP. With this interpretation, we conclude that the even-parity (odd-parity) engine achieves the higher EMP when the engine is smaller (larger) than the persistence length of the AOUP. This can be intuitively understood in terms of the fuel consumption of the AOUP as follows. While the even-parity engine consumes fuel as the AOUP moves in space (see Eq. (11c)), the odd-parity engine consumes fuel even when the AOUP does not move in space (see Eq. (16c)). When the engine is smaller than the persistence length, the AOUP tends to get stuck to the engine boundary with very slow actual motion in space. If this happens, the odd-parity engine spends much more fuel than the even-parity engine does, so the even-parity engine is more efficient. In contrast, when the engine is larger than the persistence length, the AOUP tends to move within the engine rapidly. If this motion is fast enough, the even-parity engine rapidly consumes fuel, whereas the fuel consumption rate of the odd-parity engine saturates (note that, for the single AOUP, Eq. (16c) can be rewritten as \(\dot{n}=-\zeta^{\prime}pV^{\prime}(X)/\Gamma\), which does not explicitly involve \(\dot{X}\)). Thus, in this case, the odd-parity engine is more efficient than the even-parity engine.

Figure 4: EMP of the active heat engine for various values of \(T_{1}\) at fixed \(T_{2}=1\) for (a) even-parity and (b) odd-parity self-propulsion. We used \(c=3.5\), \(\tau_{1}=5\), and \(\Delta\mu_{2}=0\). Each dashed horizontal line indicates the passive EMP \(\eta_{\rm CA}/\eta_{\rm C}\) at the corresponding value of \(T_{1}\).

### Case \(\Delta\mu_{1}=0\), \(\Delta\mu_{2}>0\)

Now we consider the case where the chemical driving is applied only to \(X_{2}\), in contact with the cold thermal reservoir at temperature \(T_{2}\). For a fixed \(c\), the optimal value of \(r\) is given by \[r^{*}=\frac{1}{a_{2}\sqrt{1-\eta_{\rm C}}}, \tag{59}\] where \[a_{2}=\sqrt{1+\frac{\Gamma\,\tau_{2}^{2}\,\zeta_{2}^{2}\,\Delta\mu_{2}^{2}}{\gamma_{2}[(\Gamma+K\tau_{2})^{2}-\tau_{2}^{2}c]}} \tag{60}\] for the even-parity case, and the corresponding expression for the odd-parity case can be obtained by the mapping \(\gamma_{2}\to 1/\gamma_{2}^{\prime}\) and \(\zeta_{2}\to\zeta_{2}^{\prime}\). See Appendix B.2 for a detailed derivation. Using this result, the MP is obtained as \[P^{*}=\frac{T_{2}\,c}{2K\Gamma}\left(\frac{1}{\sqrt{1-\eta_{\rm C}}}-a_{2}\right)^{2}, \tag{61}\] and the apparent EMP is given by \[\eta_{\rm appr}^{*}=1-\frac{1}{r^{*}}=1-a_{2}\sqrt{1-\eta_{\rm C}}. \tag{62}\] We may compare this expression with \(\eta_{\rm appr}^{*}\) for the case \(\Delta\mu_{1}>0\), \(\Delta\mu_{2}=0\) shown in Eq. (50). Since both \(a_{1}\) and \(a_{2}\) cannot be less than 1, \(\eta_{\rm appr}^{*}\) obtained in Eq. (50) is never less than \(\eta_{\rm CA}\), while \(\eta_{\rm appr}^{*}\) obtained above is never greater than \(\eta_{\rm CA}\). Moreover, by increasing \(a_{2}\) via increasing \(\Delta\mu_{2}\), we see that the MP decreases to zero and then increases again, reaching the minimum for \(a_{2}=1/\sqrt{1-\eta_{\rm C}}\), exactly where \(\eta_{\rm appr}^{*}\) obtained above changes sign. This indicates that the denominator of \(\eta_{\rm appr}\) in Eq.
(44), \(\langle\dot{Q}_{1}\rangle\), becomes negative when MP is achieved for \(a_{2}>1/\sqrt{1-\eta_{\rm C}}\) (note that \(\langle\dot{W}_{\rm chem,1}\rangle=0\) here). In this regime, the chemical driving on \(X_{2}\) is so strong that the engine operates by dissipating the heat even into the hot reservoir. Thus, here \(\eta_{\rm appr}\) is a poor measure of the engine's efficiency. In contrast, the thermodynamically consistent efficiency \(\tilde{\eta}\) defined in Eq. (43) is still positive and bounded by 1 as the positivity of its denominator is guaranteed by Eq. (42). The behavior of the EMP \(\tilde{\eta}^{*}\) as \(\Delta\mu_{2}\) is varied is shown in Fig. 5(c). The exact analytical form of the EMP is given by \[\tilde{\eta}^{*}=\left[\frac{\eta_{\rm C}}{1-a_{2}\sqrt{1-\eta_{\rm C}}}+b_{2} \frac{a_{2}^{2}-1}{\left(\frac{1}{\sqrt{1-\eta_{\rm C}}}-a_{2}\right)^{2}} \right]^{-1}, \tag{63}\] where \[b_{2}=b_{\rm even,2}\equiv\frac{2K(\Gamma+K\tau_{2})}{\tau_{2}c} \tag{64}\] for the even-parity engine and \[b_{2}=b_{\rm odd,2}\equiv\frac{2K[\Gamma K+(K^{2}-c)\tau_{2}]}{\Gamma c} \tag{65}\] for the odd-parity engine. Once again, we examine whether \(\tilde{\eta}^{*}\) can be greater than its value in the passive limit \(\eta_{\rm CA}/\eta_{\rm C}\). Noting that \(\tilde{\eta}^{*}=\eta_{\rm CA}/\eta_{\rm C}\) yields a quadratic equation for \(a_{2}\) and that \(a_{2}\geq 1\) and \(b_{2}\geq 2\) (due to the stability condition \(K^{2}>c\)), we can show that \(\tilde{\eta}^{*}\) cannot be greater than \(\eta_{\rm CA}/\eta_{\rm C}\). Figure 5: Performance of the active heat engine when the chemical driving is attached to \(X_{2}\). We use \(c=3.5\), \(\tau_{2}=5\), \(T_{1}=4\) and \(\Delta\mu_{1}=0\). (a) The MP, (b) the apparent EMP, (c) EMP are shown as functions of the chemical driving \(\Delta\mu_{2}\). As for the effects of the parity on the EMP, we obtain \[\frac{1}{\tilde{\eta}_{\rm even}^{*}}-\frac{1}{\tilde{\eta}_{\rm odd} ^{*}} = (\text{Positive constant}) \tag{66}\] \[\times\left[\Gamma^{2}-(K^{2}-c)\tau_{2}^{2}\right],\] which is almost the same as Eq. (55), except for the replacement \(\tau_{1}\to\tau_{2}\). Thus the previous discussions of when the even-parity engine is more efficient than the odd-parity engine are also fully applicable in this case. ## VI Tighter bound on efficiency Thus far, we have defined and examined the efficiency \(\tilde{\eta}\) of active heat engines, whose upper bound set by the second law of thermodynamics is 1. But is there a tighter upper bound on \(\tilde{\eta}\)? Indeed, stochastic thermodynamics has generalized the second law of thermodynamics, identifying different kinds of EP which are guaranteed to be nonnegative. Since we are dealing with an engine that operates in the steady state, the relevant type of EP is the _house-keeping_ EP, which is associated with the maintenance of the nonequilibrium steady state. In the presence of odd-parity variables, the housekeeping EP can be further decomposed into two parts [52]: the part associated with the breaking of detailed balance in the steady state (whose rate is denoted by \(\dot{S}_{\rm bDB}\)) and the part associated with the breaking of the mirror symmetry of the steady-state distribution \(p_{\rm s}({\bf r})=p_{\rm s}(\mathcal{E}{\bf r})\) (whose rate is denoted by \(\dot{S}_{\rm as}\)). Only \(\langle\dot{S}_{\rm bDB}\rangle\geq 0\) is guaranteed, which yields an inequality distinct from that shown in Eq. 
(42): \[\eta_{\rm C}\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem}\rangle = \langle\dot{W}_{\rm out}\rangle+T_{2}\langle\dot{S}_{\rm bDB} \rangle+T_{2}\langle\dot{S}_{\rm as}\rangle \tag{67}\] \[\geq \langle\dot{W}_{\rm out}\rangle+T_{2}\langle\dot{S}_{\rm as}\rangle.\] Using the definition of \(\tilde{\eta}\) in Eq. (43), this inequality implies \[\tilde{\eta}\leq 1-\frac{T_{2}\langle\dot{S}_{\rm as}\rangle}{\eta_{\rm C }\langle\dot{Q}_{1}\rangle+\langle\dot{W}_{\rm chem}\rangle} \tag{68}\] as long as \(\langle\dot{W}_{\rm out}\rangle>0\) (see the discussion below Eq. (42)). This gives a tighter upper bound on \(\tilde{\eta}\) if \(\langle\dot{S}_{\rm as}\rangle>0\). When there are only even-parity variables, the mirror symmetry \(p_{\rm s}({\bf r})=p_{\rm s}(\mathcal{E}{\bf r})\) is trivially satisfied, so \(\langle\dot{S}_{\rm as}\rangle=0\). On the other hand, in the presence of odd-parity variables, the mirror symmetry is in general not guaranteed. Thus, here we focus on whether the above inequality yields a tighter upper bound on the efficiency of an odd-parity active heat engine. For a diffusive system like the models considered here, there is an infinite number of possible definitions of \(\dot{S}_{\rm bDB}\) and \(\dot{S}_{\rm as}\), which can be parametrized by a single real number \(\sigma\)[53]. As detailed in Appendix C, we follow their method to derive \[\langle\dot{S}_{\rm as}\rangle = \sigma(1-\sigma) \tag{69}\] \[\times\left\langle\sum_{i=1}^{2}\left[\frac{T_{i}}{\Gamma}\left( \frac{\partial\phi^{\rm A}}{\partial X_{i}}\right)^{2}+\gamma_{i}^{\prime}\,T _{i}\left(\frac{\partial\phi^{\rm A}}{\partial p_{i}}\right)^{2}\right] \right\rangle,\] where \[\phi^{\rm A}({\bf r})\equiv-\ln\left[p_{\rm s}({\bf r})/p_{\rm s}(\mathcal{E} {\bf r})\right] \tag{70}\] quantifies the extent to which the mirror symmetry is broken. In Eq. (69), the expression inside \(\langle\cdot\rangle\) is always nonnegative, so \(\langle\dot{S}_{\rm as}\rangle\) is maximized when \(\sigma=1/2\). Thus, in this case, Eq. (68) imposes a tighter upper bound on the efficiency \(\tilde{\eta}\) of an odd-parity engine than the second law of thermodynamics does. This is illustrated in Fig. 6 as the nonconservative force coefficient \(\lambda_{1}\) is varied. The new upper bound imposed by Eq. (68) is significantly tighter than the original upper bound \(\tilde{\eta}\leq 1\). Also note that this new upper bound is applicable only to the odd-parity engine as exemplified by the efficiency of the even-parity engine surpassing the bound for large \(\lambda_{1}\). ## VII Summary and outlook In this study, we proposed a thermodynamically consistent, analytically solvable model of the active heat engines based on the fuel-driven Active Ornstein-Uhlenbeck Process with either even-parity or odd-parity self-propulsion. Our model, which stays active only due to the constant chemical driving, reflects how the fuel consumption dynamics should change depending on the self-propulsion parity. It also has a clear energetic interpretation for the entropy production in the form of the Clausius relation, which is lacking in the usual phenomenological models of active heat engines. This energetic picture allows us to Figure 6: Tighter upper bound (black dashed line) for odd-parity efficiency (green solid line). We used \(\Delta\mu_{1}=1.75\), \(\Delta\mu_{2}=0\), \(\tau_{1}=0.3\), \(T_{1}=3\) and \(\lambda_{2}=0.8\). The efficiency of the engine with even-parity self propulsion (green line) is also plotted for comparison. 
define the efficiency of the engine as a ratio \(\tilde{\eta}\) between two measurable energy fluxes, namely the extracted power and the energy flux arising from the thermal and the chemical driving forces. Moreover, the efficiency thus defined has an upper bound imposed by the second law of thermodynamics. Taking this \(\tilde{\eta}\) to be the proper measure of efficiency, we quantified the performance of the engine by examining its efficiency at maximum power \(\tilde{\eta}^{*}\). First, we checked whether the _active_ nature of the engine can make it more efficient than the passive engine in terms of \(\tilde{\eta}^{*}\). Intriguingly, we found that \(\tilde{\eta}^{*}\) may have a nonmonotonic dependence on the strength of the chemical driving, so that the active engine may become more efficient than the passive one when the chemical driving is strong enough. Second, we compared the performances of the even-parity and the odd-parity active engines. It turned out that the size of the engine matters: if the engine is larger (smaller) than persistence length of the particle, the odd-parity (even-parity) self-propulsion is more efficient. These results suggest interesting design principles that should be taken into account when constructing efficient and yet functioning active engines. Finally, we explored the possibility of a tighter upper bound on \(\tilde{\eta}\) than the one imposed by the second law of thermodynamics. Using the detailed structure of the housekeeping entropy production, we derived a tighter upper bound on the efficiency of the odd-parity engines and found an example where the bound is very close to the actual efficiency of the engine. These findings suggest various directions of future investigations. First, one may verify whether the design principles found in our study are indeed at work in more realistic examples of active heat engines, such as those made of Janus particles or swimmers propelled by a screw-like structure. Such systems typically involve hydrodynamic interactions with the liquid-like medium, making dynamic and thermodynamic descriptions much more challenging. Nonetheless, since we could explain the relative performance of the even-parity and the odd-parity engines using an intuitive argument based on a few length (or time) scales of the engine, we believe that the design principles will be generally applicable to more complicated systems. Second, one may apply our model to a system of Active Ornstein-Uhlenbeck Particles and explore how their collective phenomena are related to the rate of energy dissipation in the entire system. Previous studies of such relations are based only upon the measure of apparent irreversibility [54; 55; 27], so while they address the question of whether a large-scale dissipative structure can also be achieved by an equilibrium system at the dynamical level, they do not address the question of how much fuel is required to maintain such structure. Our approach provides a useful framework for investigating the latter question for both even-parity and odd-parity active particles. We also note that a thermodynamically consistent framework for the energy dissipation that maintains a nonequilibrium structure at the field-theoretical level has been proposed in [56]. Third, one may explore the behaviors of our model under time-dependent protocols and quantify the performance of cyclic active heat engines. 
In particular, the question of a tighter bound on the engine efficiency becomes much more relevant in this case, because the _excess_ entropy production, which is trivially zero in the steady state, also becomes a crucial part of the energy dissipation mechanism. Both the even-parity and the odd-parity engines will have a tighter bound on their efficiencies in this case. It would be also interesting to check whether thermodynamic uncertainty relations and speed limits on the entropy production yield interesting tradeoff relations involving the fuel consumption. ###### Acknowledgements. This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2020R1C1C101443613). YB also thanks Michael E. Cates, Patrick Pietzonka, Tomer Markovich, Etienne Fodor, Hyunggyu Park, and Jae Sung Lee for helpful discussions. ## Appendix A General derivation of the Langevin system with guaranteed equilibration Consider a general Langevin system \(\dot{\mathbf{q}}=\mathbf{A}(\mathbf{q})+\xi\) with states \(\mathbf{q}=(q_{1},\cdots,q_{N})\in\mathbb{R}^{N}\) and the noise statistics \(\langle\xi(t)\xi(t^{\prime})^{T}\rangle=2\mathbb{D}(\mathbf{q})\delta(t-t^{ \prime})\). We enforce the equilibration for this continuous stochastic system, toward the Boltzmann distribution \(p_{\mathrm{s}}(\mathbf{q})\propto e^{-F(\mathbf{q})/T}\) with a given free energy \(F(\mathbf{q})\) and temperature \(T\). Instead of directly dealing with the Langevin equations, we start from the corresponding steady-state Fokker-Planck equation (FPE) \[0=\frac{\partial p_{\mathrm{s}}}{\partial t}=-\sum_{i}\frac{ \partial}{\partial r_{i}}\Big{\{}A_{i}p_{\mathrm{s}}-\sum_{j}\frac{\partial}{ \partial r_{j}}\big{[}D_{ij}p_{\mathrm{s}}\big{]}\Big{\}}\] \[=:-\sum_{i}\frac{\partial}{\partial r_{i}}J_{i}^{\mathrm{s}}( \mathbf{q}), \tag{10}\] which is a partial differential equation solved by the steady-state probability distribution function \(p_{\mathrm{s}}(\mathbf{q})\) of the Langevin system [57]. Where \(\mathbb{D}=\{D_{ij}\}\). In order to relate this system's diffusion with the notion of temperature \(T\), we write \(D_{ij}(\mathbf{q})=T\,\Gamma_{ij}(\mathbf{q})\). Since \(\mathbb{D}\) is the covariance matrix of random vector \(\mathbf{q}\), the coefficients \(\Gamma_{ij}\) should be symmetric, _i.e._\(\Gamma_{ij}(\mathbf{q})=\Gamma_{ji}(\mathbf{q})\). Substituting the Boltzmann distribution into Eq. (A) gives \[0=-\sum_{i}\frac{\partial}{\partial r_{i}}\bigg{\{}p_{\mathrm{s}}\Big{[}A_{i}+ \sum_{j}\Big{(}-T\frac{\partial\Gamma_{ij}}{\partial r_{j}}+\Gamma_{ij}\frac{ \partial F}{\partial r_{j}}\Big{)}\Big{]}\bigg{\}}. \tag{11}\] To check the DB condition, let's go back to the general FPE shown in Eq. (16). We define an auxiliary FPE, which has the same steady-state distribution and the opposite probability current \[J_{i}^{\rm aux,s}({\bf q})\equiv-J_{i}^{\rm s}({\bf q})=-A_{i}p_{ \rm s}+\sum_{j}\frac{\partial}{\partial r_{j}}\big{(}D_{ij}p_{\rm s}\big{)}\\ =\big{(}-A_{i}p_{\rm s}+2\sum_{j}\frac{\partial}{\partial r_{j}}(D _{ij}p_{\rm s})\big{)}-\sum_{j}\frac{\partial}{\partial r_{j}}(D_{ij}p_{\rm s}) \tag{17}\] compared to the original one. The manipulation from the first to the second line is to keep the notion of drift and diffusion explicitly appear in the equation. 
Identifying the drift and diffusion with the corresponding counterparts of time-inverted probability current \(\mathcal{E}J_{i}^{\rm s}(\mathcal{E}{\bf q})\) of the original FPE leads to \[\epsilon_{i}\epsilon_{j}D_{ij}(\mathcal{E}{\bf q})=D_{ij}({\bf q}) \tag{18a}\] \[\epsilon_{i}A_{i}(\mathcal{E}{\bf q})p_{\rm s}({\bf q})=-A_{i}({ \bf q})p_{\rm s}({\bf q})+2\sum_{j}\frac{\partial}{\partial r_{j}}(D_{ij}({ \bf q})p_{\rm s}({\bf q})), \tag{18b}\] which are the conditions for DB mentioned at Section III of the main text. Here, the mirror symmetry \(p_{\rm s}(\mathcal{E}{\bf q})=p_{\rm s}({\bf q})\) is assumed for the steady state. Note that the mirror symmetry may not hold in our final model of fuel-driven AOUP, because the model is eventually kept far from equilibrium due to fixed chemical potential gradient \(f^{\prime}(n)=\Delta\mu\). First, one can take the simplest choice of \(A_{i}\): \[A_{i}^{0}({\bf q})=-\sum_{j}\Big{(}-\Gamma_{ij}({\bf q})\frac{ \partial F}{\partial r_{j}}({\bf q})+T\frac{\partial\Gamma_{ij}}{\partial r _{j}}({\bf q})\Big{)} \tag{19}\] to satisfy Eq. (17). It turns out that, requiring Eq. (18a), _i.e._, \(\epsilon_{i}\epsilon_{j}\Gamma_{ij}(\mathcal{E}{\bf q})=\Gamma_{ij}({\bf q})\), automatically makes Eq. (18b) also true. This means that, when we apply the time-inversion operator \(\mathcal{E}\) for both side of the Langevin equation, this drift term (and consequently the irreversible probability current) does not change its sign. It means that response coefficient \(\Gamma_{ij}\) which couples the generalized force \(-\partial F/\partial q_{j}\) to the displacement \(\hat{q}_{j}\) is relevant with the irreversibility. Hence we call it _dissipative_ response. Moreover, those dissipative response coefficients have a strict relation with the elements \(T\,\Gamma_{ij}\) of the diffusion matrix, which tells us about the magnitude of fluctuation. This can be interpreted as an example of the FDT for equilibrating systems. But there can be another choice of satisfying Eq. (17) thanks to the degree of freedom which allows additional terms \(A_{i}^{1}\). When we let \[A_{i}^{1}=\sum_{j}\Big{(}-R_{ij}({\bf q})\frac{\partial F}{ \partial r_{j}}({\bf q})+T\frac{\partial R_{ij}}{\partial r_{j}}({\bf q}) \Big{)}, \tag{20}\] through some calculations with change of dummy indices, it can be shown that this choice plays a role as a valid additional degree of freedom which satisfies Eq. (17). The summed choice \(A_{i}=A_{i}^{0}+A_{i}^{1}\) leads to Eq. (3). Without loss of generality, we assume \(\epsilon_{i}\epsilon_{j}R_{ij}(\mathcal{E}{\bf q})=-R_{ij}({\bf q})\) (If not, one can just move the responsible contribution into the dissipative response instead). This means that \(R_{ij}\) behaves differently with \(\Gamma_{ij}\), and thus related with the reversible part of the probability current. We call this the _reactive_ response. These reactive responses is not relevant with the irreversible equilibration of the given Langevin system. Lastly, requiring the DB conditions to Eq. (3) proves the antisymmetric behavior \(R_{ij}({\bf q})=-R_{ji}({\bf q})\) of these reactive response coefficients. Collecting all those, we get our final Onsager reciprocal relations Eq. (4). When a coupling from the \(j\)-th generalized force to the \(i\)-th variable's displacement is given, we can first determine whether the coupling is dissipative or reactive, based on the parity of each variable. 
It decides the coefficient of the coupling between \(i\)-th generalized force and the \(j\)-th variable's displacement, namely the reciprocal coupling, which make the system equilibrate. These can be regarded as the far-from-equilibrium generalization of Onsager's reciprocal relations [39; 40], which is an important consequence of linear irreversible thermodynamics. ## Appendix B Explicit expressions of the model's covariance matrix and involved energy flow rates ### Steady-state covariant matrix of the model's mechanical part Noting that all first moments (e.g. \(\langle X_{1}\rangle\)) of the random vector \({\bf r}\) are zeros at the steady-state, one can construct the steady-state covariance matrix \(\mathbb{C}:=\langle{\bf rr}^{\rm T}\rangle\) of which elements are all constant. Matrix equations Eq. (15) for the \(4\times 4\) covariance matrix \(\mathbb{C}=\{C_{ij}\}\) are straightforwardly obtained from our active heat engine model Eq. (32). \[\langle{\bf r}\circ\dot{\bf r}^{\rm T}\rangle=-\mathbb{C}\mathbb{ K}^{\rm T}+\mathbb{D} \tag{21a}\] \[\mathbb{K}\mathbb{C}+\mathbb{C}\mathbb{K}^{\rm T}=2\mathbb{D} \tag{21b}\] One can solve Eq. (21b) in a successive manner by focusing on the equation's hierarchical structure. Then, the explicit expressions of energy flow rates are derived by taking Eq. (21a) into account. Since Eq. (32) is a linear Langevin system, its steady-state probability distribution function (PDF) is a multivariate Gaussian distribution of which covariance matrix is precisely the \(\mathbb{C}\) calculated above. This PDF \(p_{\rm s}({\bf r})=\frac{1}{\det(2\pi\mathbb{C})^{1/2}}\exp\big{(}-\frac{1}{2}{ \bf r}^{\rm T}\mathbb{C}^{-1}{\bf r}\big{)}\) solves the steady-state FPE corresponding to the Langevin system. To obtain explicit element-wise expression for each second moment \(C_{ij}\), we aware of the hierarchy of variables influencing each other. First, solutions for self-closed dynamics are obvious: \[C_{33}=\frac{\tau_{1}T_{1}}{\gamma_{1}},\quad C_{34}=0,\quad C_{44}=\frac{\tau_{ 2}T_{2}}{\gamma_{2}}. \tag{10}\] Next, the element-wise expressions of Eq. (10b) are listed in Eq. (11), with those obvious solutions Eq. (10) substituted in and multiplied by \(\Gamma\). \[-KC_{11}+\lambda_{1}C_{12}+\zeta_{1}\Delta\mu_{1}C_{13}+T_{1}=0 \tag{11a}\] \[-2KC_{12}+\lambda_{1}C_{22}+\zeta_{1}\Delta\mu_{1}C_{23}\] \[+\lambda_{2}C_{11}+\zeta_{2}\Delta\mu_{2}C_{14}=0\] (11b) \[\Big{(}\frac{\Gamma}{\tau_{1}}+K\Big{)}C_{13}=\lambda_{1}C_{23}+ \zeta_{1}\Delta\mu_{1}\frac{\tau_{1}T_{1}}{\gamma_{1}}\] (11c) \[\Big{(}\frac{\Gamma}{\tau_{2}}+K\Big{)}C_{14}=\lambda_{1}C_{24}\] (11d) \[\lambda_{2}C_{12}-KC_{22}+\zeta_{2}\Delta\mu_{2}C_{24}+T_{2}=0\] (11e) \[\Big{(}\frac{\Gamma}{\tau_{1}}+K\Big{)}C_{23}=\lambda_{2}C_{13}\] (11f) \[\Big{(}\frac{\Gamma}{\tau_{2}}+K\Big{)}C_{24}=\lambda_{2}C_{14}+\zeta_{2}\Delta \mu_{2}\frac{\tau_{2}T_{2}}{\gamma_{2}} \tag{11g}\] Putting Eqs. (11c, 11d) together leads to Eqs. (11c). Note the guaranteed positivity of the denominator due to the stability condition \(\lambda_{1}\lambda_{2}<K^{2}\), and the proportionality to \(\zeta_{1}\Delta\mu_{1}\). Similarly from Eqs. (11d, 11g), the expressions Eq. (11c) are obtained. 
\[C_{13}=\zeta_{1}\Delta\mu_{1}\frac{\tau_{1}T_{1}}{\gamma_{1}} \frac{(\Gamma/\tau_{1})+K}{\big{(}(\Gamma/\tau_{1})+K\big{)}^{2}-\lambda_{1} \lambda_{2}} \tag{12a}\] \[C_{23}=\zeta_{1}\Delta\mu_{1}\frac{\tau_{1}T_{1}}{\gamma_{1}} \frac{\lambda_{2}}{\big{(}(\Gamma/\tau_{1})+K\big{)}^{2}-\lambda_{1}\lambda_{2}} \tag{12b}\] \[C_{24}=\zeta_{2}\Delta\mu_{2}\frac{\tau_{2}T_{2}}{\gamma_{2}} \frac{(\Gamma/\tau_{2})+K}{\big{(}(\Gamma/\tau_{2})+K\big{)}^{2}-\lambda_{1} \lambda_{2}} \tag{12a}\] \[C_{14}=\zeta_{2}\Delta\mu_{2}\frac{\tau_{2}T_{2}}{\gamma_{2}} \frac{\lambda_{1}}{\big{(}(\Gamma/\tau_{2})+K\big{)}^{2}-\lambda_{1}\lambda_{2}} \tag{12b}\] Next, when \(C_{11}\) is eliminated by combining Eqs. (11a, 11b), an equation of only \(C_{12}\) and \(C_{22}\) arises. Putting this together with Eq. (11e) results in the explicit form of \(C_{12}\), which is symmetric for the indices \(1\) and \(2\): \[C_{12}=\frac{1}{K^{2}-\lambda_{1}\lambda_{2}}\bigg{[}\frac{\lambda_{2}T_{1}+ \lambda_{1}T_{2}}{2}+\frac{\tau_{1}T_{1}}{\gamma_{1}}\frac{(\zeta_{1}\Delta \mu_{1})^{2}\ (K+\frac{\Gamma}{2\tau_{1}})\lambda_{2}}{(\frac{\Gamma}{\tau_{1}}+K)^{2}- \lambda_{1}\lambda_{2}}+\frac{\tau_{2}T_{2}}{\gamma_{2}}\frac{(\zeta_{2}\Delta \mu_{2})^{2}\ (K+\frac{\Gamma}{2\tau_{2}})\lambda_{1}}{(\frac{\Gamma}{\tau_{2}}+K)^{2}- \lambda_{1}\lambda_{2}}\bigg{]}. \tag{13}\] Using this, we finally get the self-correlations of two positional variables \[C_{11}=\frac{1}{K}\big{[}\lambda_{1}C_{12}+(\zeta_{1}\Delta\mu_ {1})C_{13}+T_{1}\big{]} \tag{14a}\] \[C_{22}=\frac{1}{K}\big{[}\lambda_{2}C_{12}+(\zeta_{2}\Delta\mu_ {2})C_{24}+T_{2}\big{]}. \tag{14b}\] If the non-conservative force field is turned off, _i.e._, \(\lambda_{1}=\lambda_{2}=0\), two distinct contributions on \(C_{ii}\) are clearly seen here, each originating from the chemical gradient \(\Delta\mu_{i}\), and the temperature \(T_{i}\), respectively. Then, turning on the force field gives additional contributions to \(C_{ii}\) due to the coupling with the other positional coordinate \(X_{j}\,(j\neq i)\). Using these expressions for the second moments, the average energy flows of our model are discussed in next subsection. ### Explicit form of the average energetic currents In the main text, we have identified the extracted power Eq. (35) as a part of our model's energetics. The ensemble average of this quantity can be written as \(\langle\dot{W}_{\rm out}\rangle=(\lambda_{1}-\lambda_{2})\langle X_{1}\dot{X}_ {2}\rangle\) at steady-state, taking advantage of the steady-state condition \[\frac{d}{dt}\,\langle X_{1}X_{2}\rangle=\langle X_{1}\dot{X}_{2}\rangle+\langle X _{2}\dot{X}_{1}\rangle=0.\] From this, using Eq. (11a), the explicit form of the average extracted power \[\langle\dot{W}_{\rm out}\rangle=\frac{\lambda_{1}-\lambda_{2}}{2K}\bigg{[}\frac{ \lambda_{2}T_{1}-\lambda_{1}T_{2}}{\Gamma}+\frac{T_{1}\tau_{1}^{2}\,\lambda_{1} }{(\Gamma+K\tau_{1})^{2}-\tau_{1}^{2}\lambda_{1}\lambda_{2}}(\zeta_{1}\Delta \mu_{1})^{2}-\frac{T_{2}\tau_{2}^{2}\,\lambda_{2}}{(\Gamma+K\tau_{2})^{2}-\tau_ {2}^{2}\lambda_{1}\lambda_{2}}(\zeta_{2}\Delta\mu_{2})^{2}\bigg{]} \tag{12}\] is obtained. When \(\lambda_{1}>\lambda_{2}\), the chemical driving \(\Delta\mu_{1}\) associated with \(X_{1}\) positively contributes to the power, while \(\Delta\mu_{2}\) deteriorates the power. 
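These closed-form second moments can be cross-checked numerically. The sketch below (ours, not part of the original derivation) assembles the drift and diffusion matrices of the even-parity model in the ordering \((X_{1},X_{2},x_{1},x_{2})\) — reconstructed from the element-wise relations above, since the matrix form of Eq. (32) is not reproduced in this appendix — solves the steady-state Lyapunov equation \(\mathbb{K}\mathbb{C}+\mathbb{C}\mathbb{K}^{\rm T}=2\mathbb{D}\) with SciPy, and compares the result against the closed forms above for \(C_{33}\) and \(C_{13}\); the parameter values are arbitrary apart from the stability condition \(\lambda_{1}\lambda_{2}<K^{2}\).

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary parameters satisfying the stability condition lam1 * lam2 < K**2
K, Gam = 2.0, 1.0
lam1, lam2 = 1.5, 1.0
tau1, tau2 = 5.0, 5.0
T1, T2 = 4.0, 1.0
gam1, gam2 = 1.0, 1.0
zdmu1, zdmu2 = 1.2, 0.7          # shorthand for zeta_i * Delta mu_i

# Drift matrix A of dr/dt = A r + noise (A = -K_matrix), ordering r = (X1, X2, x1, x2)
A = np.array([
    [-K / Gam,   lam1 / Gam, zdmu1 / Gam, 0.0],
    [lam2 / Gam, -K / Gam,   0.0,         zdmu2 / Gam],
    [0.0,        0.0,        -1.0 / tau1, 0.0],
    [0.0,        0.0,        0.0,         -1.0 / tau2],
])
# Diffusion matrix D (half the noise intensity), read off from the Fokker-Planck coefficients
D = np.diag([T1 / Gam, T2 / Gam, T1 / gam1, T2 / gam2])

# Steady-state covariance: A C + C A^T + 2 D = 0, i.e. K_matrix C + C K_matrix^T = 2 D
C = solve_continuous_lyapunov(A, -2.0 * D)

# Compare with the closed forms above: C_33 = tau1 T1 / gam1, and Eq. (12a) for C_13
C13 = zdmu1 * (tau1 * T1 / gam1) * (Gam / tau1 + K) / ((Gam / tau1 + K) ** 2 - lam1 * lam2)
print(np.isclose(C[2, 2], tau1 * T1 / gam1), np.isclose(C[0, 2], C13))   # expected: True True
```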
The reparametrization introduced in the Section V of the main text leads to an alternative expression \[\begin{split}\langle\dot{W}_{\rm out}\rangle=&\,c \Big{(}1-\frac{1}{r}\Big{)}g_{1}(c)-\,c\big{(}r-1\big{)}g_{2}(c)+c\big{(}r-1 \big{)}\Big{(}\frac{1}{r}-\frac{T_{2}}{T_{1}}\Big{)}\frac{T_{1}}{2\Gamma K}\\ &\qquad\qquad=c\bigg{\{}g_{1}(c)+g_{2}(c)+\frac{T_{1}+T_{2}}{2 \Gamma K}-\Big{[}\Big{(}g_{2}(c)+\frac{T_{2}}{2\Gamma K}\Big{)}r+\Big{(}g_{1} (c)+\frac{T_{1}}{2\Gamma K}\Big{)}\frac{1}{r}\Big{]}\bigg{\}}\end{split} \tag{13}\] where \[g_{i}(c)\equiv\frac{\tau_{i}^{2}}{2K}\,\frac{T_{i}}{\gamma_{i}}\,\frac{(\zeta_ {i}\Delta\mu_{i})^{2}}{(\Gamma+K\tau_{i})^{2}-\tau_{i}^{2}c}. \tag{14}\] From here, by applying arithmetic mean-geometric mean inequality for \(r\) at the second line of Eq. (13), one can obtain the value of MP: \[P^{*}\equiv\left.\langle\dot{W}_{\rm out}\rangle\right|_{r=r^{*}}=c\left( \sqrt{g_{1}(c)+\frac{T_{1}}{2\Gamma K}}-\sqrt{g_{2}(c)+\frac{T_{2}}{2\Gamma K }}\right)^{2}. \tag{15}\] The optimal choice \(r=r^{*}\) and the resultant MP value are more neatly introduced in the main text for two special cases: for \(\Delta\mu_{2}=0\) or \(\Delta\mu_{1}=0\). Here we emphasize again that the extracted power does not depend on the parity of the self-propulsion force (except the transformation \(\gamma_{i}\to 1/\gamma_{i}^{\prime}\) and \(\zeta_{i}\to\zeta_{i}^{\prime}\)), since it is relevant with only apparent (mechanical) part of the dynamics. Meanwhile, average chemical work rate attached to \(X_{i}\) is explicitly calculated from Eqs. (20, 27), using (10) and its solutions. For the active heat engine with even-parity self-propulsion, it reads: \[\langle\dot{W}_{\rm chem,i}\rangle=\frac{T_{i}}{\gamma_{i}}\frac{(\zeta_{i} \Delta\mu_{i})^{2}}{(\Gamma+K\tau_{i})^{2}-\tau_{i}^{2}\,c}\left(\Gamma+K\tau _{i}\right)\tau_{i} \tag{16}\] while the corresponding quantity for the odd-parity case is \[\langle\dot{W}_{\rm chem,i}\rangle=\gamma_{i}^{\prime}T_{i}\,\,\frac{(\zeta_{i} ^{\prime}\Delta\mu_{i})^{2}}{(\Gamma+K\tau_{i})^{2}-\tau_{i}^{2}\,c}\,\frac{ \left[\Gamma K+(K^{2}-c)\tau_{i}\right]\tau_{i}^{2}}{\Gamma}. \tag{17}\] These rates does not contain \(r\) even before the optimization. Also, they are always positive in our model at least in the steady-state, provided by the stability condition \(c<K^{2}\). Plugging \(P^{*}\) of the main text and Eqs. (16, 17) to Eq. (43) gives the explicit formulae Eqs. (51, 63) for the EMP. ## Appendix C The detailed structure of entropy production Here we delineate the decomposition of EP introduced in Sec. VI of the main text, following the results of a paper discussing the further decomposition of housekeeping EP under the presence of odd-parity variable, especially for the continuous stochastic dynamics [53]. 
First, in terms of the log-density \(\phi({\bf r})\equiv-\ln p_{\rm s}({\bf r})\) or its time-reversed counterpart \(\phi^{\rm R}({\bf r})\equiv-\ln p_{\rm s}({\bf\mathcal{E}}{\bf r})\), we rewrite the steady-state condition \(\nabla\cdot{\bf J}^{\rm s}({\bf r})=0\) for the probability current \(J_{i}^{\rm s}({\bf r})=(-\sum_{j}K_{ij}\,r_{j})p_{s}({\bf r})-\sum_{j}\partial _{j}\left(D_{ij}({\bf r})\,\,p_{s}({\bf r})\right)\) as follows: \[0=\frac{1}{\tau_{1}}+\left(-\frac{1}{\tau_{1}}x_{1}\right)\frac {\partial\phi}{\partial x_{1}}+\frac{T_{1}}{\gamma_{1}}\left[-\frac{\partial^ {2}\phi}{\partial x_{1}^{2}}+\left(\frac{\partial\phi}{\partial x_{1}}\right) ^{2}\right]+\frac{1}{\tau_{2}}+\left(-\frac{1}{\tau_{2}}x_{2}\right)\frac{ \partial\phi}{\partial x_{2}}+\frac{T_{2}}{\gamma_{2}}\left[-\frac{\partial^ {2}\phi}{\partial x_{2}^{2}}+\left(\frac{\partial\phi}{\partial x_{2}}\right) ^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1}} {\Gamma}X_{2}+\frac{\zeta_{1}\Delta\mu_{1}}{\Gamma}x_{1}\right)\frac{\partial \phi}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\left[-\frac{\partial^{2}\phi}{ \partial X_{1}^{2}}+\left(\frac{\partial\phi}{\partial X_{1}}\right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2}} {\Gamma}X_{1}+\frac{\zeta_{2}\Delta\mu_{2}}{\Gamma}x_{2}\right)\frac{\partial \phi}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\left[-\frac{\partial^{2}\phi}{ \partial X_{2}^{2}}+\left(\frac{\partial\phi}{\partial X_{2}}\right)^{2}\right], \tag{18}\] \[0=\frac{1}{\tau_{1}}+\left(-\frac{1}{\tau_{1}}x_{1}\right)\frac{ \partial\phi^{\rm R}}{\partial x_{1}}+\frac{T_{1}}{\gamma_{1}} \left[-\frac{\partial^{2}\phi^{\rm R}}{\partial x_{1}^{2}}+\left( \frac{\partial\phi^{\rm R}}{\partial x_{1}}\right)^{2}\right]+\frac{1}{\tau_{2} }+\left(-\frac{1}{\tau_{2}}x_{2}\right)\frac{\partial\phi^{\rm R}}{\partial x _{2}}+\frac{T_{2}}{\gamma_{2}}\left[-\frac{\partial^{2}\phi^{\rm R}}{\partial x _{2}^{2}}+\left(\frac{\partial\phi^{\rm R}}{\partial x_{2}}\right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1} }{\Gamma}X_{2}+\frac{\zeta_{1}\Delta\mu_{1}}{\Gamma}x_{1}\right)\frac{ \partial\phi^{\rm R}}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm R}}{\partial X_{1}^{2}}+\left(\frac{\partial\phi^{\rm R }}{\partial X_{1}}\right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2} }{\Gamma}X_{1}+\frac{\zeta_{2}\Delta\mu_{2}}{\Gamma}x_{2}\right)\frac{ \partial\phi^{\rm R}}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm R}}{\partial X_{2}^{2}}+\left(\frac{\partial\phi^{\rm R }}{\partial X_{2}}\right)^{2}\right], \tag{100}\] for the case with even-parity self-propulsion, and \[0=\frac{1}{\tau_{1}}+\left(-\frac{1}{\tau_{1}}p_{1}\right) \frac{\partial\phi}{\partial p_{1}}+\gamma_{1}^{\prime}T_{1} \left[-\frac{\partial^{2}\phi^{\rm 2}}{\partial p_{1}^{2}}+\left(\frac{ \partial\phi}{\partial p_{1}}\right)^{2}\right]+\frac{1}{\tau_{2}}+\left(- \frac{1}{\tau_{2}}p_{2}\right)\frac{\partial\phi}{\partial p_{2}}+\gamma_{2}^ {\prime}T_{2}\left[-\frac{\partial^{2}\phi}{\partial p_{2}^{2}}+\left(\frac{ \partial\phi}{\partial p_{2}}\right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1} }{\Gamma}X_{2}+\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\right) \frac{\partial\phi}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\left[-\frac{ \partial^{2}\phi}{\partial X_{1}^{2}}+\left(\frac{\partial\phi}{\partial X_{1 }}\right)^{2}\right]\] 
\[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2} }{\Gamma}X_{1}+\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}\right) \frac{\partial\phi}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\left[-\frac{\partial ^{2}\phi}{\partial X_{2}^{2}}+\left(\frac{\partial\phi}{\partial X_{2}}\right)^ {2}\right], \tag{101}\] \[0=\frac{1}{\tau_{1}}+\left(-\frac{1}{\tau_{1}}p_{1}\right) \frac{\partial\phi^{\rm R}}{\partial p_{1}}+\gamma_{1}^{\prime}T_{1} \left[-\frac{\partial^{2}\phi^{\rm 2}}{\partial p_{1}^{2}}+\left(\frac{ \partial\phi^{\rm R}}{\partial p_{1}}\right)^{2}\right]+\frac{1}{\tau_{2}}+ \left(-\frac{1}{\tau_{2}}p_{2}\right)\frac{\partial\phi^{\rm R}}{\partial p_ {2}}+\gamma_{2}^{\prime}T_{2}\left[-\frac{\partial^{2}\phi^{\rm R}}{\partial p _{2}^{2}}+\left(\frac{\partial\phi^{\rm R}}{\partial p_{2}}\right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1} }{\Gamma}X_{2}-\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\right) \frac{\partial\phi^{\rm R}}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm 2}}{\partial X_{1}^{2}}+\left(\frac{\partial\phi^{\rm R}}{\partial X_{1}} \right)^{2}\right]\] \[+\frac{K}{\Gamma}+\left(-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2} }{\Gamma}X_{1}-\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}\right) \frac{\partial\phi^{\rm R}}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm 2}}{\partial X_{2}^{2}}+\left(\frac{\partial\phi^{\rm R}}{\partial X_{2}} \right)^{2}\right] \tag{102}\] for the case with odd-parity self-propulsion. We note the sign difference of the self-propulsion force terms in Eqs. (C.1) and (C.1), originated from their different behavior under time-reversal transformation. Then we introduce a measure of asymmetry of the steady-state distribution \(\phi^{\rm A}({\bf r})\equiv\phi({\bf r})-\phi({\cal E}{\bf r})\). Furthermore, a mixture distribution function \[\psi_{\sigma}({\bf r})\equiv\sigma\phi({\bf r})+(1-\sigma)\phi({\cal E}{\bf r })=\phi({\bf r})-(1-\sigma)\phi^{\rm A}({\bf r})=\phi({\cal E}{\bf r})+\sigma \phi^{\rm A}({\bf r}) \tag{103}\] with a real-valued parameter \(\sigma\), is suggested in the context of designing a generalized adjoint process [53]. When the transition rate of the original process \[\omega\left[{\bf r}^{\prime},{\bf r}\right]\equiv\bigg{[}-\frac{\partial}{ \partial r_{i}^{\prime}}\Big{(}-\sum_{j}K_{ij}r_{j}\Big{)}+\frac{\partial^{2}}{ \partial r_{i}^{\prime}\partial r_{j}^{\prime}}\Big{(}\sum_{j}D_{ij}\Big{)} \bigg{]}\delta({\bf r}^{\prime}-{\bf r}) \tag{104}\] which gives the path probability \({\cal P}[{\bf r}^{\prime},t+dt|{\bf r},t]=\delta({\bf r}^{\prime}-{\bf r})+dt \,\omega({\bf r}^{\prime},{\bf r})\) is equated with that of the generalized adjoint process, it appropriately gives the DB condition without the EP or its subcomponents diverging. ### Even-parity case For the engine with even-parity self-propulsion, the mirror symmetry trivially holds since there is no odd-parity variable. 
therefore,\(\psi_{\sigma}({\bf r})=\phi({\bf r})\), resulting the following formula (See equation (71) of [53]): \[\dot{S}_{\rm bDB}=-\frac{1}{T_{1}}\dot{Q}_{1}-\frac{1}{T_{2}} \dot{Q}_{2}+\bigg{\{}\frac{\partial\phi}{\partial X_{1}}\circ\dot{X}_{1}+\frac{ \partial\phi}{\partial X_{2}}\circ\dot{X}_{2}+\frac{\partial\phi}{\partial x_{1}} \circ\dot{x}_{1}+\frac{\partial\phi}{\partial x_{2}}\circ\dot{x}_{1}\bigg{\}}\\ +\bigg{\{}\Big{[}\Big{(}-\frac{1}{\tau_{1}}x_{1}\Big{)}\frac{ \partial\phi}{\partial x_{1}}+\frac{T_{1}}{\gamma_{1}}\Big{(}\frac{\partial\phi}{ \partial x_{1}}\Big{)}^{2}+\frac{1}{\tau_{1}}-\frac{T_{1}}{\gamma_{1}}\frac{ \partial^{2}\phi}{\partial x_{1}^{2}}\Big{]}+\Big{[}\Big{(}-\frac{1}{\tau_{2}}x_ {2}\Big{)}\frac{\partial\phi}{\partial x_{2}}+\frac{T_{2}}{\gamma_{2}}\Big{(} \frac{\partial\phi}{\partial x_{2}}\Big{)}^{2}+\frac{1}{\tau_{2}}-\frac{T_{2}}{ \gamma_{2}}\frac{\partial^{2}\phi}{\partial x_{2}^{2}}\Big{]}\\ +\Big{[}\Big{(}-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1}}{\Gamma}X_ {2}+\frac{\zeta_{1}\Delta\mu_{1}}{\Gamma}x_{1}\Big{)}\frac{\partial\phi}{ \partial X_{1}}+\frac{T_{1}}{\Gamma}\Big{(}\frac{\partial\phi}{\partial X_{1 }}\Big{)}^{2}+\frac{K}{\Gamma}-\frac{T_{1}}{\Gamma}\frac{\partial^{2}\phi}{ \partial X_{1}^{2}}\Big{]}\\ +\Big{[}\Big{(}-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2}}{\Gamma}X_ {1} Comparing this with Eq. (106), we see that the expression inside the second curly bracket is zero. In addition, \(\phi\) is a function only of \(\mathbf{r}\), so that identify the first curly bracket as the total derivative of \(\phi\). Therefore we obtain \[\dot{S}_{\rm bDB}=-\frac{1}{T_{1}}\dot{Q}_{1}-\frac{1}{T_{2}}\dot{Q}_{2}+\dot{ \phi}=\dot{S}_{\rm env}+\dot{\phi}=\dot{S}_{\rm tot}. \tag{107}\] The last equality comes from the fact that the system's stochastic entropy is \(S_{\rm sys}=-\ln\rho_{s}=\phi\) in this case because we assume the system is in steady state. Meanwhile, the EP rate related with the mirror symmetry breakage is trivially given by \(\dot{S}_{\rm as}=0\) for the even-parity case since \(\phi^{\rm A}(\mathbf{r})=0\) (See equation (78) of [53]). This result is also in agreement with Eq. (107), which means that there is no additional component on the total EP except the bDB contribution. ### Odd-parity case Unlike above, the housekeeping EP is nontrivially further decomposed in the odd-parity case. 
Again referring to the equation (71) of [53], the bDB contribution of EP reads: \[\dot{S}_{\rm bDB}=-\frac{1}{T_{1}}\dot{Q}_{1}-\frac{1}{T_{2}}\dot {Q}_{2}+\bigg{\{}\frac{\partial\psi_{\sigma}}{\partial X_{1}}\circ\dot{X}_{1 }+\frac{\partial\psi_{\sigma}}{\partial X_{2}}\circ\dot{X}_{2}+\frac{\partial \psi_{\sigma}}{\partial p_{1}}\circ\dot{p}_{1}+\frac{\partial\psi_{\sigma}}{ \partial p_{2}}\circ\dot{p}_{2}\bigg{\}}\\ +\bigg{\{}\Big{[}\Big{(}-\frac{p_{1}}{\tau_{1}}\Big{)}\frac{ \partial\psi_{\sigma}}{\partial p_{1}}+\gamma_{1}^{\prime}T_{1}\Big{(}\frac{ \partial\psi_{\sigma}}{\partial p_{1}}\Big{)}^{2}+\frac{1}{\tau_{1}}-\gamma_ {1}^{\prime}T_{1}\frac{\partial^{2}\psi_{\sigma}}{\partial p_{1}^{2}}\Big{]} +\Big{[}\Big{(}-\frac{p_{2}}{\tau_{2}}\Big{)}\frac{\partial\psi_{\sigma}}{ \partial p_{2}}+\gamma_{2}^{\prime}T_{2}\Big{(}\frac{\partial\psi_{\sigma}}{ \partial p_{2}}\Big{)}^{2}+\frac{1}{\tau_{2}}-\gamma_{2}^{\prime}T_{2}\frac{ \partial^{2}\psi_{\sigma}}{\partial p_{2}^{2}}\Big{]}\\ +\Big{[}\Big{(}-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1}}{\Gamma} X_{2}-\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\Big{)}\frac{\partial\psi_{ \sigma}}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\Big{(}\frac{\partial\psi_{ \sigma}}{\partial X_{1}}\Big{)}^{2}+\frac{K}{\Gamma}-\frac{T_{1}}{\Gamma} \frac{\partial^{2}\psi_{\sigma}}{\partial X_{1}^{2}}\Big{]}\\ +\Big{[}\Big{(}-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2}}{\Gamma} X_{1}-\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}\Big{)}\frac{\partial\psi_{ \sigma}}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\Big{(}\frac{\partial\psi_{ \sigma}}{\partial X_{2}}\Big{)}^{2}+\frac{K}{\Gamma}-\frac{T_{2}}{\Gamma} \frac{\partial^{2}\psi_{\sigma}}{\partial X_{2}^{2}}\Big{]}\bigg{\}}. \tag{108}\] Note that the signs of \(\zeta_{i}^{\prime}\Delta\mu_{i}\,p_{i}\) behave different with the corresponding terms of even-parity case. The second curly bracket of Eq. (108) is similar with the rhs of Eq. (101), except \(\phi^{\rm R}\) is replaced with \(\psi_{\sigma}\). Therefore, this expression can simplified using the relation \(\psi_{\sigma}(\mathbf{r})=\phi^{\rm R}(\mathbf{r})+\sigma\phi^{\rm A}(\mathbf{ r})\). Especially, expanding the nonlinear terms (the terms with the square of first derivatives) gives \[\left(\frac{\partial\psi^{\sigma}}{\partial r_{i}}\right)^{2}=\left(\frac{ \partial\phi^{\rm R}}{\partial r_{i}}\right)^{2}+2\sigma\left(\frac{\partial \phi^{\rm R}}{\partial r_{i}}\right)\left(\frac{\partial\phi^{\rm A}}{ \partial r_{i}}\right)+\sigma^{2}\left(\frac{\partial\phi^{\rm A}}{\partial r _{i}}\right)^{2}. 
\tag{109}\] Collecting all those leads to \[\dot{S}_{\rm bDB}=\left(-\frac{1}{T_{1}}\dot{Q}_{1}-\frac{1}{T_{2 }}\dot{Q}_{2}\right)+\dot{\psi}^{\sigma}+\sigma^{2}A+\sigma B = \dot{S}_{\rm env}+\dot{\phi}-(1-\sigma)\dot{\phi}^{\rm A}+\sigma^{2 }A+\sigma B \tag{110}\] \[= \dot{S}_{\rm tot}-(1-\sigma)\dot{\phi}^{\rm A}+\sigma^{2}A+\sigma B\] where \[A=\frac{T_{1}}{\Gamma}\big{(}\frac{\partial\phi^{\rm A}}{\partial X_{1}}\big{)} ^{2}+\frac{T_{2}}{\Gamma}\big{(}\frac{\partial\phi^{\rm A}}{\partial X_{2}} \big{)}^{2}+\gamma_{1}^{\prime}T_{1}\big{(}\frac{\partial\phi^{\rm A}}{ \partial p_{1}}\big{)}^{2}+\gamma_{2}^{\prime}T_{2}\big{(}\frac{\partial\phi^ {\rm A}}{\partial p_{2}}\big{)}^{2}, \tag{111}\] \[B=\Big{[}-\frac{1}{\tau_{1}}p_{1}+2\gamma_{1}^{\prime}T_{1}\big{(} \frac{\partial\phi^{\rm R}}{\partial p_{1}}\big{)}\Big{]}\frac{\partial\phi^{ \rm A}}{\partial p_{1}}-\gamma_{1}^{\prime}T_{1}\frac{\partial^{2}\phi^{\rm A}} {\partial p_{1}^{2}}+\Big{[}-\frac{1}{\tau_{2}}p_{2}+2\gamma_{2}^{\prime}T_{2} \big{(}\frac{\partial\phi^{\rm R}}{\partial p_{2}}\big{)}\Big{]}\frac{\partial \phi^{\rm A}}{\partial p_{2}}-\gamma_{2}^{\prime}T_{2}\frac{\partial^{2}\phi^{ \rm A}}{\partial p_{2}^{2}}\] \[+\Big{[}-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1}}{\Gamma}X_{2}- \frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}+\frac{2T_{1}}{\Gamma}\big{(} \frac{\partial\phi^{\rm R}}{\partial X_{1}}\big{)}\Big{]}\frac{\partial\phi^{ \rm A}}{\partial X_{1}}-\frac{T_{1}}{\Gamma}\frac{\partial^{2}\phi^{\rm A}}{ \partial X_{1}^{2}}\] \[+\Big{[}-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2}}{\Gamma}X_{1}- \frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}+\frac{2T_{2}}{\Gamma}\big{(} \frac{\partial\phi^{\rm R}}{\partial X_{2}}\big{)}\Big{]}\frac{\partial\phi^{ \rm A}}{\partial X_{2}}-\frac{T_{2}}{\Gamma}\frac{\partial^{2}\phi^{\rm A}}{ \partial X_{2}^{2}}.\] Using the steady-state conditions written above, this can be further simplified. Subtracting the rhs of Eq. (101) from Eq. 
(C3), we get \[0=\left(-\frac{p_{1}}{\tau_{1}}\right)\frac{\partial\phi^{\rm A}}{ \partial p_{1}} +\gamma_{1}^{\prime}T_{1}\left[-\frac{\partial^{2}\phi^{\rm A}}{ \partial p_{1}^{2}}+\left(\frac{\partial\phi}{\partial p_{1}}\right)^{2}-\left( \frac{\partial\phi^{\rm R}}{\partial p_{1}}\right)^{2}\right]+\left(-\frac{p_{2 }}{\tau_{2}}\right)\frac{\partial\phi^{\rm A}}{\partial p_{2}}+\gamma_{2}^{ \prime}T_{2}\left[-\frac{\partial^{2}\phi^{\rm A}}{\partial p_{2}^{2}}+\left( \frac{\partial\phi}{\partial p_{2}}\right)^{2}-\left(\frac{\partial\phi^{\rm R }}{\partial p_{2}}\right)^{2}\right]\] \[+\left(-\frac{K}{\Gamma}X_{1}+\frac{\lambda_{1}}{\Gamma}X_{2}- \frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\right)\frac{\partial\phi ^{\rm A}}{\partial X_{1}}+2\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1 }\frac{\partial\phi}{\partial X_{1}}+\frac{T_{1}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm A}}{\partial X_{1}^{2}}+\left(\frac{\partial\phi}{ \partial X_{1}}\right)^{2}-\left(\frac{\partial\phi^{\rm R}}{\partial X_{1}} \right)^{2}\right]\] \[+\left(-\frac{K}{\Gamma}X_{2}+\frac{\lambda_{2}}{\Gamma}X_{1}- \frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}\right)\frac{\partial\phi ^{\rm A}}{\partial X_{2}}+2\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2 }\frac{\partial\phi}{\partial X_{2}}+\frac{T_{2}}{\Gamma}\left[-\frac{ \partial^{2}\phi^{\rm A}}{\partial X_{2}^{2}}+\left(\frac{\partial\phi}{ \partial X_{2}}\right)^{2}-\left(\frac{\partial\phi^{\rm R}}{\partial X_{2}} \right)^{2}\right]\] \[=A+B+2\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\frac{ \partial\phi}{\partial X_{1}}+2\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma} p_{2}\frac{\partial\phi}{\partial X_{2}}, \tag{101}\] where the equality \[\left(\frac{\partial\phi}{\partial r_{i}}\right)^{2}-\left(\frac{\partial\phi ^{\rm R}}{\partial r_{i}}\right)^{2}=2\left(\frac{\partial\phi^{\rm A}}{ \partial r_{i}}\right)\left(\frac{\partial\phi^{\rm R}}{\partial r_{i}}\right) +\left(\frac{\partial\phi^{\rm A}}{\partial r_{i}}\right)^{2} \tag{102}\] is used to obtain the last equality of Eq. (C1). Substituting this into Eq. (C1), the final expressions for the subcomponents of EP are obtained as follows: \[\dot{S}_{\rm bDB}=\dot{S}_{\rm tot}-(1-\sigma)\dot{\phi}^{\rm A}-\sigma(1- \sigma)A-2\sigma\left(\frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1} \frac{\partial\phi}{\partial X_{1}}+\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{ \Gamma}p_{2}\frac{\partial\phi}{\partial X_{2}}\right) \tag{103}\] \[\dot{S}_{\rm as}=(1-\sigma)\dot{\phi}^{\rm A}+\sigma(1-\sigma)A+2\sigma\left( \frac{\zeta_{1}^{\prime}\Delta\mu_{1}}{\Gamma}p_{1}\frac{\partial\phi}{ \partial X_{1}}+\frac{\zeta_{2}^{\prime}\Delta\mu_{2}}{\Gamma}p_{2}\frac{ \partial\phi}{\partial X_{2}}\right). \tag{104}\] Finally, averaging Eq. (C1) gives Eq. (69) of the main text.
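Because the steady state of the linear model is Gaussian (Appendix B.1), the average in Eq. (69) can be evaluated directly once the covariance matrix \(\mathbb{C}\) is known: with \(\phi({\bf r})=\frac{1}{2}{\bf r}^{\rm T}\mathbb{C}^{-1}{\bf r}+{\rm const}\), one has \(\phi^{\rm A}({\bf r})=\frac{1}{2}{\bf r}^{\rm T}\big(\mathbb{C}^{-1}-\mathcal{E}\mathbb{C}^{-1}\mathcal{E}\big){\bf r}\), and hence \(\big\langle(\partial\phi^{\rm A}/\partial r_{i})^{2}\big\rangle=(M\mathbb{C}M)_{ii}\) with \(M=\mathbb{C}^{-1}-\mathcal{E}\mathbb{C}^{-1}\mathcal{E}\). The following sketch (ours, with the odd-parity drift and diffusion matrices reconstructed from the Fokker-Planck equation written above and arbitrary parameter values) evaluates \(\langle\dot{S}_{\rm as}\rangle\) at \(\sigma=1/2\) and confirms that it is nonnegative, vanishing when \(\Delta\mu_{1}=\Delta\mu_{2}=0\).

```
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Arbitrary parameters (stability: lam1 * lam2 < K**2); primes denote odd-parity coefficients
K, Gam = 2.0, 1.0
lam1, lam2 = 1.5, 0.8
tau1, tau2 = 0.3, 5.0
T1, T2 = 3.0, 1.0
gp1, gp2 = 1.0, 1.0              # gamma'_i
zdmu1, zdmu2 = 1.75, 0.0         # shorthand for zeta'_i * Delta mu_i

# Odd-parity model, ordering r = (X1, X2, p1, p2): linear drift A and diffusion D
A = np.array([
    [-K / Gam,   lam1 / Gam, zdmu1 / Gam, 0.0],
    [lam2 / Gam, -K / Gam,   0.0,         zdmu2 / Gam],
    [0.0,        0.0,        -1.0 / tau1, 0.0],
    [0.0,        0.0,        0.0,         -1.0 / tau2],
])
D = np.diag([T1 / Gam, T2 / Gam, gp1 * T1, gp2 * T2])

# Gaussian steady state: covariance from A C + C A^T + 2 D = 0
C = solve_continuous_lyapunov(A, -2.0 * D)

# Mirror operation E flips the momenta; M gives the gradient of phi^A for a Gaussian state
E = np.diag([1.0, 1.0, -1.0, -1.0])
Cinv = np.linalg.inv(C)
M = Cinv - E @ Cinv @ E

# Eq. (69) at sigma = 1/2, with weights (T1/Gam, T2/Gam, gp1*T1, gp2*T2)
w = np.array([T1 / Gam, T2 / Gam, gp1 * T1, gp2 * T2])
sigma = 0.5
S_as = sigma * (1.0 - sigma) * float(np.sum(w * np.diag(M @ C @ M)))
print(S_as >= 0.0, S_as)
```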
2305.06564
Undercover Deepfakes: Detecting Fake Segments in Videos
The recent renaissance in generative models, driven primarily by the advent of diffusion models and iterative improvement in GAN methods, has enabled many creative applications. However, each advancement is also accompanied by a rise in the potential for misuse. In the arena of the deepfake generation, this is a key societal issue. In particular, the ability to modify segments of videos using such generative techniques creates a new paradigm of deepfakes which are mostly real videos altered slightly to distort the truth. This paradigm has been under-explored by the current deepfake detection methods in the academic literature. In this paper, we present a deepfake detection method that can address this issue by performing deepfake prediction at the frame and video levels. To facilitate testing our method, we prepared a new benchmark dataset where videos have both real and fake frame sequences with very subtle transitions. We provide a benchmark on the proposed dataset with our detection method which utilizes the Vision Transformer based on Scaling and Shifting to learn spatial features, and a Timeseries Transformer to learn temporal features of the videos to help facilitate the interpretation of possible deepfakes. Extensive experiments on a variety of deepfake generation methods show excellent results by the proposed method on temporal segmentation and classical video-level predictions as well. In particular, the paradigm we address will form a powerful tool for the moderation of deepfakes, where human oversight can be better targeted to the parts of videos suspected of being deepfakes. All experiments can be reproduced at: github.com/rgb91/temporal-deepfake-segmentation.
Sanjay Saha, Rashindrie Perera, Sachith Seneviratne, Tamasha Malepathirana, Sanka Rasnayaka, Deshani Geethika, Terence Sim, Saman Halgamuge
2023-05-11T04:43:10Z
http://arxiv.org/abs/2305.06564v4
# Undercover Deepfakes: Detecting Fake Segments in Videos ###### Abstract The recent renaissance in generative models, driven primarily by the advent of diffusion models and iterative improvement in GAN methods, has enabled many creative applications. However, each advancement is also accompanied by a rise in the potential for misuse. In the arena of deepfake generation this is a key societal issue. In particular, the ability to modify segments of videos using such generative techniques creates a new paradigm of deepfakes which are mostly real videos altered slightly to distort the truth. Current deepfake detection methods in the academic literature are not evaluated on this paradigm. In this paper, we present a deepfake detection method able to address this issue by performing both frame and video level deepfake prediction. To facilitate testing our method we create a new benchmark dataset where videos have both real and fake frame sequences. Our method utilizes the Vision Transformer, Scaling and Shifting[30] pretraining and Timeseries Transformer to temporally segment videos to help facilitate the interpretation of possible deepfakes. Extensive experiments on a variety of deepfake generation methods show excellent results on temporal segmentation and classical video level predictions as well. In particular, the paradigm we introduce will form a powerful tool for the moderation of deepfakes, where human oversight can be better targeted to the parts of videos suspected of being deepfakes. All experiments can be reproduced at: [https://github.com/sanjayasaha1311/temporal-deepfake-segmentation](https://github.com/sanjayasaha1311/temporal-deepfake-segmentation). ## I Introduction Deep learning has made significant advances over the last few years, with varying degrees of societal impact. The advent of diffusion models as viable alternatives to hitherto established generative models has revolutionized the domains of textual/language learning, visual generation, and cross-modal transformations [43]. This, alongside other recent advancements in generative AI such as GPT4 [4] has also brought to attention the social repercussions of advanced AI systems capable of realistic content generation. There exist many methods to tackle the deepfake detection problem formulated as a binary classification problem[1, 3, 8]. A common pitfall of these methods is the inability to generalize to unseen deepfake creation methods. However, there is a more pressing drawback in these studies that we aim to highlight and address in this paper. Considering the social impact of a deepfake video, we hypothesize that rather than fabricating an entire fake video, a person with malicious intent would alter smaller portions of a video to misrepresent a person's views, ideology and public image. For example, an attacker can generate a few fake frames to replace some real frames in a political speech, thus distorting their political views which can lead to considerable controversy and infamy. The task of identifying deepfake alterations within a longer video, known as deepfake video temporal segmentation, is Fig. 1: (a) We propose a new deepfake benchmark dataset consisting of videos with one or two manipulated segments, represented in the two images respectively. Fake frames are indicated by smaller red boxes and genuine video frames are denoted by green borders. 
(b) Our proposed deepfake detection method employs temporal segmentation to classify video frames as real or fake and identify the intervals containing the manipulated content. This is a departure from the conventional binary classification of videos as either entirely genuine or entirely manipulated. currently not well explored or understood. These types of deepfakes pose a more difficult challenge for automated deepfake analysis compared to other types of deepfakes. Moreover, they also pose a greater threat to society since the majority of the video may be legitimate, making it appear more realistic and convincing. Additionally, these deepfakes require significant manual oversight, especially in the moderation of online content platforms, as identifying the legitimacy of the entire video requires manual interpretation. However, performing frame-level detection allows the human moderator to save time by focusing only on the fake segments. Figure 1(a) demonstrates deepfake videos where the entire video is not fake, but some of the real frames were replaced by fake frames. We present a benchmark dataset with videos similar to those in Figure 1(a) to test our method on the temporal deepfake segmentation problem. In the temporal deepfake segmentation problem the detector makes frame level predictions and calculates the start and end of the fake sequences. This differs from classical deepfake detection where the detector makes a video level prediction as demonstrated in Figure 1(b). **Problem Definition** Deepfake Temporal Segmentation Task is defined as follows, _Given an input video identify temporal segments within the video that are computer generated i.e. fakes_. The output of this task is a labeling of'real' or 'fake' for each frame, which we call a temporal segmentation map. We can frame the classical deepfake detection problem as a special case of the temporal segmentation task, in which ALL frames are labeled either as'real' or 'fake'. With an emphasis on the novel deepfake temporal segmentation task, this paper makes the following contributions, * We identified the new threat of faking small parts of a longer video to pass it off as real. Current detection methods cannot handle this threat, since they assume the entire video is real or fake. This can be addressed through our proposed temporal segmentation of videos. This novel problem provides a new direction for future research. * We curated a new dataset specifically for deepfake temporal segmentation, which will be publicly available for researchers to evaluate their methods. Our rigorous experiments establish benchmark results for temporal segmentation of deepfakes, providing a baseline for future work. ## II Related Work ### _Face Image Synthesis_ Manipulation of face images has always been a popular research topic in the media forensics and biometrics domain. Synthesized digital faces can be used to deceive humans as well as machines and software. Prior to deepfakes, digitally manipulated faces[45, 46, 22] were utilized mainly to fool biometric verification and identification methods e.g., face recognition systems. Consequently, deepfake methods[19, 11, 56] started to generate very realistic fake videos of faces and became much more popular as a result. This led to a series of research works on developing a number of deepfake generation methods, categorized into mainly two types: Face swapping[12, 25, 38] and Face reenactment[50, 48]. 
Deepfake generation methods have since improved by a significant margin through the advancement of existing methods and better software integration as in Deepfacelab[38]. This has helped creators of deepfakes to create longer videos, including seamlessly blending fake frames with real frames which allows to have both real and fake video segments within a same deepfake video. Through more recent developments in generative AI[43, 51, 37] we are at the brink of experiencing even higher quality and more subtle deepfakes which raises the need for updated research in this area. ### _Deepfake Detection_ Initial works on deepfake detection methods focused on detecting artifacts in the deepfaked face images such as irregular eye colors, asymmetrically blinking eyes, abnormal heart beats, irregular lip, mouth and head movements[28, 47, 34, 7]. Some other earlier works tried to find higher level variability in the videos: erroneous blending after face swaps, or identity-aware detection approach[54, 15, 26, 9]. Compared to these earlier works more recent studies[36, 41, 2] that are independent of artifact based detection have achieved astounding results in detecting fake videos from most of the state-of-the-art datasets. Recently, more works[53, 55, 20, 55, 23, 18, 17] have given more attention towards generalization of the detectors. Although deepfake detection methods have seen significant progress in recent years, most of the previous works have assumed that an entire video is fake even when only a short segment is altered while the rest of the frames are real. In contrast, in this paper, we introduce the concept of not only detecting deepfake videos but also segmenting the fake frames within them. The proposed approach can accurately identify one or more fake segments in a deepfake video which can mitigate the risks associated with deepfakes that incorporate only a few fake segments. ## III Methodology We propose a two-stage method, where, in the first stage we use a Vision Transformer (ViT) with Scaling and Shifting (SSF). With the learned frame-level features from the first stage, we train a Timeseries Transformer (TsT) which learns the temporal features and helps to segment deepfake videos temporally. The preprocessing is simple, it includes extraction of the frames and cropping of the face region. The ViT from the first stage of our method learns the spatial features through fine-tuning with the help of SSF. The ViT learns frame-level features which comprise a single vector for each frame. These features are the inputs to the TsT classifier (the second stage of our method). The TsT is an adaptation of the original transformer encoder from [52], it helps in learning temporal features from sequential data such as the learned ViT features and uses the temporal features for classification. The feature vectors from stage 1 are sequentially accumulated and windowed through sliding window technique. It is important to use sequentially windowed feature vectors as input to the TsT since we need the temporal features to be learned for better temporal segmentation. ### _Model architecture_ #### Iv-A1 Vision Transformer (ViT) and Scaling and Shifting (SSF) Vision transformers (ViT) have achieved state-of-the-art results on several image classification benchmarks, demonstrating their effectiveness as an alternative to convolutional neural networks (CNNs). 
The ViT model first partitions the input image \(I\in\Re^{H\times W\times C}\) into a set of smaller patches of size \(N\times N\), where \(H\), \(W\), and \(C\) correspond to the height, width, and number of channels of the image, respectively. Each patch is then represented by a \(d\)-dimensional feature vector, which is obtained by flattening the patch into a vector of size \(N^{2}C\) and applying a linear projection to reduce its dimensionality. Next, to allow the model to learn the spatial relationships between the patches, positional encodings are added to the patch embeddings. The resulting patch embeddings are concatenated to form a sequence, and a learnable class embedding that represents the classification output is prepended to the sequence, which is then passed through a series of transformer layers. Each transformer layer consists of a multi-head self-attention mechanism, which allows the model to attend to different parts of the input patches, a multi-layer perceptron, and a layer normalization (Fig. 2). Finally, a classification head attached at the end of the transformer layers produces a probability distribution over the target classes. We use Scaling and Shifting (SSF) [30] to train the ViT model used in our pipeline (Fig. 2). SSF attempts to alleviate the distribution mismatch between the pre-training task and the downstream deepfake feature extraction task by modulating deep features. Specifically, during the fine-tuning phase, the original network parameters are frozen, and scaling and shifting parameters are introduced at each operation to learn a linear transformation of features, as shown in Fig. 2. Formally, given the output from layer \(i\) as \(x\in\Re^{(N^{2}+1)\times d}\), the output \(y\in\Re^{(N^{2}+1)\times d}\) (which is also the input to the next operation) is calculated by \[y=\gamma\cdot x+\beta \tag{1}\] where \(\gamma\in\Re^{d}\) and \(\beta\in\Re^{d}\) are the scale and shift parameters, respectively. As in the original work, we insert SSF parameters after each operation with a linear coefficient in the ViT. #### Iii-A2 Timeseries Transformer **Architecture and training.** The Timeseries Transformer is an adaptation of the sequence-to-sequence transformer in [52]. The architecture is designed to learn from and classify sequential data instead of generating another sequence. It is composed of multiple transformer blocks and an MLP (multilayer perceptron) head. Each transformer block has a multi-head attention mechanism and a feed-forward block, as shown in Figure 4(c). Our method generates frame-level predictions for the input videos, which may contain some noisy predictions. To address this issue, we propose a simple smoothing technique (Figure 3) based on Algorithm 1. The algorithm sets a minimum duration for fake segments, specified in terms of the number of frames. It processes each frame prediction by computing the majority vote from past frames on the left and future frames on the right to determine the final label for that frame. By smoothing out the noisy predictions, our approach enhances performance, as demonstrated in Table V. 
```
Require: \(\rho\), the list of predictions per frame
Require: \(k\geq 0\), the offset
for \(i\gets 0\dots len(\rho)\) do
  \(\rho_{left}\leftarrow\) sub-list of size \(k\) on left of \(\rho[i]\)
  \(\rho_{right}\leftarrow\) sub-list of size \(k\) on right of \(\rho[i]\)
  \(M_{left}\leftarrow\) majority-vote\((\rho_{left})\)
  \(M_{right}\leftarrow\) majority-vote\((\rho_{right})\)
  if \(\rho_{left}\) is empty and \(\rho[i]\neq M_{right}\) then
    \(\rho[i]\gets M_{right}\)
  else if \(\rho_{right}\) is empty and \(\rho[i]\neq M_{left}\) then
    \(\rho[i]\gets M_{left}\)
  else if \(M_{left}=M_{right}\) and \(\rho[i]\neq M_{left}\) then
    \(\rho[i]\gets M_{left}\)
  end if
end for
return \(\rho\)
```
**Algorithm 1** Smoothing noisy predictions. **Data processing.** The learned features from the vision transformer consist of a vector for each of the frames in the dataset. However, for the timeseries transformer we accumulated the frames sequentially for each video and split them into overlapping windows of size \(W\). That is, we have features of \(W\) sequential frames in one window. To construct the benchmark dataset, videos were selected from the sub-datasets randomly with the condition that the length of the videos must be above \(500\) frames. For videos with one fake segment, we have selected a random starting point in the first half of the video and a random choice from \(125\), \(150\), and \(175\) frames for the length of the fake segment. The ratio of fake frames and the length of the videos for each deepfake generation method in the dataset are presented in Table I. A similar strategy is used for videos with two fake segments. In the case of two fake segments in a video, the first fake segment starts at a random position within the first \(125\) frames, and the second fake segment starts at a random position within the first \(75\) frames of the second half of the video. The lengths of the fake segments are randomly chosen as in the case of one-segment videos. The information and code related to reproducing the dataset are published at: [https://github.com/sanjayasaha1311/temporal-deepfake-segmentation](https://github.com/sanjayasaha1311/temporal-deepfake-segmentation). #### Iii-B2 Evaluation: Intersection over Union (IoU) Intersection over Union (IoU) is proposed to evaluate the temporal segmentation map. This metric is most commonly used to evaluate the fit of object detection bounding boxes [42, 40]. 1-D variations of IoU have been adopted for time series segment analysis, which we will be utilizing. Let the ground truth map be \(GT_{map}\) and the predicted segmentation map be \(P_{map}\). Both will be 1-D vectors of equal length with a predicted Boolean class (\(R\) or \(F\)) for each frame in the video. \[GT_{map}=\{RRRRRRFFRR...\} \tag{2}\] \[P_{map}=\{RRRRRRFFRR...\} \tag{3}\] \[IoU=\frac{Intersection}{Union}=\frac{|GT_{map}\cap P_{map}|}{|GT_{map}\cup P_{map}|} \tag{4}\] **Observation:** \(|GT_{map}\cap P_{map}|\) is the count of correctly predicted frames, and \(|GT_{map}\cup P_{map}|\) is the count of correctly predicted frames plus twice the count of wrongly predicted frames. IoU falls in the range \([0,1]\), where the greater the value, the better the predicted segment map. While the theoretical lower bound of IoU is zero, in practice it is useful to understand how a random guessing algorithm will be scored. Let \(f\) be the ratio of Real frames in the \(GT_{map}\) and \(p\) be the probability at which a randomly predicted frame in \(P_{map}\) is classified as Real. The graph below shows the possible \(|GT_{map}\cap P_{map}|\) values (call it \(S\)).
[Graph: possible values of \(S=|GT_{map}\cap P_{map}|\) for a random-guessing predictor.]
with a decay rate of \(0.99992\) together with automatic mixed precision. The model was initialized with pretrained weights on ImageNet-21K-SSF. **Timeseries settings.** The dimension of the feature vector for each frame from the ViT was \(768\). After windowing these vectors as shown in Figure 2, the input dimension for the Time-series Transformer (TsT) was \((W,768)\). In our experiments, we used \(W=5\); however, it is also possible to use different values for \(W\). There are a total of 8 transformer blocks in the TsT, each with 8-headed attention. Each attention head's dimension is \(512\). After the transformer blocks, we have 1-dimensional global average pooling prior to the MLP head. We used categorical cross-entropy as the loss and Adam as the optimizer with a learning rate of \(1e^{-4}\) for training this model. We used an early stopping technique with patience \(10\) to make the training procedure faster, training the TsT for \(16\) epochs with a batch size of \(64\). ### _Temporal segmentation analysis_ We have used our proposed benchmark dataset to test our method on the temporal segmentation problem, where we classify deepfake videos at the frame level instead of the video level. The metrics we use to measure the performance for temporal segmentation are Intersection over Union (IoU) and Area under the ROC Curve (AUC). The baseline IoU (for random guessing the class of a frame) is \(1/3\) as shown in section III-B2.
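To make the frame-level evaluation concrete, below is a minimal Python sketch of the smoothing procedure of Algorithm 1 and the IoU of Eq. (4). This is our own illustrative code rather than the authors' released implementation (see their repository above); the function names and the encoding of labels as 0 (real) / 1 (fake) are assumptions.
```
from collections import Counter
from typing import List


def majority_vote(labels: List[int]) -> int:
    """Return the most common label in a non-empty list."""
    return Counter(labels).most_common(1)[0][0]


def smooth_predictions(preds: List[int], k: int) -> List[int]:
    """Smooth noisy per-frame predictions, following the logic of Algorithm 1:
    relabel a frame when its left/right neighbourhoods of size k agree on a
    different label (or when one side is empty)."""
    rho = list(preds)
    for i in range(len(rho)):
        rho_left = rho[max(0, i - k):i]
        rho_right = rho[i + 1:i + 1 + k]
        m_left = majority_vote(rho_left) if rho_left else None
        m_right = majority_vote(rho_right) if rho_right else None
        if not rho_left and m_right is not None and rho[i] != m_right:
            rho[i] = m_right
        elif not rho_right and m_left is not None and rho[i] != m_left:
            rho[i] = m_left
        elif m_left is not None and m_left == m_right and rho[i] != m_left:
            rho[i] = m_left
    return rho


def temporal_iou(gt_map: List[int], p_map: List[int]) -> float:
    """Frame-level IoU of Eq. (4): correct frames / (correct + 2 * wrong frames)."""
    correct = sum(g == p for g, p in zip(gt_map, p_map))
    wrong = len(gt_map) - correct
    return correct / (correct + 2 * wrong)


# Example: 0 = real frame, 1 = fake frame.
gt = [0] * 6 + [1] * 4
pred = [0, 0, 1, 0, 0, 0, 1, 1, 1, 0]
print(temporal_iou(gt, smooth_predictions(pred, k=2)))  # 1.0 after smoothing
```
For uniformly random guesses (\(p=0.5\)), roughly half of the frames are correct in expectation, giving IoU of approximately \(0.5/(0.5+2\cdot 0.5)=1/3\), consistent with the baseline quoted above.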
We have trained six separate models on six training sets: FaceForensics++ (FF++) and the five sub-datasets within FF++ i.e. Deepfakes (DF), Face-Shifter (FSh), Face2Face (F2F), Neural Textures (NT) and FaceSwap (FS). Similarly, we report the results for each model on the six test sets (FF++ and its five sub-datasets) in Table II. As seen, each model does very well when it was tested on the test-set from the same dataset as it was trained on, hence the results on the diagonals are either the best or the second-best in every column while the second-best results are only lower in the range from \(0.001\) to \(0.009\). As expected, the model trained on the whole FF++ is the overall best-performing model. However, the results from the other five models give us some important findings. Out of the five sub-datasets in FF++, three were made with a face swapping technique (DF, FSH and FS) and the other two were made with face re-enactment (F2F and NT). We can notice that models trained on F2F and NT perform better than the other three models. Since face re-enactment deepfakes are devoid of strong artifacts compared to face swapping deepfakes, models trained on face re-enactment methods tend to generalize well to other methods. Similarly, models trained on the face swapping methods do not generalize well to face re-enactment test data as seen on the 'F2F' and 'NT' columns in Table II. Overall, out of the five sub-datasets of FF++, NT is the best one to train a detection model if others are unavailable. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{**DF**} & \multicolumn{3}{c|}{**FSh**} & \multicolumn{3}{c|}{**F2F**} & \multicolumn{3}{c|}{**NT**} & \multicolumn{3}{c|}{**FS**} & \multicolumn{3}{c}{**FF++**} \\ \cline{2-19} & One seg & Two seg & One seg & Two seg & One seg & Two seg & One seg & Two seg & One seg & Two seg & One seg & Two seg & One seg & Two seg \\ \cline{2-19} & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC & IoU & AUC \\ \hline \hline **DF** & **0.987** & _0.986_ & **0.975** & **0.984** & 0.926 & 0.917 & 0.886 & 0.923 & 0.963 & 0.967 & 0.937 & 0.959 & 0.748 & 0.729 & 0.603 & 0.723 & 0.954 & 0.956 & 0.920 & 0.954 & 0.915 & 0.912 & 0.860 & 0.910 \\ **FSh** & 0.957 & 0.963 & 0.933 & 0.958 & _0.972_ & _0.984_ & _0.961_ & _0.980_ & 0.971 & 0.969 & 0.947 & 0.966 & 0.764 & 0.748 & 0.628 & 0.745 & 0.966 & 0.968 & 0.938 & 0.965 & 0.926 & 0.928 & 0.878 & 0.924 \\ **F2F** & 0.970 & 0.976 & 0.959 & 0.976 & 0.971 & 0.978 & 0.954 & 0.972 & **0.982** & _0.986_ & **0.974** & **0.984** & 0.840 & 0.836 & 0.739 & 0.832 & **0.983** & 0.985 & _0.966 & _0.981_ & 0.950 & 0.953 & 0.918 & 0.950 \\ **NT** & 0.971 & 0.985 & 0.961 & 0.980 & 0.969 & 0.983 & 0.958 & 0.979 & 0.963 & 0.980 & 0.955 & 0.977 & _0.949_ & **0.974** & _0.933_ & _0.966_ & 0.946 & 0.970 & 0.932 & 0.965 & _0.960_ & _0.979_ & _0.949_ & _0.974_ \\ **FS** & 0.941 & 0.935 & 0.898 & 0.932 & 0.963 & 0.962 & 0.931 & 0.955 & 0.970 & 0.970 & 0.950 & 0.969 & 0.679 & 0.679 & 0.514 & 0.642 & _0.981_ & _0.987_ & **0.967** & **0.983** & 0.904 & 0.901 & 0.842 & 0.898 \\ \hline **FF++** & _0.974_ & **0.988** & _0.962_ & 0.982 & **0.974** & **0.989** & **0.962** & **0.982** & _0.975_ & **0.988** & _0.965_ & _0.983_ & **0.959** & _0.972_ & **0.938** & **0.967** & 0.975 & **0.988** & 0.955 & 0.978 & **0.971** & **0.985** & **0.957** & **0.979** \\ \hline \end{tabular} \end{table} TABLE II: Results for temporal segmentation on the proposed benchmark 
temporal deepfake dataset. Each row indicates a model trained on a specific training sub-dataset; we have trained models with FaceForensics++ (FF++) and the five sub-datasets within FF++ i.e. Deepfakes (DF), Face-Shifter (FSh), Face2Face (F2F), Neural Textures (NT) and FaceSwap (FS). The last row presents the results for the model trained on the full FF++ dataset. The columns represent the data we have tested our models on; similar to training we have tested the models on all the five sub-datasets of FF++ and the whole FF++ (i.e. last four columns). For each test data, we used the one-fake-segment and two-fake-segments test data from our proposed benchmark dataset (see section III-B). We report IoU and AUC metrics with the best value in a column is in **bold** and the second-best is in _italic_. \begin{table} \begin{tabular}{c|c c c c c c|c c c c} \hline & **DF** & **FSh** & **F2F** & **NT** & **FS** & **FF++** & **C-DF** & **DFDC** & **WDF** \\ \hline \hline **DF** & **0.993** & 0.965 & 0.975 & 0.830 & 0.967 & 0.946 & 0.590 & 0.558 & 0.631 \\ **FSh** & 0.980 & _0.990_ & 0.978 & 0.848 & 0.980 & 0.955 & 0.609 & 0.535 & 0.619 \\ **F2F** & _0.990_ & **0.993** & **0.990** & 0.917 & _0.993_ & _0.977_ & 0.670 & 0.603 & 0.676 \\ **NT** & 0.973 & 0.967 & 0.967 & _0.965_ & 0.960 & 0.967 & _0.715_ & _0.626_ & _0.693_ \\ **FS** & 0.960 & 0.975 & 0.982 & 0.770 & **0.995** & 0.936 & 0.584 & 0.512 & 0.611 \\ \hline **FF++** & 0.985 & 0.983 & _0.983_ & **0.967** & 0.983 & **0.982** & **0.790** & **0.667** & **0.703** \\ \hline \end{tabular} \end{table} TABLE III: Results (in AUC) for video level classification. Similar to Table II each row represent results from models trained on specific training data. The columns constitute the test data. Along with FF++ and the sub-datasets of FF++ we have tested each model on other datasets such as CelebDF (C-DF), DFDC, and WildDeepFakes (WDF). The best value in a column is in **bold** and the second-best is in _italic_. ### _Video-level classification and Generalizability_ The classical approach to deepfake detection has always been to predict the class (real or fake) of a deepfake video i.e. to make video-level predictions. We run experiments on the test data from the original datasets and report the results in Table III using AUC as the metric. We also measure our models' performance on test data from datasets outside of FF++: CelebDF (C-DF), DFDC, and WildDeepFakes (WDF). Results for the sub-datasets of FF++ (DF, FSh, FZF, NT and FS) follow the results of the temporal analysis where the diagonal values are the best or the second-best in a column i.e. models when tested on data from the same sub-dataset generally perform very well. And, similar to the previous results (i.e. temporal segmentation) we see that models trained on face re-enactment data (F2F and NT) perform better than other sub-dataset-models when tested on unseen data i.e. these two models generalize well in comparison with other models. However, we see the best results from tests on CelebDF, DFDC, and WildDeepFakes from the model trained on the full FF++ dataset. It is also noticeable that AUC scores for DFDC and WildDeepFakes are lower compared to CelebDF. While CelebDF is a dataset with videos made solely by the face swapping technique, DFDC is a combination of multiple methods such as face swapping (Deepfake Autoencoder and Morphable-mask), Neural talking-heads and GAN-based methods. 
The WildDeepFakes dataset contains videos collected from the Internet, which may have been generated using a variety of methods. Some of these methods are entirely unseen due to their absence in the FF++ dataset. We further evaluate and compare our results with state-of-the-art methods in Table IV using the model trained on FF++ and tested on FF++ and CelebDF. Our method performs very competitively with the most recent state-of-the-art deepfake detection methods [5, 55] and outperforms most of the methods on the video-level predictions in both same-dataset and cross-dataset scenarios. Our method comprehensively exceeds the performance of several state-of-the-art methods such as Xception [44], F3Net [39], SMIL [27], Two-branch [33], SRM [32], SPSL [31] and MADD [55] in generalizability (i.e. cross-dataset/CelebDF). Also, our method does competitively well on the FF++ data, falling slightly behind only Xception [44], MADD [55] and SLADD [5] by \(0.016\), \(0.015\) and \(0.002\), respectively. These results demonstrate that our method can also be used with high confidence for traditional deepfake detection and for unseen data (i.e. video-level) alongside temporal segmentation, despite not being optimized for this objective. ### _Varying Lengths of Fake Segments_ Our proposed method is effective in identifying even short deepfake segments that can significantly alter the message conveyed by a video. To evaluate the performance of our method, we conducted experiments on a test set comprising \(100\) videos with varying lengths of fake segments, and the results are presented in Figure 5. Specifically, we created fake segments with durations ranging from \(0.2\) seconds to \(19\) seconds, with an increment of \(0.2\) seconds, and calculated the average IoU and AUC over the 100 videos. The increment of \(0.2\) seconds is the average duration of two phonemes in English [16], which we assume to be the unit duration for a fake segment. Notably, we did not use the smoothing of noisy frames (Algorithm 1) in this experiment. Our method achieves high accuracy in detecting very short fake segments with a duration of less than \(1.0\) second, yielding an AUC value of over \(0.91\). Moreover, as the length of the fake Fig. 5: Performance (IoU and AUC) of the proposed approach across different lengths of deepfake segments. Our approach is largely robust to variation in the length of the injected deepfake segment. \begin{table} \begin{tabular}{c c c} \hline **Method** & **CelebDF** & **FF++** \\ \hline \hline **Xception[44]** & 0.653 & 0.997 \\ **Face X-ray[26]** & 0.660 & 0.985 \\ **F3Net[39]** & 0.661 & 0.893 \\ **SMIL[27]** & 0.563 & 0.932 \\ **Two-branch[33]** & 0.734 & 0.932 \\ **SRM[32]** & 0.659 & 0.969 \\ **SPSL[31]** & 0.724 & 0.969 \\ **MADD[55]** & 0.674 & **0.998** \\ **SLADD[5]** & **0.797** & _0.984_ \\ \hline **Ours** & _0.790_ & 0.982 \\ \hline \end{tabular} \end{table} TABLE IV: Comparison with other state-of-the-art methods in terms of AUC. The models were trained on FF++ and were evaluated on CelebDF and FF++. This comparison is for video-level classification only, since existing works do not perform temporal segmentation. Most results from other methods were taken from their own papers and the rest were taken from [5]. Our method achieves competitive performance with state-of-the-art methods on generalizable deepfake detection. segment increases, our method performs even better in terms of both AUC and IoU.
This experiment provides evidence that our proposed method can identify even the slightest alterations in very short fake segments, highlighting its effectiveness in detecting deepfake videos. ### _Ablation study_ We conducted experiments to evaluate the effectiveness of our proposed method without the TsT and the smoothing algorithm (Algorithm 1), for both temporal segmentation and video-level detection. The model was trained on the full FF++ training data and tested on the proposed temporal segmentation benchmark dataset and the FF++ test set for temporal segmentation and video-level detection, respectively. We used an MLP head on the ViT to classify frames for the experiment where the TsT was not included. Our ablated model achieved strong results on both test sets, which are reported in Table V. To provide a better comparison, we also reported the results from the full model with the TsT and smoothing algorithm. While we observe that the ViT already performs very well, a significant improvement can be seen in both temporal and video-level performance with the inclusion of the TsT and the smoothing algorithm. ## V Discussion While many methods tackle deepfake detection at the video level, we propose a method that can generate results at the frame, segment, and whole-video levels. This allows for maximal flexibility in analysing content for the presence of deepfakes and additionally provides comparison points for future research along these related but separate evaluation protocols. Our method is based on supervised pretraining of the image encoder, which limits computational requirements in two ways. Firstly, the image encoder is trained independently on individual video frames with frame-level supervision, nullifying the need for learning temporal relationships between frames. Secondly, a large part of the backbone is frozen and initialized using readily available weights from ImageNet, considerably reducing the computational cost of obtaining a deepfake-related representation in the encoder. The proposed method achieves robust IoU metrics across the proposed single- and multi-segment analysis of deepfakes, while maintaining competitive performance on video-level deepfake detection and generalized deepfake detection.
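As a companion to the discussion of the frozen backbone, here is a minimal sketch of the scale-and-shift modulation of Eq. (1) and of freezing everything except the SSF parameters. It is our own illustrative PyTorch code, not the authors' implementation; the module and parameter names and the name-based freezing heuristic are assumptions.
```
import torch
import torch.nn as nn


class SSF(nn.Module):
    """Scale-and-shift modulation of Eq. (1): y = gamma * x + beta,
    with per-channel parameters gamma, beta of size d."""

    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcasts over the (N^2 + 1) token dimension of a ViT feature map.
        return x * self.gamma + self.beta


def freeze_backbone_except_ssf(model: nn.Module) -> None:
    """Freeze pretrained weights; keep SSF parameters and the head trainable
    (assumes SSF modules and the classification head are named accordingly)."""
    for name, p in model.named_parameters():
        p.requires_grad = ("ssf" in name.lower()) or ("head" in name.lower())


# Usage sketch on a batch of ViT token features of width d = 768.
ssf = SSF(dim=768)
tokens = torch.randn(2, 197, 768)  # (batch, N^2 + 1 tokens, d)
modulated = ssf(tokens)
```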
2308.07686
Boosting Multi-modal Model Performance with Adaptive Gradient Modulation
While the field of multi-modal learning keeps growing fast, the deficiency of the standard joint training paradigm has become clear through recent studies. They attribute the sub-optimal performance of the jointly trained model to the modality competition phenomenon. Existing works attempt to improve the jointly trained model by modulating the training process. Despite their effectiveness, those methods can only apply to late fusion models. More importantly, the mechanism of the modality competition remains unexplored. In this paper, we first propose an adaptive gradient modulation method that can boost the performance of multi-modal models with various fusion strategies. Extensive experiments show that our method surpasses all existing modulation methods. Furthermore, to have a quantitative understanding of the modality competition and the mechanism behind the effectiveness of our modulation method, we introduce a novel metric to measure the competition strength. This metric is built on the mono-modal concept, a function that is designed to represent the competition-less state of a modality. Through systematic investigation, our results confirm the intuition that the modulation encourages the model to rely on the more informative modality. In addition, we find that the jointly trained model typically has a preferred modality on which the competition is weaker than other modalities. However, this preferred modality need not dominate others. Our code will be available at https://github.com/lihong2303/AGM_ICCV2023.
Hong Li, Xingyu Li, Pengbo Hu, Yinuo Lei, Chunxiao Li, Yi Zhou
2023-08-15T10:37:03Z
http://arxiv.org/abs/2308.07686v1
# Boosting Multi-modal Model Performance with Adaptive Gradient Modulation ###### Abstract While the field of multi-modal learning keeps growing fast, the deficiency of the standard joint training paradigm has become clear through recent studies. They attribute the sub-optimal performance of the jointly trained model to the modality competition phenomenon. Existing works attempt to improve the jointly trained model by modulating the training process. Despite their effectiveness, those methods can only apply to late fusion models. More importantly, the mechanism of the modality competition remains unexplored. In this paper, we first propose an adaptive gradient modulation method that can boost the performance of multi-modal models with various fusion strategies. Extensive experiments show that our method surpasses all existing modulation methods. Furthermore, to have a quantitative understanding of the modality competition and the mechanism behind the effectiveness of our modulation method, we introduce a novel metric to measure the competition strength. This metric is built on the mono-modal concept, a function that is designed to represent the competition-less state of a modality. Through systematic investigation, our results confirm the intuition that the modulation encourages the model to rely on the more informative modality. In addition, we find that the jointly trained model typically has a preferred modality on which the competition is weaker than other modalities. However, this preferred modality need not dominate others. Our code will be available at [https://github.com/lihong2303/AGM_ICCV2023](https://github.com/lihong2303/AGM_ICCV2023). Machine Learning, ICML ## 1 Introduction Recent years have seen tremendous progress in deep multi-modal learning. Despite these advances, integrating information from multiple modalities remains challenging. Many efforts have been made to design sophisticated fusion methods for better performance. However, adding additional modalities only slightly improves accuracy in some multi-modal tasks. For example, trained on the CMU-MOSEI (Delbrouck et al., 2020) dataset, the accuracy of the text-based single-modal model is only about \(1\%\) point lower than that of the multi-modal model based on both text and audio modalities. Similar phenomena have also been observed across a wide variety of multi-modal datasets (Vielzeuf et al., 2018; Cao et al., 2014). Such inefficiency in exploiting and integrating information from multiple modalities presents a great challenge to the multi-modal learning field. It is commonly believed that this inefficiency is a consequence of the existence of the dominant modality, which prevents the model from fully exploiting the other relatively weak modalities (Ma et al., 2022; Hu et al., 2022). Recent studies (Allen-Zhu and Li, 2020; Huang et al., 2022; Han et al., 2022) theoretically investigate the training process of late fusion models and explain the production of the dominant modality with the concept of modality competition. In addition to the theoretical studies, there is a group of empirical works that attempts to develop methods to modulate the training of a multi-modal model and balance the learning of different modalities and, thus, achieve better performance. To our best knowledge, existing modulation methods are confined to late fusion models which greatly limits their application. More importantly, little effort has been paid to the study of the mechanism behind the effectiveness of those modulation methods. 
It is natural to ask _Can we design a modulation method that applies to more complex fusion strategies?_ and _Is it possible to understand the working mechanism of modulation in terms of modality competition?_ To this end, we propose an adaptive gradient modulation method, which utilizes a Shapley value-based attribution technique, that can in principle apply to any fusion strategy. Our approach achieves better performance compared with the current modulation methods. Moreover, we introduce the mono-modal concept to represent the competition-less state of a modality and build a metric on top of it to directly measure the competition strength of a modality in a multi-modal model. This novel metric lay the base for us to quantitatively study the behavior of modality competition and the working mechanism of our adaptive gradient modulation method. Our main contributions are three-fold: 1. We propose an adaptive gradient modulation method that can boost the performance of multi-modal models with various fusion strategies and justify its effectiveness through extensive experiments. 2. We introduce the mono-modal concept to capture the competition-less state of a modality and build a novel metric to measure the modality competition strength. 3. We systematically analyze the behavior of modality competition and study the mechanism of how our modulation method works. ## 2 Related work ### Multi-modal learning Multi-modal learning is a fast-growing research area. It addresses the needs of effectively processing multi-sensory data in real-world tasks and has applications in various fields, such as multi-modal sentiment classification (Zadeh et al., 2018; Cao et al., 2014), audio-visual localization (Tian et al., 2018) and visual question answering (Antol et al., 2015; Ilievski and Feng, 2017; Wu et al., 2021). According to the fusion strategy, one distinguishes three types (Baltrusaitis et al.), i.e., the late fusion, the early fusion, and the hybrid fusion, depending on when the fusion happens at the output stage, at the input stage, and in a complex manner, respectively. From another perspective, existing models can be divided into two categories, either jointly training different modalities in an end-to-end fashion or exploiting pre-trained models and building a multi-stage pipeline. In this paper, we focus on the multi-modal joint training models for the multi-modal classification task, and we will compare models with different fusion strategies. ### Modality-specific modulation Recent studies (Wang et al., 2020; Huang et al., 2022) reveal the deficiency of the multi-modal joint training paradigm that information on the input modalities is often under-exploited. To address this deficiency, existing works commonly propose to intervene in the training process. Geng et al. (2021) propose to obtain noise-free multi-view representations with the help of uncertainty in Dynamic Uncertainty-Aware Networks. Wang et al. (2020) devise the Gradient-blending technique which addresses the overfitting in a multi-modal model by optimally blending modalities. Wu et al. (2022) propose to balance the speed of learning from different modalities based on their conditional utilization rates. Fujimori et al. (2020) emphasize the heterogeneity of different network branches in joint training Figure 1: Schematic diagram of the adaptive gradient modulation (AGM) method. 
Firstly, based on the full input and corresponding muted inputs, the Shapley module produces mono-modal outputs \(\phi^{m}\), which disentangle the responses of the multi-modal model to individual modalities. Next, \(\phi^{m}\) are used to compute the mono-modal cross-entropy \(s^{m}\) that reflects the amount of information in modality \(m\). At last, \(s^{m}\) and their running average \(\bar{s}^{m}\) are fed to the Discrepancy Ratio module to compute the modulation coefficients \(\kappa^{m}\) for each modality, which in turn modulate the strength of corresponding gradient signals during back-propagation. and propose to avoid overfitting through modality-specific early stopping. Yao and Mihalcea (2022) advocates using modality-specific learning rates for different branches in a multi-modal model to fully explore the capacity of the corresponding network architecture. More recently, Peng et al. (2022) proposes to adjust the gradients of individual modalities based on their output magnitudes. The assumption is that in an ideal multi-modal model, the outputs of individual modalities should be balanced, i.e., having similar magnitudes. Consequently, the gradient of the modality with larger outputs will be modulated on-the-fly towards a lower magnitude during each training iteration. Despite the effectiveness of the above-mentioned methods, they are all confined to late fusion models, limiting their practical use. More importantly, the mechanism of why those methods work to improve the multi-modal model remains unexplored. ### Mono-modal behavior One way to investigate the mechanism underlying a multi-modal model is to quantify how much modalities affect each other in the model. In a recent theoretical analysis, Huang et al. (2022) term this interaction among modalities as the modality competition. Due to the complexity and non-linearity of neural network models, it is infeasible to isolate a part of the computations that account for the competition. Existing works instead attempt to measure the mono-modal behavior inside a multi-modal model, which can partly reflect the interactions among modalities. Hessel and Lee (2020) design the empirical multimodally-additive function projection (EMAP) that implicitly reflects the mono-modal behavior by averaging out all other modalities. Yao and Mihalcea (2022) employ the layer conductance (Shrikumar et al., 2018) to evaluate the importance of individual modalities in late fusion models. Gat et al. (2021) propose the perceptual scores to measure the mono-modal importance directly. The key idea of their method is the input permutation, which removes the influence of modalities other than the targeting one. What is most related to the goal of measuring the modality competition is the recently proposed SHAPE scores (Hu et al., 2022). The authors devise a way to compute the cross-modal cooperation strength based on the Shapley values. It is worth noting that all the above-mentioned methods are self-oriented in the sense that they only utilized the multi-modal model, where competition already presents. The lack of information about how each modality behaves without competition prevents those models from faithfully reflecting the modality competition strength. 
## 3 Method ### Adaptive gradient modulation Drawing inspiration from the Shapley value-based attribution method (Hu et al., 2022) and the On-the-fly gradient modulation generalization enhancement (OGM-GE) algorithm (Peng et al., 2022), we propose an adaptive gradient modulation (AGM) method that modulates the level of participation of individual modalities. Figure 1 presents the illustration of the proposed AGM. Our approach is in line with the OGM-GE algorithm in the sense that both attempt to balance the mono-modal responses in a multi-modal model. Nonetheless, our approach differs from OGM-GE in the following three important aspects: 1) We adopt a Shapley value-related method to compute the mono-modal responses. In this way, our approach applies to complex fusion strategies rather than being limited to the late fusion case. 2) We extend the method to calculate the discrepancy ratios so that our approach can deal with situations with more than two modalities. 3) In our approach, the discrepancy ratios are modulated towards their running average rather than 1, reflecting the distinctions among different modalities. #### 3.1.1 Isolating the mono-modal responses The core component of our approach is the algorithm to isolate the mono-modal responses, which enables us to further compute the mono-modal cross entropy and the mono-modal accuracy. Let \(\phi(x),x=(x^{m_{1}},\dots,x^{m_{k}})\) be a multi-modal model on the data with \(k\) modalities and \(\mathcal{M}:=\{m_{i}\}_{i\in[k]}\) be the set of all modalities. As in (Hu et al., 2022), we use zero-padding \(0^{m}\) to represent the absence of features of modality \(m\). When \(S\) is a subset of \(\mathcal{M}\), \(\phi(S)\) denotes the model output in which, for every modality \(m\notin S\), the component \(x^{m}\) is substituted with \(0^{m}\). Then the mono-modal response for \(m\) is defined as \[\phi^{m}(x)=\sum_{S\subseteq\mathcal{M}\setminus\{m\};S\neq\emptyset}\frac{|S|!(k-|S|-1)!}{k!}V_{m}(S;\phi), \tag{1}\] where \(V_{m}(S;\phi)=\phi(S\cup\{m\})-\phi(S)\). Note that we exclude the empty subset from the above summation. In this way, we ensure the relation \[\phi(x)=\sum_{m}\phi^{m}(x). \tag{2}\] As an example, for the two-modality case Equation (1) simplifies to \[\phi^{m_{1}}(x)=\frac{1}{2}\left[\phi(\{m_{1},m_{2}\})-\phi(\{0^{m_{1}},m_{2}\})+\phi(\{m_{1},0^{m_{2}}\})\right]. \tag{3}\] The mono-modal cross entropy and mono-modal accuracy are then defined as \[s^{m}=\mathbb{E}_{x\sim\mathcal{D}}\left[-\log\left(\text{Softmax}(\phi^{m}(x))_{y}\right)\right], \tag{4}\] and \[Acc_{m}=\mathbb{E}_{x\sim\mathcal{D}}\left[\mathbb{1}_{y=y_{p}(x)}\right], \tag{5}\] where \(y\) is the ground-truth class of \(x\) and \(y_{p}\) is the model prediction, \(y_{p}(x)=\arg\max_{y^{\prime}\in[K]}\phi_{y^{\prime}}^{m}(x)\). #### 3.1.2 Modulating the training process We modulate the level of participation of individual modalities by adjusting the intensity of the back-propagation signal of each modality, \[\theta_{t+1}=\theta_{t}-\eta\frac{\partial\mathcal{L}}{\partial\phi}\cdot\sum_{m}\kappa_{t}^{m}\frac{\partial\phi^{m}}{\partial\theta}\bigg|_{t}, \tag{6}\] where \(t\) refers to a specific iteration of training, \(\theta\) denotes the trainable network parameters, \(\eta\) is the learning rate, and \(\mathcal{L}\) is the loss function. The coefficient \(\kappa_{t}^{m}\) controls the magnitude of the update signal for modality \(m\) at iteration \(t\). Intuitively, if a modality is too strong (weak) we want to suppress (amplify) its update signal.
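Before specifying the coefficients \(\kappa_{t}^{m}\), here is a minimal PyTorch-style sketch of the two-modality mono-modal responses of Eq. (3) and the mono-modal cross-entropy of Eq. (4). It is our own illustrative code, not the authors' released implementation; `model` is assumed to take the two modality tensors as separate arguments and return class logits.
```
import torch
import torch.nn.functional as F


def mono_modal_outputs(model, x1, x2):
    """Two-modality case of Eq. (3): split the logits phi(x) into the
    per-modality responses phi^{m1}, phi^{m2} using zero-padded inputs."""
    z1, z2 = torch.zeros_like(x1), torch.zeros_like(x2)
    full = model(x1, x2)      # phi({m1, m2})
    only_m2 = model(z1, x2)   # phi({0^{m1}, m2})
    only_m1 = model(x1, z2)   # phi({m1, 0^{m2}})
    phi_m1 = 0.5 * (full - only_m2 + only_m1)
    phi_m2 = 0.5 * (full - only_m1 + only_m2)
    # By construction, phi_m1 + phi_m2 equals the full output (Eq. (2)).
    return phi_m1, phi_m2


def mono_modal_cross_entropy(phi_m, y):
    """Eq. (4): cross-entropy of the softmaxed mono-modal response."""
    return F.cross_entropy(phi_m, y)
```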
The strength of a modality is measured by the averaged differences relative to the other modalities, \[r_{t}^{m}=\exp\left(\frac{1}{K-1}\sum_{m^{\prime}\in[K];m^{\prime}\neq m}(s_{t}^{m}-s_{t}^{m^{\prime}})\right). \tag{7}\] We choose to compare different modalities based on their mono-modal cross-entropy, since \(s_{t}^{m}\) reflects the amount of information attributed to modality \(m\) within the full model outputs. Then \(\kappa_{t}^{m}\) is defined as follows: \[\kappa_{t}^{m}=\exp\left(-\alpha(r_{t}^{m}-\tau_{t}^{m})\right), \tag{8}\] where \(\alpha>0\) is a hyper-parameter that controls the degree of modulation and \(\tau_{t}^{m}\) is the reference for modulation. Consequently, when a modality is too strong (\(r_{t}^{m}>\tau_{t}^{m}\)), we lower its update signal (\(\kappa_{t}^{m}<1\)). In the current implementation, we choose \(\tau_{t}^{m}\) to be \[\tau_{t}^{m}=\exp\left(\frac{1}{K-1}\sum_{m^{\prime}\in[K];m^{\prime}\neq m}\left(\hat{s}^{m}(t)-\hat{s}^{m^{\prime}}(t)\right)\right), \tag{9}\] where \(\hat{s}^{m}(t)\) denotes the running average of the mono-modal cross-entropy at iteration \(t\), \[\hat{s}^{m}(t)=\hat{s}^{m}(t-1)\cdot\frac{t-1}{t}+\frac{s_{t}^{m}}{t}. \tag{10}\] The above steps are summarized in Algorithm 1 below.
```
1: Training dataset \(\mathcal{D}=\{(x^{m_{1}},x^{m_{2}},\dots,x^{m_{k}}),y_{i}\}\), iteration number \(T\), logits output of a modality \(o_{t}^{m}\), model logits output \(o_{t}\), softmax output of a modality \(p_{t}^{m}\), batch size \(N\), mono-modal information \(s_{t}^{m}\), batch information discrepancy \(r_{t}^{m}\), running average information discrepancy \(\tau_{t}^{m}\), modulation coefficient \(\kappa_{t}^{m}\), \(m\in\{m_{1},m_{2},...,m_{k}\}\).
2: \(\hat{s}^{m}=0\).
3: for t=1,2,...,T do
4:   \(o_{t}^{m_{1}},o_{t}^{m_{2}},...,o_{t}^{m_{k}},o_{t}=\text{net}(x^{m_{1}},x^{m_{2}},...,x^{m_{k}})\)
5:   \(p_{t}^{m}=\text{Softmax}(o_{t}^{m})\)
6:   \(s_{t}^{m}=-\frac{1}{N}\sum_{i=1}^{N}\log p_{t}^{m}[i][y_{i}]\)
7:   \(\overline{s}_{t}=\frac{s_{t}^{m_{1}}+s_{t}^{m_{2}}+\dots+s_{t}^{m_{k}}}{k}\), \(\overline{\hat{s}}_{t}=\frac{\hat{s}^{m_{1}}+\hat{s}^{m_{2}}+\dots+\hat{s}^{m_{k}}}{k}\)
8:   \(r_{t}^{m}=e^{(s_{t}^{m}-\overline{s}_{t})\cdot\frac{k}{k-1}}\), \(\tau_{t}^{m}=e^{(\hat{s}^{m}-\overline{\hat{s}}_{t})\cdot\frac{k}{k-1}}\)
9:   \(\kappa_{t}^{m}=e^{-\alpha(r_{t}^{m}-\tau_{t}^{m})}\)
10:  \(\hat{s}^{m}=\frac{\hat{s}^{m}\cdot t}{t+1}+\frac{s_{t}^{m}}{t+1}\)
11:  Update using \(\theta_{t+1}=\theta_{t}-\eta\frac{\partial\mathcal{L}}{\partial\phi}\cdot\sum_{m}\kappa_{t}^{m}\frac{\partial\phi^{m}}{\partial\theta}\bigg|_{t}\)
12: endfor
```
**Algorithm 1** Adaptive Gradient Modulation ### Mono-modal competition strength The empirical study (Wang et al., 2020) demonstrates that multi-modal joint training can lead to suboptimal performance that is even worse than that of the mono-modal model. Recently, Huang et al. (2022) theoretically study this phenomenon in a simplified setting and attribute it to the modality competition mechanism, whereby the representation learning of a modality is generally affected by the presence of other modalities. The authors further suggest that modality competition potentially explains the effectiveness of the adaptive learning methods (Wang et al., 2020; Peng et al., 2022), which are designed to improve the performance of joint training. However, the above-mentioned studies are all confined to late fusion cases.
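For completeness, the following small sketch computes the modulation coefficients of Eqs. (7)–(10) above from the mono-modal cross-entropies. It is our own illustrative code (the dictionary interface and the example numbers are ours), and it assumes at least two modalities.
```
import math


def agm_coefficients(s, s_hat, alpha=1.0):
    """Per-modality coefficients kappa from Eqs. (7)-(9).
    s     : current mono-modal cross-entropies {modality: value}
    s_hat : running-average cross-entropies    {modality: value}"""
    k = len(s)
    s_bar = sum(s.values()) / k
    s_hat_bar = sum(s_hat.values()) / k
    kappa = {}
    for m in s:
        r = math.exp((s[m] - s_bar) * k / (k - 1))            # Eq. (7)
        tau = math.exp((s_hat[m] - s_hat_bar) * k / (k - 1))  # Eq. (9)
        kappa[m] = math.exp(-alpha * (r - tau))               # Eq. (8)
    return kappa


def update_running_average(s, s_hat, t):
    """Eq. (10): running average of the mono-modal cross-entropy (t >= 1)."""
    return {m: s_hat[m] * (t - 1) / t + s[m] / t for m in s}


# Example: audio's discrepancy ratio exceeds its reference, so its update
# signal is suppressed (kappa < 1), while visual is amplified (kappa > 1).
s = {"audio": 1.6, "visual": 0.9}
s_hat = {"audio": 1.2, "visual": 1.1}
print(agm_coefficients(s, s_hat, alpha=1.0))
```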
It remains unexplored whether the modality competition mechanism can generalize to other fusion strategies and how it alters the representation learning in realistic multi-modal models. This leads to an urgent need for methods that directly measure competition strength. To quantify modality competition, one must specify the competition-less state for each modality. Previous attribution methods (Hessel and Lee, 2020; Yao and Mihalcea, 2022; Gat et al., 2021; Hu et al., 2022) only utilize the responses of the underlying multi-modal model, where the competition has already taken place, and are hence in principle incapable of reflecting modality competition. To address this challenge, we introduce the mono-modal concept, which defines how the corresponding modality in a given multi-modal model would behave in the absence of modality competition. The competition strength is then estimated based on the deviation of the multi-modal model outputs with respect to this mono-modal concept. #### 3.2.1 Mono-modal concept Let \(x=(x^{m_{1}},x^{m_{2}})\) denote a multi-modal input feature, where \(x^{m_{1}}\) and \(x^{m_{2}}\) refer to the mono-modal components. We focus on the two-modality case below; the extension to more modalities is straightforward. The processing of \(x^{m_{1}}\) by a multi-modal model is determined by the complementary component \(x^{m_{2}}\), the network architecture \(\phi\), the training settings \(\mathcal{T}\), and the dataset \(\mathcal{D}\). We call this quadruple \(\mathcal{E}_{m_{1}}:=(x^{m_{2}},\phi,\mathcal{T},\mathcal{D})\) the environment of the mono-modal input \(x^{m_{1}}\). Roughly speaking, in the competition-less state we want to remove the effects of \(x^{m_{2}}\) while retaining the "normal" processing of \(x^{m_{1}}\). This can be formally denoted as \(\mathcal{E}_{m_{1}}/m_{2}\). Footnote 1: we abuse the symbol \(\phi\) a little so that it may refer to both the network architecture and the corresponding network function. With the above notation, we abstract the competition-less state for \(m_{1}\) as a function \(\mathcal{C}^{m_{1}}(x^{m_{1}};\mathcal{E}_{m_{1}}/m_{2})\) that maps the inputs to vectors in \(\mathbb{R}^{K}\), where \(K\) is the number of classes. Intuitively, \(\mathcal{C}^{m_{1}}\) captures the responses of a given multi-modal model to the mono-modal inputs without modality competition. Following the terminology in (McGrath et al., 2022), \(\mathcal{C}^{m_{1}}\) is referred to as the _mono-modal concept_ of modality \(m_{1}\). In the following, we elaborate on the construction of \(\mathcal{C}^{m},m\in\{m_{1},m_{2}\}\) under different situations. **Late fusion case.** In late fusion, the multi-modal model can be written as \(\phi(x)=\phi^{m_{1}}(x^{m_{1}})+\phi^{m_{2}}(x^{m_{2}})\). It is natural to set \(\mathcal{E}_{m_{1}}/m_{2}=(\mathbf{0}^{m_{2}},\phi^{m_{1}},\mathcal{T}_{m_{1}},\mathcal{D}_{m_{1}})\). \(\mathbf{0}^{m_{2}}\) denotes the null input of modality \(m_{2}\), which is realized, in the current case, by simply discarding the branch \(\phi^{m_{2}}\). \(\mathcal{T}_{m_{1}}\) refers to the same training settings for the \(m_{1}\) branch as were used during the training of the multi-modal model \(\phi\). Lastly, \(\mathcal{D}_{m_{1}}\) denotes the set of mono-modal feature components \(\{x^{m_{1}}_{i}\}_{i\in[N]}\), where \(N\) is the number of data samples and \([N]:=\{1,\ldots,N\}\).
In practice, we need to _train_ \(\phi^{m_{1}}\) on \(\mathcal{D}_{m_{1}}\) with settings \(\mathcal{T}_{m_{1}}\), and \(\mathcal{C}^{m_{1}}\) is nothing but the resulting network function. **Early and hybrid fusion cases.** In these situations, the model can only be written as \(\phi(x^{m_{1}},x^{m_{2}})\). There is no apparent way to separate the processing of \(x^{m_{1}}\) and \(x^{m_{2}}\) at the architecture level. In order to mute the influence from \(m_{2}\), we substitute \(x^{m_{2}}\) with a zero vector of the same dimension. Since the zero vector bears no information about the task, it will not introduce modality competition. Therefore, one can formally write \(\mathcal{E}_{m_{1}}/m_{2}=(\mathbf{0}^{m_{2}},\phi,\mathcal{T},\mathcal{D}_{m_{1}})\), indicating that the architecture and training settings are the same as for the multi-modal model. This time \(\mathbf{0}^{m_{2}}\) refers to the zero input of the \(m_{2}\) feature components. Practically, to construct \(\mathcal{C}^{m_{1}}\), we need to _train_ \(\phi\) on \(\mathcal{D}^{\prime}:=\mathcal{D}_{m_{1}}\otimes\{\mathbf{0}^{m_{2}}\}\) with \(\mathcal{T}\). Samples in \(\mathcal{D}^{\prime}\) are of the form \((x^{m_{1}},\mathbf{0}^{m_{2}})\). Footnote 3: \(\mathcal{T}\) includes the initialization, the loss function, hyper-parameters, and specific techniques, e.g., the learning rate scheduler, used in training. #### 3.2.2 Competition strength With the mono-modal concepts as a reference, we are ready to quantify the deviation of the multi-modal model responses from those competition-less states. A linear probing method (McGrath et al., 2022) is employed to estimate this deviation. Specifically, let \(z\) be the latent feature before the last classifier layer in the multi-modal model; we train a linear predictor from \(z\) to the target mono-modal concept \(\mathcal{C}^{m}\), \[f^{m}(z)=\mathbf{W}z+\mathbf{b}, \tag{11}\] whose parameters \(\mathbf{W}\) and \(\mathbf{b}\) are determined by minimizing the empirical mean square error of the predictions, \[\mathbf{W}^{m,*},\mathbf{b}^{m,*}=\operatorname*{arg\,min}_{\mathbf{W},\mathbf{b}}\frac{1}{N}\sum_{i\in[N]}\|f^{m}(z_{i})-\mathcal{C}^{m}(x^{m}_{i})\|_{2}^{2}+\lambda\left(\|\mathbf{W}\|_{2}+\|\mathbf{b}\|_{2}\right), \tag{12}\] where \(\|\cdot\|_{p}\) denotes the \(L_{p}\) norm, \(i\) refers to the index of data samples, and \(\lambda\) is the regularization strength. The \(L_{2}\) regularization term is introduced to avoid overfitting. The quality of the above linear fit reflects how much the multi-modal features deviate from their competition-less states. Thus, we define the competition strength as \[d^{m}=\frac{\sum_{i}\left(\mathcal{C}^{m}(x^{m}_{i})-f^{m}(z_{i})\right)^{2}}{\sum_{i}\left(\mathcal{C}^{m}(x^{m}_{i})-\overline{\mathcal{C}^{m}}\right)^{2}}, \tag{13}\] where \(\overline{\mathcal{C}^{m}}\) is the mean mono-modal concept value over data samples. \(d^{m}\) measures the quality of the linear predictions with respect to the naive baseline, i.e., simply predicting the mean value. Its value ranges from \(0\) to \(1\), indicating the weakest and strongest competition levels, respectively. In practice, we reserve two hold-out datasets for computing the competition strength. One of them is used to train the linear predictor and the other to calculate \(d^{m}\).
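As a concrete illustration of the linear probe in Eqs. (11)–(13), the sketch below fits a ridge-regularized least-squares predictor on one hold-out split and evaluates the competition strength \(d^{m}\) on the other. This is our own code, not the authors' implementation; it uses a standard squared-\(\ell_{2}\) ridge penalty (absorbing \(\mathbf{b}\) into \(\mathbf{W}\)) as a stand-in for the exact penalty written in Eq. (12), and the random arrays only demonstrate the interface.
```
import numpy as np


def fit_linear_probe(z_train, c_train, lam=1e-3):
    """Fit f(z) = W z + b by ridge-regularized least squares (cf. Eq. (12)).
    z_train: (N, D) latent features; c_train: (N, K) concept outputs C^m."""
    N, D = z_train.shape
    Z = np.hstack([z_train, np.ones((N, 1))])   # absorb b into W
    A = Z.T @ Z + lam * np.eye(D + 1)
    return np.linalg.solve(A, Z.T @ c_train)    # (D + 1, K)


def competition_strength(Wb, z_test, c_test):
    """Eq. (13): probe residuals normalized by the concept's variance
    around its mean (0 = weakest, 1 = strongest competition)."""
    Z = np.hstack([z_test, np.ones((len(z_test), 1))])
    pred = Z @ Wb
    num = np.sum((c_test - pred) ** 2)
    den = np.sum((c_test - c_test.mean(axis=0)) ** 2)
    return num / den


# Interface demo with random stand-in data (real z comes from the trained
# multi-modal model, real C^m from the mono-modal concept network).
rng = np.random.default_rng(0)
z_tr, z_te = rng.normal(size=(256, 64)), rng.normal(size=(128, 64))
c_tr, c_te = rng.normal(size=(256, 10)), rng.normal(size=(128, 10))
d_m = competition_strength(fit_linear_probe(z_tr, c_tr), z_te, c_te)
print(round(float(d_m), 3))
```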
## 4 Experiments and discussion ### Experimental settings In this paper, we systematically apply our adaptive gradient modulation approach to situations that cover different fusion strategies, different modality combinations, and different network architectures. For the late fusion case, our approach is compared with existing modulation methods. Moreover, we also include the mono-modal accuracy and the modality competition strength for all the situations. We carry out experiments 4 on five popular multi-modal datasets. The AV-MNIST (Vielzeuf et al., 2018) is collected for a multi-media classification task that involves disturbed images and audio features. The CREMA-D (Cao et al., 2014) is an audio-visual dataset for speech emotion recognition which consists of six emotional labels. The UR-Funny (Hasan et al., 2019) is created for humor detection, involving words (text), gestures (vision), and prosodic cues (acoustic) modalities. The AVE (Tian et al., 2018) is devised for an audio-visual event localization classification task, including 28 event classes. The CMU-MOSEI (Zadeh et al., 2018) is collected for sentence-level emotion recognition and sentiment analysis, including audio, visual, and text modalities. Here we only use text and audio modalities. Footnote 4: To better demonstrate the universal effectiveness of AGM, we further carry out experiments on the Kinetics-Sound (Kay et al., 2017) using both the late fusion and the FiLM (Perez et al., 2018) fusion strategies. These results are included in the supplementary material due to the space limit. The experiments can be grouped into two classes. The first one concerns the performance of our approach and the behavior of modality competition in the late and early fusion strategies across different multi-modal datasets. We adopt a unified design of the multi-modal models in this class. The fusion module in the early fusion case is all built with the MAXOUT (Goodfellow et al., 2013) network. In addition, for each dataset, the network models for both fusion strategies use the same encoder architecture. Specifically, for the AV-MNIST, the CREMA-D, and the Kinetics-Sound datasets, ResNet18 (He et al., 2016) is used as an encoder for both the audio and visual modalities. For the UR-Funny dataset, we use Transformer (Vaswani et al., 2017) for the encoder for all three modalities. In the second class, we carry out experiments with current SOTA models and show that our approach can also enhance more complex models in a realistic application. For the AVE dataset, the PSP (Zhou et al., 2021) network is used, which features elaborately designed methods that align the audio and visual representations during fusion. For the CMU-MOSEI dataset, we adopt the Transformer-based joint-encoding (TBJE) (Delbrouck et al., 2020) as the model. TBJE jointly encodes input modalities through the modular co-attention and the glimpse layer. Our code is implemented in Pytorch 1.2, and experiments are run on a single NVIDIA 3090 GPU. For the detailed experimental settings and hyper-parameters, please refer to the supplementary material. ### The effectiveness of AGM In this subsection, we focus on the \(Acc\) column in all the tables and demonstrate the universal effectiveness of our AGM method in improving the model performance. Tables 1 to 3 summarize the results on the AV-MNIST, the CREMA-D, and the UR-Funny dataset, respectively. 
In the late fusion cases, our approach is compared with the Modality-Specific Early Stopping (MSES) and Modality-Specific Learning Rate (MSLR) methods. For situations with only two modalities, we also include the results of the Gradient Blending (G-Blending), Characterizing and Overcoming the Greedy Nature of Learning (Greedy), and On-the-fly Gradient Modulation Generalization Enhancement (OGM-GE) method. It is evident that our approach constantly improves the performance w.r.t. the Joint-Train case and achieves the best accuracy in all situations. In the late fusion case, while all modulation methods generally boost the performance compared to the Joint-Train baseline, our approach exceeds the second-best one for a gap of at least \(1.06\%\). It is no \begin{table} \begin{tabular}{l|l c c c c c} \hline \hline AV-MNIST & \(Acc\) & \(Acc_{a}\) & \(Acc_{v}\) & \(d^{a}\) & \(d^{v}\) \\ \hline \multirow{4}{*}{\begin{tabular}{l} Joint-Train \\ \end{tabular} } & \(\mathcal{C}^{a}\) & - & 39.61 & - & - & - \\ & \(\mathcal{C}^{v}\) & - & - & 65.14 & - & - \\ & Joint-Train & 69.77 & 16.05 & 55.83 & 0.7838 & 0.1408 \\ & G-Blending & 70.32 & 14.36 & 56.59 & 0.7963 & 0.1359 \\ & Greedy & 70.65 & 18.80 & 63.46 & 0.7358 & 0.1340 \\ & MSES & 70.68 & 27.50 & 63.34 & 0.7538 & 0.1372 \\ & MSLR & 70.62 & 22.72 & 62.92 & 0.7300 & 0.1437 \\ & OGM-GE & 71.08 & 24.53 & 55.85 & 0.7445 & 0.1617 \\ & AGM & **72.14** & 38.90 & 63.65 & 0.6787 & 0.1197 \\ \hline \multirow{4}{*}{ \begin{tabular}{l} Joint-Train \\ \end{tabular} } & \(\mathcal{C}^{a}\) & - & 41.60 & - & - & - \\ & \(\mathcal{C}^{v}\) & - & - & 65.46 & - & - \\ \cline{1-1} & Joint-Train & 71.15 & 24.28 & 60.14 & 0.7668 & 0.1825 \\ \cline{1-1} & AGM & **72.26** & 47.79 & 68.48 & 0.7146 & 0.1796 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy (\(Acc\), \(Acc_{a}\), \(Acc_{v}\)) and the competition strength (\(d^{a}\), \(d^{v}\)) on the AV-MNIST dataset for multi-modal models using different fusion strategies. In late fusion, comparison with several modality-specific intervention methods: Modality-Specific Early Stop (MSES), Modality-Specific Learning Rate(MSLR), and On-the-fly Gradient Modulation Generalization Enhancement (OGM-GE). The results of Joint-Train are included as baselines. \(\mathcal{C}_{a}\) and \(\mathcal{C}_{v}\) indicate the performance of audio and visual modality concepts, respectively. The best results are shown in **bold**. table that the improvement in the early fusion case by our approach is comparable with the ones in late fusion cases. We note the significant increase in accuracy on CREMA-D, where, after modulating, the results of our approach are \(17.34\%\) and \(19.58\%\) higher than the ones of Joint-Train in late and early fusion, respectively. There is also a gap of \(10.34\%\) between our approach and OGM-GE. Such super-sizing effectiveness may be attributed to the fact that the most informative modality in CREMA-D, i.e., the visual modality, is considerably under-exploited in the Joint-Train. In fact, the mono-modal accuracy of the visual modality is only \(22.72\%\), which is much lower than its potential performance of the mono-modal concept, i.e., \(75.93\%\). We observe that the improvement from MSES and MSLR is often very limited. Actually, on CREMA-D the accuracy of MSES in the late fusion case is worse than the one of Joint-Train. This could be the consequence that MSES only controls the time to stop training and, thus, can only provide limited guidance to the weights update. 
We next show that our approach can also boost the performance of existing SOTA models. Those models normally equip with elaborately designed fusion modules to ensure higher prediction accuracy. Table 4 shows the results on the AVE dataset and CMU-MOSEI dataset, on which the improvements are \(1.09\%\) and \(0.85\%\), respectively. It is worth noting that all other modulation methods can not apply to such complex situations, as there are no separable branches in the network models for different modalities. AGM adjusts the modulation coefficients based on the running average of the mono-modal cross entropy which serves \begin{table} \begin{tabular}{l c c c c c} \hline \hline AVE & \(Acc\) & \(Acc_{a}\) & \(Acc_{v}\) & \(d^{a}\) & \(d^{v}\) \\ \hline \(\mathcal{C}^{a}\) & - & 65.00 & - & - & - \\ \(\mathcal{C}^{v}\) & - & - & 64.69 & - & - \\ PSP & 76.02 & 52.58 & 50.18 & 0.6223 & 0.6232 \\ AGM & **77.11** & 72.34 & 70.68 & 0.6198 & 0.6337 \\ \hline \hline CMU-MOSEI & \(Acc\) & \(Acc_{t}\) & \(Acc_{a}\) & \(d^{t}\) & \(d^{a}\) \\ \hline \(\mathcal{C}^{t}\) & - & 80.92 & - & - & - \\ \(\mathcal{C}^{a}\) & - & - & 74.46 & - & - \\ TBJE & 80.91 & 73.59 & 73.08 & 0.5794 & 0.9450 \\ AGM & **81.76** & 79.41 & 73.08 & 0.5774 & 0.9540 \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy and competition strength on AVE and MOSEI dataset for the general joint-training network with elaborating fusion structures network. Audio and visual are involved in the AVE dataset and audio and text in MOSEI. PSP stands for general joint training network for the AVE dataset and TBJE for the CMU-MOSEI dataset. \(\mathcal{C}_{a}\), \(\mathcal{C}_{v}\) and \(\mathcal{C}_{a}\) indicate the performance of audio, visual, and text modality, respectively. The best results are shown in **bold**. \begin{table} \begin{tabular}{l|l c c c c c c} \hline \hline \multicolumn{2}{l}{UR-Funny} & \(Acc\) & \(Acc_{a}\) & \(Acc_{v}\) & \(Acc_{t}\) & \(d^{a}\) & \(d^{v}\) & \(d^{t}\) \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & \(\mathcal{C}^{a}\) & - & 59.23 & - & - & - & - & - \\ & & \(\mathcal{C}^{v}\) & - & - & 53.16 & - & - & - & - \\ & & \(\mathcal{C}^{t}\) & - & - & - & 63.46 & - & - & - \\ & & Joint-Train & 64.50 & 50.31 & 51.53 & 49.78 & 0.5558 & 0.1058 & 0.4513 \\ & & MSES & 64.23 & 50.31 & 49.69 & 57.87 & 0.5605 & 0.1028 & 0.4592 \\ & & MSLR & 64.74 & 50.31 & 48.62 & 49.69 & 0.5257 & 0.0975 & 0.4316 \\ & & AGM & **65.97** & 54.87 & 49.36 & 62.22 & 0.5234 & 0.0725 & 0.5147 \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & \multirow{8}{*}{ \begin{tabular}{} \end{tabular} } & \(\mathcal{C}^{a}\) & - & 58.25 & - & - & - & - & - \\ & & \(\mathcal{C}^{v}\) & - & - & 53.29 & - & - & - & - \\ & & \(\mathcal{C}^{t}\) & - & - & 53.29 & - & - & - & - \\ & & \(\mathcal{C}^{t}\) & - & - & - & 61.07 & - & - & - \\ & & Joint-Train & 65.15 & 54.87 & 50.86 & 54.14 & 0.7217 & 0.2672 & 0.2906 \\ & & AGM & **66.07** & 64.87 & 55.20 & 63.36 & 0.6962 & 0.2697 & 0.3200 \\ \hline \hline \end{tabular} \end{table} Table 2: The same as Table 1, but for UR-Funny dataset. The involved modalities are audio, visual, and text. 
\begin{table} \begin{tabular}{l|l c c c c c} \hline \hline CREMA-D & \(Acc\) & \(Acc_{a}\) & \(Acc_{v}\) & \(d^{a}\) & \(d^{v}\) \\ \hline \multirow{8}{*}{\begin{tabular}{} \end{tabular} } & \(\mathcal{C}^{a}\) & - & 62.63 & - & - & - \\ & & \(\mathcal{C}^{v}\) & - & - & 75.93 & - & - \\ & Joint-Train & 61.14 & 57.10 & 22.72 & 0.4593 & 0.7555 \\ & & G-Blending & 62.03 & 19.58 & 16.89 & 0.4706 & 0.8005 \\ & Greedy & 63.08 & 43.05 & 16.89 & 0.4598 & 0.7661 \\ & MSES & 60.99 & 54.86 & 22.57 & 0.4607 & 0.7546 \\ & MSLR & 64.42 & 54.86 & 26.31 & 0.4614 & 0.7150 \\ & OGM-GE & 68.16 & 55.16 & 36.32 & 0.5448 & 0.6929 \\ & AGM & **78.48** & 48.58 & 57.85 & 0.6624 & 0.5067 \\ \hline \multirow{8}{*}{ \begin{tabular}{} \end{tabular} } & \(\mathcal{C}^{a}\) & - & 61.29 & - & - & - \\ & \(\mathcal{C}^{v}\) & - & - & 75.78 & - & - \\ \cline{1-1} & Joint-Train & 61.88 & 42.60 & 16.89 & 0.5345 & 0.9905 \\ \cline{1-1} & AGM & **81.46** & 76.53 & 80.42 & 0.8753 & 0.6496 \\ \hline \hline \end{tabular} \end{table} Table 3: The same as Table 1, but for CREMA-D dataset. as a reference of idea relative strengths of individual modalities. Additional experiments demonstrate that this reference is better than the brutal force requirement of equal contribution from all modalities. Further, we consider an in-depth comparison between AGM and the OGM-GE as their performance outstands in our experiments. Specifically, we investigate whether the Generalization Enhancement (GE) technique can hence AGM and, in turn, whether a running average reference can boost the performance of OGM-GE. We find that neither provides an improvement. The details of the above-mentioned results can be found in the supplementary material. Combining all the above results, we conclude that our modulation approach can help boost the model performance regardless of the fusion strategy, the number and types of involved modalities, and the network architecture. ### Modality competition The competition strength metric provides us a base to analyze the states of individual modalities in a joint-trained model and understand the mechanism of how the modulation methods work. In the following, we first compare the changes in competition strength before and after modulating and investigate what is brought to the multi-modal model by our adaptive gradient modulation. This follows a discussion of the modality competition behavior. #### 4.3.1 Gradient modulation & modality competition Our primary concern is how the modulation affects the model performance in terms of changing the competition state. The modality competition directly measures the deviation from the competition-less state and provides more accurate information about the competition state compared to the mono-modal accuracy, which mainly reflects the information in a single modality. Generally, we distinguish two different types of change in competition strength. In the first type, modality competition is mitigated by modulation. The results on AV-MNIST ( Table 1) exemplify this situation. For both fusion strategies, the competition strengths of audio (\(d^{a}\)) and visual (\(d^{v}\)) modalities decrease, and their mono-modal accuracy (\(Acc_{a}\) and \(Acc_{v}\)) increases as well as the multi-modal performance. This suggests that suppressing the competition, allows the model to better utilize inputs from different modalities. Figure 2 illustrates the change in performance and competition strength along with training. 
For the joint training baseline (left panel in Figure 2), \(d^{a}\) increases while \(d^{v}\) decreases in the initial training stage, up to the \(9\)-th epoch. Hence, the model initially learns information from the visual modality. Indeed, \(Acc_{a}\) is close to random guessing while \(Acc_{v}\) is close to the full multi-modal accuracy. In later epochs, \(d^{a}\) starts to decrease and its mono-modal accuracy increases accordingly. On the other hand, the increase of \(d^{v}\) is accompanied by a decrease of \(Acc_{v}\). When adaptive gradient modulation is applied (right panel in Figure 2), the competition strengths of both modalities decrease over the course of training and converge to lower values than their counterparts in the joint training case. At the same time, their mono-modal accuracies keep increasing. We find that the model starts to learn the audio modality at a relatively early epoch, and \(Acc_{a}\) is boosted considerably. In the second type, the competition of some modalities can be strengthened. Results in Tables 2 to 4 belong to this type. For CREMA-D, \(d^{v}\) decreases while \(d^{a}\) increases. This allows the model to better exploit the visual modality 5, which is more informative 6. Similar behaviors are observed on the AVE and CMU-MOSEI datasets. In both cases, the modulation leads to a decrease in the competition strength of the more informative modality, i.e., the audio modality of AVE and the text modality of CMU-MOSEI. The results for UR-Funny differ from the previous cases. They mainly reflect a balance in information usage between the audio and text modalities. Interestingly, we note that even though the text modality possesses better information, its \(d^{t}\) increases after modulation. We suspect this could be due to a higher-order effect when multiple modalities are present. In other words, combining the text and the visual modalities could be more informative than combining the audio and visual modalities. Footnote 5: We remark that, in this case, the modality collapse in joint training on CREMA-D can be attributed to the modality competition. Footnote 6: The accuracy of the visual mono-modal concept is higher than that of the audio modality. In summary, the results quantitatively demonstrate the mechanism behind the effectiveness of our modulation method. In most cases, the picture is clear: while the raw model possesses a certain bias towards some modalities, the modulation pushes the model to rely on the more informative modalities 7. Footnote 7: Note that better use of informative modalities does not necessarily lead to low competition strengths of these modalities. #### 4.3.2 Behavior of modality competition In the following, we proceed to investigate the modality competition in the joint training situation. We systematically study the competition's behavior from various perspectives that cover the model's preference towards individual modalities, the relation to the fusion strategy, and the relation to the input data. Existence of preferred modality. Our results reveal that modality competition is commonly present in multi-modal
2304.01122
Charge-density wave fluctuation driven composite order in the layered Kagome Metals
The newly discovered kagome metals AV$_3$Sb$_5$ (A = K, Rb, Cs) offer an exciting route to study exotic phases arising from the interplay between electronic correlations and topology. Besides superconductivity, these materials exhibit a charge-density wave (CDW) phase at around 100 K whose origin remains elusive. The robust multi-component $2 \times 2$ CDW phase in these systems is of great interest due to the presence of an unusually large anomalous Hall effect. In quasi-2D systems with weak inter-layer coupling, fluctuation-driven exotic phases may appear. In particular, in systems with multi-component order parameters, fluctuations may lead to the establishment of composite order, in which only products of individual order parameters condense while the individual ones themselves remain disordered. We argue that such a fluctuation-driven regime of composite CDW order may exist in thin films of kagome metals above the CDW transition temperature. It is suggested that the melting of the Trihexagonal state in material doped away from the van Hove singularities gives rise to a pseudogap regime where the spectral weight is concentrated in small pockets and most of the original Fermi surface is gapped. Our findings suggest the possible presence of exotic phases in weakly coupled layered kagome metals, especially in the newly synthesized thin films.
Alexei M. Tsvelik, Saheli Sarkar
2023-04-03T16:38:52Z
http://arxiv.org/abs/2304.01122v3
# Charge-density wave fluctuation driven composite order in the layered Kagome Metals ###### Abstract The newly discovered kagome metals AV\({}_{3}\)Sb\({}_{5}\) (A = K, Rb, Cs) offer an exciting route to study exotic phases arising due to interplay between electronic correlations and topology. Besides superconductivity, these materials exhibit a charge-density wave (CDW) phase occurring at around 100 K, whose origin still remains elusive. The robust multi-component \(2\times 2\) CDW phase in these systems is of great interest due to the presence of an unusually large anomalous Hall effect. In quasi-2D systems with weak inter-layer coupling fluctuation driven exotic phases may appear. In particular in systems with multi-component order parameters fluctuations may lead to establishment of composite order when only products of individual order parameters condense while the individual ones themselves remain disordered. We argue that such fluctuation-driven regime of composite CDW order may exist in thin films of Kagome metals above the CDW transition temperature. It is suggested that the melting of the Trinkagonal state in the material doped way from the van Hove singularities gives rise to a pseudogap regime where the spectral weight is concentrated in small pockets and most of the original Fermi surface is gapped. Our findings suggest possible presence of exotic phases in the weakly coupled layered kagome metals, more so in the newly synthesized thin films of kagome metals. ## I Introduction The interplay between electronic correlations and topology is a major field of study in the condensed matter systems.[1; 2] The recently discovered kagome metals AV\({}_{3}\)Sb\({}_{5}\) with (A = K, Rb, Cs) are quasi-two dimensional (2D) system with hexagonal lattice symmetry.[3] The band structure of the kagome metals exhibit a flat band, saddle-point van Hove singularities (vHSs) and a pair of Dirac points. Owing to such an electronic structure, these systems have created a new platform to study exotic phases which can occur due to presence of both correlations and topology.[4; 5] All of the AV\({}_{3}\)Sb\({}_{5}\) undergo a charge-density-wave (CDW) transition[6; 7; 8; 9; 10] at around temperature T\({}_{CDW}\)\(\sim\) 100 K. Along with the emergence of the CDW order, experiments and theoretical studies have found different unusual properties, such as bond density modulations,[11] a chiral flux phase,[12; 13] a giant anomalous Hall effect[14; 15; 16] with time-reversal symmetry breaking,[17; 18; 19; 20] which can be associated with loop currents.[21; 22; 23; 24; 25] At much lower temperatures these materials may exhibit superconductivity[4; 24; 25; 26] with T\({}_{c}\)\(\sim\) 1 K. The nature of the superconducting phase is still under debate. Some experiments found the gap to be nodeless,[27] some to contain nodes.[28] Theoretical studies suggest unconventional nature of the superconductivity.[29; 30; 31; 32; 33] There have been also proposals of more exotic superconductivity like pair-density wave,[34; 35] charge 4e and charge 6e superconducting states[36] and nematicity.[36; 37; 38] There have been a great amount of works[39; 40; 41; 42; 43; 44] to gain insight into the nature of the CDW phase. So far it is well established that the CDW order of the kagome metals is a multicomponent (3Q) one, although, the real space structure of the CDW phase still remains elusive. Experiments[45; 46] observe both Star of David (SoD) and Trihexagonal (TrH) pattern in the two-dimensional plane of these systems. 
Moreover, the CDW order doubles the unit cell in the (a,b) plane and hence has a robust \(2\times 2\) feature as found in scanning tunneling-microscopy (STM),[6] angle-resolved photoemission spectroscopy (ARPES)[47] and X-ray[48] experiments. However, some X-ray and STM experiments found a modulation in the crystallographic c- direction for the kagome metals with alkali atoms Rb and Cs. The simultaneous ordering of CDW phase with commensurate momenta 3Q are believed to be driven by nested Fermi surface instabilities,[49; 50; 51; 52] enhanced through the presence of vHS due to logarithmically diverging density of states at the vHS points[52] in two dimensions. In this paper we explore the situation[29] of 5/12 filled band when the chemical potential lies at the van Hove singularity. According to the Mermin -Wagner theorem[53] fluctuations are enhanced in low dimensions. The presence of strong fluctuations is well established in such quasi-2D systems as cuprates, iron based superconductors[54] where they are responsible for pseudogap phase,[55; 56; 57] anomalous phonon softening[58] and also different emergent orders.[59; 60; 61] These prototype examples indicate that fluctuations may also play an important role in the layered quasi-2D kagome metal materials. However, as of now, although several theoretical works have considered a mean-field scenario of the CDW order parameters, the effect of fluctuations in kagome CDW metals has not been discussed. Their effect will become even more important in the kagome metal mono-layers[62] and thin films.[63; 64; 65] In this paper, we go beyond the mean-field theory of the multi-component CDW order and consider the fluctuations in these orders within a Ginzburg-Landau (GL) free energy model. As its microscopic justification we consider an effective low energy theory[69] described by the patch model considering only the V atoms of AV\({}_{3}\)Sb\({}_{5}\), giving rise to vHS at the three \(\mathbf{M}\) points in the Brillouin zone. We consider two-dimensional systems where topologically nontrivial configurations of the order parameter fields - vortices, can melt away the CDW order and restore the original lattice symmetry without destroying the quasiparticle gaps. We find that there an interval of temperatures above the CDW phase transition where only a composite order of the three CDWs can exist while the individual CDW order parameters remain fluctuating. The latter ones condense at low temperatures. We organize the rest of the paper as follows. In Section II, we present our working microscopic model which includes the interactions in the system, giving rise to the electronic CDW instability. Then, in Section III, we perform the mean field analysis of the CDW orders. In Section IV, we consider fluctuations of the CDW order parameters and present a GL free energy by incorporating the vortex configurations by means of dual fields. In Section V, we consider a simplified case, where only two CDW orders develop. We discuss appearance of the composite order in this model. Then in Section VI we discuss the effects of doping away from the van Hove singularities. We argue that melting of the TrH phase in a doped system leads to emergence of a pseudogap regime resembling the one observed in the underdoped cuprates. At last we give a conclusion of our work in Section VII. ## II Model The goal of our work is to describe fluctuations in the CDW regime of Kagome metals described by the patch model adopted by T. 
Park _et.al._.[49] A similar model leading to the same Ginzburg-Landau energy was used in.[11] Both models consider just one vanadium orbital per site of the kagome lattice. The CDW order is believed to be electronically driven. However, there are some experiments which point to the role of phonons.[66] The first principles calculations[67; 68; 4] for the kagome metals AV\({}_{3}\)Sb\({}_{5}\) show saddle points at the \(\mathbf{M}_{a}\) points of the hexagonal Brillouin zone, giving rise to the logarithmically divergent density of states. Hence we consider an effective low-energy model which takes into account only patches of the Fermi surface around the \(\mathbf{M}_{a}\) points in the Brillouin zone [ Fig. 1] of kagome metals AV\({}_{3}\)Sb\({}_{5}\) and interactions among the fermionic states between these saddle points as was done in.[49] The non-interacting Hamiltonian is given by \[H_{0}=\sum_{a=1}^{3}\sum_{|k|\leq\Lambda}c_{a\sigma}^{\dagger}(k)[\epsilon_{a} (k)-\mu]c_{a\sigma}(k), \tag{1}\] where the single electron dispersion close to the saddle points \(\mathbf{M}_{a}\) are given by, \[\epsilon_{1} =k_{1}(k_{1}+k_{2}),\] \[\epsilon_{2} =-k_{1}k_{2},\] \[\epsilon_{3} =k_{2}(k_{1}+k_{2}),\] \[k_{1,2} =k_{x}\pm\sqrt{3}k_{y}, \tag{2}\] and \(\mu\) is the chemical potential. Now, we consider electron-electron interactions among the fermions in the three patches close to the \(M_{a}\) saddle points. \[H_{int} =\sum_{a\neq b}\sum_{k_{1},k_{2},k_{3},k_{4}}\Big{[}g_{1}(c_{a,k_ {1},\sigma}^{\dagger}c_{b,k_{4},\sigma})(c_{b,k_{2},\sigma^{\prime}}^{ \dagger}c_{a,k_{3},\sigma^{\prime}})+\] \[g_{2}(c_{a,k_{1},\sigma}^{\dagger}c_{a,k_{4},\sigma})(c_{b,k_{2},\sigma^{\prime}}^{\dagger}c_{b,k_{3},\sigma^{\prime}})+\] \[g_{3}(c_{a,k_{1},\sigma}^{\dagger}c_{a,k_{2},-\sigma}^{\dagger} )(c_{b,k_{3},-\sigma}c_{b,k_{4},\sigma})\Big{]}\] \[+\sum_{a}\sum_{k_{1},k_{2},k_{3},k_{4}}g_{4}(c_{ak_{1},\sigma}^{ \dagger}c_{a,k_{4},\sigma})(c_{a,k_{2},-\sigma}^{\dagger}c_{a,k_{3},-\sigma}).\] The total effective Hamiltonian is given by \[H=H_{0}+H_{int} \tag{4}\] In the Hamiltonian, \(a=1,2,3\) are the patch indices and \(\mathbf{k}\) is momentum measured from the \(\mathbf{M}_{a}\). In the Eqn.(II), we have the constraint \(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}+\mathbf{k}_{4}=0\). The coupling constants \(g_{1},g_{2},g_{3}\) and \(g_{4}\) represent interpatch exchange, interpatch density-density, Umklapp and intra-patch scattering terms respectively. The parquet renormalization group (pRG) analysis[49] suggests that an instability in the system occurs if the corresponding interaction strength becomes positive. For our work, we are interested only in various charge-density wave (CDW) instabilities. We do not consider[49] interplay between the superconductivity and the CDW, as the superconductivity appears only at very low temperature. Now one can construct the following CDW type order parameters in the patch model. For the real CDW (rCDW), and imaginary CDW (iCDW), the order parameters are respectively, \[\Omega_{a} \sim G_{1}\sum_{k,\sigma}\langle c_{a2k}^{\dagger}c_{a3k^{\prime} }\rangle, \tag{5}\] \[\Psi_{a} \sim\frac{G_{2}}{i}\sum_{k,\sigma}\langle c_{a2k}^{\dagger}c_{a3k ^{\prime}}\rangle. \tag{6}\] The effective interaction strengths for the rCDW and iCDW are \(G_{1}=-2g_{1}+g_{2}-g_{3}\) and \(G_{2}=-2g_{1}+g_{2}+g_{3}\) respectively. Moreover, \(\mathbf{k}^{\prime}=\mathbf{k}+\mathbf{Q}_{a}\). 
\(\mathbf{Q}_{a}\) are the three nesting vectors connecting the \(\mathbf{M}_{a}\) and the ordering wave-vectors for the CDW order parameters as shown in the Fig. 1. The interactions are assumed to be quasi-local with \(\Lambda\) being the UV cut-off. The Hamiltonian is \(SU(2)\times Z_{3}\times U(1)\) invariant. Figure 1: The hexagonal Brillouin zone, showing the high symmetry points. AV\({}_{3}\)Sb\({}_{5}\) exhibit saddle points at the \(\mathbf{M}_{1,2,3}\), shown by green and blue circles. The \(\mathbf{M}_{a}\) points are connected by the three nesting vectors \(\mathbf{Q}_{1,2,3}\), which are also the ordering wave-vectors of the CDW. ## III Mean field theory In this section we perform a mean-field decoupling of the Eqn.(4) in the CDW channels. This already suggests that we have \(g_{4}=0\), as the effective interactions for rCDW and iCDW do not depend on \(g_{4}\). We keep both the rCDW and iCDW and derive a mean-field Hamiltonian and a GL free energy in terms of a complex CDW order parameters. If the leading interaction term is \(g_{1}\), one can perform the Hubbard-Stratonovich transformation, \[H_{mf}=|V_{ab}|^{2}/2g_{1}+\sum_{a>b}\Big{[}V_{ab}c_{a\sigma}^{+} c_{b\sigma}+V_{ab}^{*}c_{b\sigma}^{+}c_{a\sigma}\Big{]}+ \tag{7}\] \[\sum_{a=1}^{3}\sum_{|k|<\Lambda}\epsilon_{a}(k)c_{a\sigma}^{+}(k )c_{a\sigma}(k).\] We introduce notations \(V_{12}=\Delta_{3},~{}~{}V_{13}=\Delta_{2},~{}~{}V_{23}=\Delta_{1}^{*}\). Now, by integrating out the fermion field, we obtain the action in terms of the CDW order parameter fields \(\Delta_{a}=\Omega_{a}+i\Psi_{a}=|\Delta_{a}|e^{i\phi_{a}}\): \[F=\frac{1}{2g_{1}}\int d^{2}xd\tau|\Delta_{a}|^{2}-\text{Tr}\;\text{ln}\; \mathcal{G}^{-1}, \tag{8}\] where \(\mathcal{G}^{-1}\) is the inverse Green's function matrix. The electronic spectrum at the saddle point is determined by the equation: \[\left|\begin{array}{ccc}-E+\epsilon_{1}&\Delta_{3}&\Delta_{2}\\ \Delta_{3}^{*}&-E+\epsilon_{2}&\Delta_{1}^{*}\\ \Delta_{2}^{*}&\Delta_{1}&-E+\epsilon_{3}\end{array}\right|=0, \tag{9}\] The result is \[(\epsilon_{1}-E)(\epsilon_{2}-E)(\epsilon_{3}-E)+E(|\Delta_{1}|^ {2}+|\Delta_{2}|^{2}+|\Delta_{3}|^{2})\] \[-\epsilon_{1}|\Delta_{1}|^{2}-\epsilon_{2}|\Delta_{2}|^{2}- \epsilon_{3}|\Delta_{3}|^{2}+\Delta_{1}\Delta_{3}\Delta_{2}+\Delta_{1}^{*} \Delta_{2}^{*}\Delta_{3}^{*}\] \[=0. \tag{10}\] We assume that the fluctuations of moduli are gapped and consider the saddle point where all \(|\Delta_{a}|\) are equal. It follows from Eqn.(2) that \[\epsilon_{1}\epsilon_{2}+\epsilon_{1}\epsilon_{3}+\epsilon_{2} \epsilon_{3}=0, \tag{11}\] which leads to simplification of Eqn.(10) resulting in \[(E\pm 1)^{2}(E\mp 2)-3(k_{x}^{2}+k_{y}^{2})(E^{2}-1)+4k_{x}^{2}(k_{x }^{2}-3k_{y}^{2})^{2}\] \[=0. \tag{12}\] where we set \(|\Delta_{a}|=1\). According to,[69] plus sign in the first bracket corresponds to the \(-3Q\) phase (SoD). In this phase there is a Fermi surface [see Fig. 2]. The minus sign corresponds to the \(+3Q\) (TrH) phase where the quasiparticle spectrum is fully gapped. At \(g_{3}\neq 0\), the mean-field spectrum should be corrected : \[\Delta_{a}\rightarrow\Delta_{a}+(g_{3}/g_{1})\Delta_{a}^{*}, \tag{13}\] This change does not modify the spectrum qualitatively though it modifies the Green's function. ## IV Ginzburg-Landau free energy We will follow the conclusions of the previous papers and, as we have mentioned above, consider the saddle point solution with all \(|\Delta_{a}|\) being equal and treat fluctuations of the moduli of \(\Delta\)'s as gapped. 
Hence the subsequent analysis of the GL free energy will include only phase fluctuations. In the absence of the Umklapp \(g_{3}=0\) the only phase dependent term in the free energy density corresponds to the product of all three \(\Delta\)'s: \[\delta F=-G\cos(\phi_{1}+\phi_{2}+\phi_{3}). \tag{14}\] Hence in the absence of the Umklapp two phase fields remain critical in the low - temperature phase. However, if \(g_{3}\neq 0\) there is a contribution to the free energy density: \[g_{3}\Big{[}\Delta_{a}^{2}+(\Delta_{a}^{*})^{2}]\sim g_{3}[\cos(2 \phi_{1})+\cos(2\phi_{2})+\cos(2\phi_{3})] \tag{15}\] ### Fluctuations in the CDW order parameters Now we will consider phase fluctuations of the CDW order parameters which requires inclusion of the gradient terms. In two dimensions one must account for topologically nontrivial configurations of order parameter fields - vortices, which are being point-like objects with finite energy and can be thermally excited. To properly account for such configurations we regularize the model by putting it on a suitable lattice with lattice constant \(b\) and then taking a continuum limit. The form of the free energy functional Eqn.(16) reflects the fact that the order parameters are periodic functions of \(\phi_{a}\) Figure 2: Contour plots of the gapless quasiparticle branch in the \(-3Q\) phase. The Fermi surface is the boundary between the grey and the brown areas. This feature allows for topologically nontrivial configurations of the fields in the form of vortices - configurations where \(\phi_{a}\) fields change by \(2\pi\) along closed spacial loops. In the continuum limit such configurations are singular which explains the necessity for lattice regularization. It is well known that in 2D vortices can change a character of phase transitions. One way to take them into account in the continuous limit is to introduce dual phase fields \(\bar{\phi}_{a}\).[70] In the present case is slightly unusual because in the region of interest the GL action contains the terms which depend on both \(\phi\) and \(\bar{\phi}\). The corresponding formalism was introduced in [71] (see also [60]). The regularized GL free energy density is \[\mathcal{F}/T =-\frac{J}{T}\sum_{a}\sum_{<b>}\frac{1}{b^{2}}\cos\Big{[}\phi_{a}( \mathbf{x})-\phi_{a}(\mathbf{x}+\mathbf{b})\Big{]}\] \[-G\cos(\phi_{1}+\phi_{2}+\phi_{3})+Ag_{3}\sum_{a}\cos(2\phi_{a}), \tag{16}\] where coefficient \(A\sim\Delta^{2}/T>0\). Now we can follow the standard procedure and write down the the continuum limit of Eqn.(16) as (in what follows we will set the stiffness \(J=1\)): \[\mathcal{F}/T =\sum_{a}\Big{[}\frac{1}{2T}(\partial_{x}\phi_{a})^{2}+\frac{T}{ 2}(\partial_{x}\bar{\phi}_{a})^{2}+i\partial_{x}\phi_{a}\partial_{y}\bar{\phi }_{a}\] \[+Ag_{3}\cos(2\phi_{a})+\eta\cos(2\pi\bar{\phi}_{a})\Big{]}-G\cos( \phi_{1}+\phi_{2}+\phi_{3}). \tag{17}\] The coupling \(\eta\) is proportional to the vortex fugacity. The model Eqn.(17) contains both original fields \(\phi_{a}\) and their dual fields \(\bar{\phi}_{a}\) which take care of the vortex configurations. The corresponding path integral for the partition function includes integration over both fields: \[Z=\int D\phi_{a}(x)D\bar{\phi}_{a}(x)\exp\Big{(}-\int d^{2}x\mathcal{F}/T \Big{)}. \tag{18}\] To determine whether the cosine terms are relevant or irrelevant, one has to calculate their scaling dimensions. To compute the scaling dimensions of various perturbations, we start with the Gaussian model. 
The results are \[d_{g_{3}}=T/\pi,\;\;d_{\eta}=\pi/T,\;\;d_{G}=3T/4\pi, \tag{19}\] The direct and dual operators cannot order simultaneously; this creates an interesting situation at the transition where both of them are relevant. It is a nontrivial situation, see, for example.[60] In what follows we consider a limit of large \(G\) when the sum of all phases is fixed.[31] Now, we can make a transformation as follows: \[\phi_{a} =\Phi/\sqrt{3}+(\sqrt{2/3})\mathbf{e}_{a}\mathbf{\chi},\] \[\mathbf{e}_{a} =(1,0),\;(-1/2,\sqrt{3}/2),\;(-1/2,-\sqrt{3}/2). \tag{20}\] with \(\mathbf{\chi}=(\chi_{1},\chi_{2})\) and treat \(\Phi\) as gapped. In this case we get following the calculation shown in Appendix A, an effective free energy: \[\mathcal{F}_{eff}/T =\sum_{a=1}^{3}\Big{[}\bar{A}g_{3}\cos(\sqrt{8/3}\mathbf{e}_{a} \mathbf{\chi})-B\cos(2\pi\sqrt{2}\mathbf{\omega}_{a}\bar{\mathbf{\chi}})\Big{]}+\] \[\sum_{i=1,2}\Big{[}\frac{1}{2T}(\partial_{x}\chi_{i})^{2}+\frac{T }{2}(\partial_{x}\bar{\chi}_{i})^{2}+i\partial_{x}\chi_{i}\partial_{y}\bar{ \chi}_{i}\Big{]}, \tag{21}\] where \(\mathbf{\omega}_{a}=(0,1),(\sqrt{3},1)/2,(\sqrt{3},-1)/2\), \(B\sim\eta^{2}\) and \(\bar{A}=\langle\cos(\Phi/\sqrt{3})\rangle A\). The scaling dimensions of the cosines are \[d_{g_{3}}=2T/3\pi,\;\;d_{B}=2\pi/T \tag{22}\] The perturbations are relevant or irrelevant when the scaling dimension of the operators \(d_{op}<D\) and \(d_{op}>D\) respectively, \(D\) being the spatial dimension of the system. We observe that below \(T/\pi=3\) both direct and dual cosine terms are relevant provided the \(G\)-term is relevant which is true for \(T/\pi<8/3\). Below \(T_{c1}/\pi=8/3\), the \(G\)-term is relevant and the sum of all phases is frozen. However, above certain temperature \(T_{c2}\) the vortices destroy the order of individual CDWs. Only the product of their order parameters acquires a finite average, which we refer to as a composite order (\(\Delta_{1}\Delta_{2}\Delta_{3}\)) [see Fig. 3]. Since the periodicity of this order parameter coincides with the periodicity of the lattice, \(T_{c1}\) is likely to mark a crossover. At \(T_{c2}\) there is a phase transition into a phase where individual phases are frozen which breaks the symmetry of lattice and may also break the time reversal symmetry (see below and also see Fig. 3). The character of the low temperature phase is determined by the signs of \(G\) and \(g_{3}\). At \(G>0\) the product of \(\Delta\)'s have the same sign and we have the TrH order, at Figure 3: The phase diagram of the kagome metal layer. For J =1, there is a crossover into a regime with composite order around \(\frac{T}{\pi}=\frac{8}{3}\), where the sum of the CDW phases are frozen. For a doping slightly away from the vHS (\(\mu\neq 0\)), it will exhibit a ‘pseudogap’ -like behavior. Around \(\frac{T}{\pi}=\sqrt{3}\) there is a phase transition into the state where individual CDW’s order. For \(g_{3}>0\) the low temperature phase breaks time-reversal symmetry. \(G<0\) it is negative and we have the SoD pattern. If \(g_{3}<0\) the vacuum corresponds to \(\chi_{1}=\chi_{2}=0\). This is \(rCDW\) - real CDWs. If \(g_{3}>0\) there are degenerate vacua situated on a hexagonal lattice of \(\chi_{1,2}\). There are two inequivalent points \(\sqrt{8/3}(\chi_{1},\chi_{2})=(4\pi/3,0)\) and \((2\pi/3,2\pi/\sqrt{3})\). At each of these vacua all \(\Delta_{a}\)'s are the same and are equal either to \(\exp(4\pi i/3)\) or its complex conjugate. 
In the broken symmetry state one of this vacua is chosen which corresponds to complex \(rCDW+iCDW\) with a broken time-reversal. This time-reversal symmetry breaking spontaneously induces orbital currents which can manifest in anomalous Hall effect.[14] The resulting real space pattern for the corresponding bond order can be either SoD or TrH along with the current order pattern as also discussed in.[11; 49] The transition temperature is determined by the competition between normal and dual cosine perturbations. It can be estimated by comparing the mass scales generated by the competing operators. A relevant perturbation can drive a phase transition and the transition point can be determined by estimating the scale of mass gaps in the corresponding phases and comparing them with each other. The scale of the mass gap can be estimated by the fact that the contribution to the action of the relevant operator inducing a finite correlation length, becomes of the order of unity. Therefore, we notice that the phase transition from the composite order state to the state with individual CDWs occurs when \[B^{1/(2-d_{B})}\sim(\bar{A}g_{3})^{1/(2-d_{g_{3}})}. \tag{23}\] Solving this equation with logarithmic accuracy we get the estimate for the transition temperature: \[T_{c2}/\pi=\frac{3}{2}\Big{(}1-\alpha+\sqrt{1-2\alpha/3+\alpha^{2}}\Big{)}, \ \ \alpha=\frac{\ln(\bar{A}g_{3})}{\ln B}. \tag{24}\] For comparable coupling constants it yields \(T_{c2}/\pi=\sqrt{3}\) and \(d^{*}=2/\sqrt{3}\). The model Eqn.(21) belongs to the class of affine XY models which have been studied in connection to the problem of quark confinement.[72; 73] Although this particular model has not been studied, some insights can be drawn. An affine XY model with different operators was studied numerically in[73] and the results indicate that the transition is probably weak first order. The hysteresis, however, has not been observed which leaves a possibility of a second order phase transition. The uncertainty remains and first order transition remains a possibility also for our case. If, however, it is a second order transition, then following the results for another similar model,[74] we suggest that it would belong to the \(\mathbb{Z}_{3}\) Potts universality class.[75] This suggest that the critical exponents are \(\nu=\frac{5}{6}\), \(\eta=\frac{4}{15}\).[76] ## V Simplified case The purpose of this section is to study an example of a treatable model describing a phase transition driven by mutually dual cosines. This model describes the case when only two CDWs develop: \(\Delta_{3}=0,\ |\Delta_{1}|=|\Delta_{2}|\), describing a nematicity[38] in the system. Then there are two phases \(\phi_{1}\) and \(\phi_{2}\), whose fluctuations are described by action Eqn.(16). Once there sum is frozen we arrive at model (Eqn.(21)) with a single pair of fields \(\chi,\bar{\chi}\). This situation was studied in[60] repeat the calculations here for illustrative purposes. In this case at \(T/\pi=1\) the scaling dimensions of the cosines are equal to 1, giving rise to comparable values of \(\bar{A}g_{3}\) and \(B\). At this point, the bosonic action (Eqn.(21)) can be refermionized and recast as a model of relativistic fermions with two kinds of mass terms:[77] \[\mathcal{F}_{eff}/T=R^{+}(\partial_{y}-i\partial_{x})R+L^{+}( \partial_{y}+i\partial_{x})L+\] \[\bar{A}g_{3}(R^{+}L+L^{+}R)+B(RL+L^{+}R^{+}). 
\tag{25}\] The next step is two express the Dirac fermions in terms of Majoranas: \[R,L=\frac{1}{\sqrt{2}}\Big{[}\rho_{R,L}^{(+)}+i\rho_{R,L}^{(-)}\Big{]}. \tag{26}\] As a result we get two separate models for Majorana fermions with masses \(m_{\pm}=\bar{A}g_{3}\pm B\). Each Majorana species corresponds to 2D Ising model where the mass is proportional to \((T-T_{c})\). For any sign of \(g_{3}\) the transition occurs only for one Majorana species. As was shown in[60] the CDW order parameter (for instance, \(\Delta_{1}\), since in the given case \(\Delta_{2}=\Delta_{1}^{*}\)) can be written as \[\Delta=i\sigma_{+}\sigma_{-}+\mu_{+}\mu_{-}, \tag{27}\] where \(\sigma_{\pm}\) are the order and \(\mu_{\pm}\) are the disorder parameters of the Ising models with masses \(m_{\pm}\). One of these models is always in the ordered \(\langle\sigma\rangle\neq 0\) or disordered \((\langle\mu\rangle\neq 0)\) phase, the other one undergoes a phase transition. It can be shown that in the part of the phase diagram where \(m_{\pm}>0\), the expectation value of \(\sigma_{1/2}\) is non vanishing, whereas average of \(\mu_{1/2}\) vanishes. Also, for \(m_{\pm}<0\), the average of \(\sigma_{\pm}\) vanishes, while average of \(\mu_{\pm}\) becomes finite. The \(\mathbb{Z}_{3}\) symmetric case is more complicated. If the transition is of the second order then some insight can be drawn from,[74] where a similar model at the transition point was represented as a sum of the critical \(\mathbb{Z}_{3}\) Potts model and a W\({}_{3}\) Conformal Field Theory perturbed by a relevant operator: \[H=H_{\mathbb{Z}_{3}}^{0}+H_{W_{3}}^{0}-\gamma\Phi_{\lambda_{1}+\lambda_{2},0}, \tag{28}\] where \(\lambda\)'s are fundamental weights of the SU(3) group. The perturbed \(W_{3}\) theory is massive. ## VI Doping All previous calculations remain valid if the chemical potential is slightly away from the vHS. In the low temperature state the CDW order will lead to reconstruction of the Fermi surface through Brillouin zone folding with appearance of small Fermi pockets as described in, for example, in.[35] Once the temperature exceeds \(T_{c2}\) the individual CDW order will melt, but the spectral gaps will survive. The system will enter in a pseudogap regime similar to the one observed in the underdoped cuprates where below the certain crossover temperature most of the original Fermi surface gradually fades away and the low energy spectral weight is concentrated at small pockets. As in the cuprates the predicted crossover is not accompanied by a broken lattice symmetry. The ideas that melting of the low temperature Neel order may explain the observations of the Fermi surface arcs.[78] ## VII Conclusions In this work we have studied a fluctuation regime in the CDW order, within an effective low energy interacting patch model[49] describing a layered kagome system or a two-dimensional film. We study the fluctuation by considering a field theoretic technique which allows us to treat simultaneously the effects of the discrete symmetry breaking order and the vortex physics. We observe that the interplay of fluctuations and topology (vortices) _in two dimensions_ leads to formation of a special regime where the individual low temperature CDW orders melt restoring the lattice symmetry but keeping intact the quasiparticle gaps. At further lowering of the temperature the system undergoes a phase transition into the phase with individual CDW order. 
The suggested mechanism is similar to the mechanism of formation of charge \(6e\) superconducting condensate described theoretically in [79; 80; 81; 85; 86; 87] and recently observed in the thin flakes of the kagome superconductor CsV\({}_{3}\)Sb\({}_{5}\).[82] The measurements were performed on mesoscopic CsV\({}_{3}\)Sb\({}_{5}\) rings are fabricated by etching the kagome superconductor thin flakes exfoliated from bulk samples. We suggest a similar arrangement for the CDW experiments. We identify the CDW transition as belonging to the \(\mathbb{Z}_{3}\) Potts universality class. ## VIII Acknowledgements We are grateful to Andrey Chubukov, Philippe Lecheminant and Dmitry Kovrizhin for valuable discussions. We are grateful to Philippe Lecheminant for attracting our attention to several beautiful papers relevant to the present topic. This work was supported by Office of Basic Energy Sciences, Material Sciences and Engineering Division, U.S. Department of Energy (DOE) under Contracts No. DE-SC0012704. ## Appendix A Free energy for a large value of G: The free energy considered in the section IV.1 is given by: \[\mathcal{F}/T =\sum_{a}\Big{[}\frac{1}{2T}(\partial_{x}\phi_{a})^{2}+\frac{T} {2}(\partial_{x}\bar{\phi}_{a})^{2}+i\partial_{x}\phi_{a}\partial_{y}\bar{ \phi}_{a}\] \[+Ag_{3}\cos(2\phi_{a})+\eta\cos(2\pi\bar{\phi}_{a})\Big{]}-G\cos( \phi_{1}+\phi_{2}+\phi_{3}). \tag{16}\] At \(\eta=0\) one can integrate over \(\partial_{x}\bar{\phi}_{a}\): \[\int D\bar{\phi}_{a}\exp\{-\int d^{2}x[\frac{T}{2}(\partial_{x} \bar{\phi}_{a})^{2}+i\partial_{x}\phi_{a}\partial_{y}\bar{\phi}_{a}]\}\] \[=const.\int D\bar{\phi}_{a}\exp\{-\int d^{2}x[\frac{T}{2}(\partial _{x}\bar{\phi}_{a})^{2}-i\partial_{y}\phi_{a}\partial_{x}\bar{\phi}_{a}]\}\] \[\sim\exp[-\frac{1}{2T}\int d^{2}x(\partial_{y}\phi)^{2}] \tag{17}\] The result is the partition function for \(\phi\) fields: \[Z[\phi]=\int D\phi_{a}\exp\{-\int d^{2}x[\frac{1}{2T}[(\partial _{x}\phi_{a})^{2}+\] \[(\partial_{y}\phi_{a})^{2}]+Ag_{3}\cos(2\phi_{a})-G\cos(\phi_{1}+ \phi_{2}+\phi_{3})\}. \tag{18}\] In a similar way at \(g_{3}=0\) we can integrate out the \(\phi\)-fields. In the limit of large G, we can consider the sum of all phases to be fixed. Hence the GL free energy can be transformed with \[\phi_{1} =\Phi/\sqrt{3}+(\sqrt{2/3})\chi_{1},\] \[\phi_{2} =\Phi/\sqrt{3}+(\sqrt{2/3})(-\chi_{1}/2+\sqrt{3}/2\chi_{2}),\] \[\phi_{3} =\Phi/\sqrt{3}-(\sqrt{2/3})(\chi_{1}/2+\sqrt{3}/2\chi_{2}). \tag{19}\] In this case, \(\cos(\phi_{1}+\phi_{2}+\phi_{3})=\cos(\sqrt{3}\Phi)\). When \(\Phi\) is frozen, the dual field \(\bar{\Phi}=\sum_{a}\bar{\phi}_{a}\) fluctuates strongly so that correlators of the dual exponents decay exponentially. Then the dual perturbation is generated in the second order in \(\eta\): \[\eta^{2}\int d^{2}x\exp\{i\bar{\Phi}/\sqrt{3}+\sqrt{8/3\pi} \mathbf{e}_{a}\mathbf{\bar{\chi}}\}_{\mathbf{r}}\times\] \[\exp\{-i\bar{\Phi}/\sqrt{3}-\sqrt{8/3\pi}\mathbf{e}_{b}\mathbf{\bar{\chi} }\}_{\mathbf{r}+\mathbf{x}}, \tag{20}\] giving rise to the operators \[\cos[\sqrt{8/3}\pi(\mathbf{e}_{a}-\mathbf{e}_{b})\mathbf{\bar{\chi}}], \tag{21}\] with scaling dimension \(2\pi/T\). In that case we get \[\mathcal{F}_{eff}/T =\sum_{a=1}^{3}\Big{[}\bar{A}g_{3}\cos(\sqrt{8/3}\mathbf{e}_{a}\mathbf{ \chi})-B\cos(2\pi\sqrt{2}\mathbf{\omega}_{a}\mathbf{\bar{\chi}})\Big{]}+\] \[\sum_{i=1,2}\Big{[}\frac{1}{2T}(\partial_{x}\chi_{i})^{2}+\frac {T}{2}(\partial_{x}\bar{\chi}_{i})^{2}+i\partial_{x}\chi_{i}\partial_{y}\bar{ \chi}_{i}\Big{]}. 
\tag{22}\] with \(\mathbf{\omega}_{a}=(0,1),(\sqrt{3},1)/2,(\sqrt{3},-1)/2\). More explicitly: \[\sum_{a}\cos(\sqrt{8/3}\mathbf{e}_{a}\mathbf{\chi})=\cos[\sqrt{8/3}\chi_ {1}]+\cos[\sqrt{2/3}(\chi_{1}-\sqrt{3}\chi_{2})]+\] \[\cos[\sqrt{2/3}(\chi_{1}+\sqrt{3}\chi_{2})], \tag{23}\] and, \[\sum_{a}\cos(2\pi\sqrt{2}\mathbf{\omega}_{a}\bar{\mathbf{\chi}})\Bigr{]} =\cos[2\pi\sqrt{2}\tilde{\chi}_{2}]+\cos[\pi\sqrt{2}(\tilde{\chi}_{ 2}+\sqrt{3}\tilde{\chi}_{1})]\] \[+\cos[\pi\sqrt{2}(\tilde{\chi}_{2}-\sqrt{3}\tilde{\chi}_{1})]. \tag{34}\]
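As a closing sanity check on the mean-field analysis of Section III, the short numerical sketch below diagonalizes the matrix of Eq. (9) with the patch dispersions of Eq. (2) and all \(|\Delta_{a}|=1\), as in Eq. (12). The momentum cut-off, the grid resolution, and the choice of real \(\Delta_{a}=\pm 1\) for the two 3Q configurations are our own illustrative assumptions, not taken from the text; the sketch merely confirms numerically that one sign of the triple product leaves a gapless quasiparticle branch (a residual Fermi surface, cf. Fig. 2) while the other fully gaps the patch.

```python
import numpy as np

def dispersions(kx, ky):
    """Saddle-point dispersions of Eq. (2), with momenta measured from the M_a points."""
    k1, k2 = kx + np.sqrt(3.0) * ky, kx - np.sqrt(3.0) * ky
    return k1 * (k1 + k2), -k1 * k2, k2 * (k1 + k2)

def min_gap(delta, kmax=1.5, n=151):
    """Smallest |E| of the mean-field matrix of Eq. (9) over a k-grid (mu = 0, |Delta_a| = 1)."""
    d1 = d2 = d3 = delta
    gap = np.inf
    for kx in np.linspace(-kmax, kmax, n):
        for ky in np.linspace(-kmax, kmax, n):
            e1, e2, e3 = dispersions(kx, ky)
            H = np.array([[e1, d3, d2],
                          [np.conj(d3), e2, np.conj(d1)],
                          [np.conj(d2), d1, e3]])
            gap = min(gap, np.min(np.abs(np.linalg.eigvalsh(H))))
    return gap

# One sign of the (real) triple product leaves states at E = 0, i.e. a residual Fermi
# surface; the other sign gaps the whole patch.
print("Delta_a = +1 :", min_gap(+1.0))   # ~0 up to grid resolution -> gapless branch
print("Delta_a = -1 :", min_gap(-1.0))   # O(1)                     -> fully gapped
```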
2301.04684
Design and Characterization of Viscoelastic McKibben Actuators with Tunable Force-Velocity Curves
The McKibben pneumatic artificial muscle is a commonly studied soft robotic actuator, and its quasistatic force-length properties have been well characterized and modeled. However, its damping and force-velocity properties are less well studied. Understanding these properties will allow for more robust dynamic modeling of soft robotic systems. The force-velocity response of these actuators is of particular interest because these actuators are often used as hardware models of skeletal muscles for bioinspired robots, and this force-velocity relationship is fundamental to muscle physiology. In this work, we investigated the force-velocity response of McKibben actuators and the ability to tune this response through the use of viscoelastic polymer sheaths. These viscoelastic McKibben actuators (VMAs) were characterized using iso-velocity experiments inspired by skeletal muscle physiology tests. A simplified 1D model of the actuators was developed to connect the shape of the force-velocity curve to the material parameters of the actuator and sheaths. Using these viscoelastic materials, we were able to modulate the shape and magnitude of the actuators' force-velocity curves, and using the developed model, these changes were connected back to the material properties of the sheaths.
Michael J. Bennington, Tuo Wang, Jiaguo Yin, Sarah Bergbreiter, Carmel Majidi, Victoria A. Webster-Wood
2023-01-11T19:22:12Z
http://arxiv.org/abs/2301.04684v1
# Design and Characterization of Viscoelastic McKibben Actuators with Tunable Force-Velocity Curves ###### Abstract The McKibben pneumatic artificial muscle is a commonly studied soft robotic actuator, and its quasistatic force-length properties have been well characterized and modeled. However, its damping and force-velocity properties are less well studied. Understanding these properties will allow for more robust dynamic modeling of soft robotic systems. The force-velocity response of these actuators is of particular interest because these actuators are often used as hardware models of skeletal muscles for bioinspired robots, and this force-velocity relationship is fundamental to muscle physiology. In this work, we investigated the force-velocity response of McKibben actuators and the ability to tune this response through the use of viscoelastic polymer sheaths. These viscoelastic McKibben actuators (VMAs) were characterized using iso-velocity experiments inspired by skeletal muscle physiology tests. A simplified 1D model of the actuators was developed to connect the shape of the force-velocity curve to the material parameters of the actuator and sheaths. Using these viscoelastic materials, we were able to modulate the shape and magnitude of the actuators' force-velocity curves, and using the developed model, these changes were connected back to the material properties of the sheaths. ## I Introduction Originally introduced in the 1930s-1940s [1], and popularized by Joseph McKibben in the 1950s [2, 3, 4], pneumatic artificial muscles are a commonly studied soft robotic actuator and have been used in traditional rigid robotics [1, 5, 6], soft robotic platforms [7, 8], and wearable and assistive devices [9, 10, 11, 12, 13]. Consisting of an inner rubber bladder and an outer constraining mesh, the McKibben actuator is able to achieve high actuator strains and large force relative to its light weight [14]. McKibben actuators are of particular interest in bioinspired robotics and prosthetics because of their functional similarity to biological muscle in terms of contracting in response to activation and introducing compliance into the system. As a consequence, they can serve as first-order hardware models of skeletal muscle [14, 15, 16, 4]. Current experimental characterizations and models of these actuators tend to focus on their quasistatic properties, relating their inflation pressure, length, and axial force [17, 18, 19, 20, 3], but less attention has been given to their dynamics properties. These properties are important both to the design and modeling of the dynamics of a robotic system composed by these actuators and to the use of McKibben muscles as biomimetic actuators. While few studies have been reported on the dynamic properties of McKibben actuators, those that have done so have often focused on the force-velocity relationship. For example, Tondu et al. performed isotonic quick-release experiments on McKibben actuators and showed that, for a particular combination of rubber bladder and mesh materials, the force-velocity relationship can resemble that of the Hill muscle model [14]. Other works have shown that the velocity-dependence of the McKibben actuator's force is minimal compared to that of biological muscle [15, 21]. The authors instead augmented the muscle with parallel hydraulic damping elements to better mimic the biological tissue [15]. 
However, these solutions either rely on very particular woven mesh materials or large auxiliary equipment to tune the shape of the actuator's force-velocity response. In this work, we begin to investigate the force-velocity relationship of McKibben actuators and the ability to tune these relationships using simple viscoelastic material sheaths. Four different actuator architectures are investigated using actuators of three different diameters. The force-velocity response of these viscoelastic McKibben actuators (VMA) is measured using iso-velocity tests adapted from the muscle physiology literature [22]. To connect the measured force-velocity response to the material properties of the sheath and the mechanics of the underlying McKibben actuator, a simplified 1D model, consisting of parallel chains of standard Fig. 1: Viscoelastic McKibben Actuator (VMA): (a) Plain McKibben Actuator (control), (b) Ecoflex-30 sheath, (c) urethane sheath, (d) Ecoflex-30 and Carbopol composite sheath (10mm diameter shown for all). Each VMA contains a plain McKibben actuator at its core, fabricated in the same method as the control. linear solid elements (SLSEs), is formulated. ## II Materials and Methods ### _Actuator Design and Fabrication_ Each viscoelastic muscle actuator consists of a traditional McKibben actuator, serving as the contractile element, and a viscoelastic sheath around the McKibben, serving as a passive damper (Fig. 2a). Four 90 mm long McKibben actuators each of three different diameters (6 mm, 10 mm, 12 mm nominal mesh diameter) were fabricated. The design of the actuator was adapted from [7]. Briefly, a latex balloon inner bladder is connected to two barbed tube ends and is constrained by commercially available overexpanded cable meshes (PET Expandable Sleeving, Alex Tech). Kevlar fibers and cyanoacrylate glue were used to seal and connect the bladder and mesh to the end caps of the actuator. Thin hollow sheaths of different viscoelastic and thixotropic materials were attached to the outside of the McKibben to act as the damping element of the actuator. To create the outer viscoelastic sheaths, 2 single-layered, concentric-cylindrical models were 3D printed (Object 30, Stratasys), with inner diameters of 9 mm and 12 mm. For both diameters, the resulting sheath has a thickness of 2 mm. Polyurethane (Vytaflex, Smooth-On Inc.), Ecoflex-30 (Ecoflex 00-30, Smooth-On Inc.) and 5% Carbopol (Carbomer 940, Sanare) gel were used to fabricate the McKibben sheaths. For both the Ecoflex-30 and polyurethane sheaths, the liquid elastomer was prepared by mixing the 2-part polymer in a 1:1 ratio. The mixed polymer was placed in a vacuum chamber for 5 minutes to remove air bubbles. The 3D-printed molds were prepared by spraying a thin layer of mold release (Ease Release 200, Mann Release Technologies) on the inner surfaces of the mold. The elastomer was then injected into the mold and cured at room temperature (25\({}^{\circ}\)C) for 12 hours. The Carbopol gel used in this project was adapted from [23]. First, 10g of Carbopol 940 powder (Carbomer) was mixed with 190g of deionized water. The mixture was then mechanically stirred for 4 hours. After stirring, 4g of 10M NaOH solution was added to the mixture. The new mixture was then mechanically stirred for 30 min. Finally, the gel was injected in between an Ecoflex-30 sheath and the McKibben actuator. The resulting sheaths were connected at the ends of the actuator using silicone epoxy (Sil-poxy, Smooth-On Inc.) and Kevlar threads. 
The geometric and material parameters of all 12 actuators fabricated for experimental characterization, with and without sheaths, are provided in Table I. The fabricated length of each actuator was measured by a digital caliper. The Max Contraction Ratio is defined as the ratio of the length of the actuator at 20 psi to the length of the actuator at 0 psi (initial length). ### _Experimental Characterization_ Inspired by biological muscle testing [22], iso-velocity tests were performed at different pressure levels for all sample actuators on a universal material testing system (5969, Instron, 1 kN load cell). Inflation pressure was measured with a digital pressure sensor (ELVH-030G-HAND-C-PSA4, ALL SENSORS, maximum pressure 30 psi, resolution 0.1 psi) and recorded using a microcontroller (Teensy 3.6, PJRC). Two pairs of 3D printed holders were designed to hold both ends of the actuator and provide consistent friction between the actuator and the testing system. The force and length data from the universal material testing system and pressure data from the microcontroller were collected independently and synchronized later in MATLAB. Iso-velocity tests (Fig. 4 (a)) were performed at five velocity magnitudes (2, 4, 6, 8, 10, all in mm/s) at 4 pressure levels (5, 10, 15, 20, all in psi). All five velocities were tested in a single session at a given pressure level. For a given pressure: the actuator was first held at its rest length in the testing system and pressurized to the desired level. After allowing the actuator force to reach steady state, the actuator was stretched between +4 and -4 mm at 0.01 mm/s for one cycle, returned to the unpressurized rest length, and again allowed to come to steady state. This was done to minimize preconditioning effects on the first ramp. For each velocity magnitude \(v\): the actuator was stretched 2 mm at \(v\) mm/s and then held for 30 seconds. The actuator was then returned to the unpressurized rest length at 0.01 mm/s and held for 30 seconds. This same profile was then repeated at Fig. 2: Fabrication, Characterization, and Modeling. (a) Each viscoelastic muscle actuator consists of a standard McKibben actuator (fabricated following [7]) and a viscoelastic polymer sheath. (b) To characterize the dynamic properties of the actuators, iso-velocity experiments were performed on an Instron 5969 at various velocities and inflation pressures, (c) The dynamics of the actuators were modeled using parallel chains of Standard Linear Solid elements (SLSE), with one arm capturing the dynamics of the McKibben actuator and the other the dynamics of the sheath material. Using this model, an analytical expression for the force-velocity curves can be obtained (d), and the shape of the curve can be related to the material properties of the constituents. The height of this curve above the \(v=0\) point, \(\Delta FV(v)\), can be related to two material properties of the actuator. Here, shortening velocity (negative of the extension rate) is reported in alignment with standard muscle physiology experiments. a velocity of \(-v\) mm/s. For 5 psi, only 1 mm of extension was applied, as shortening by more than 1mm from the _unpressurized_ rest length would have led to shortening below the _pressurized_ rest length of the actuators. Five repetitions of this full protocol were conducted for each actuator at each pressure and velocity. 
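For readers who wish to reproduce the loading protocol, the following sketch generates the commanded extension-versus-time trajectory described above (ramp to +2 mm at \(v\), 30 s hold, slow return at 0.01 mm/s, 30 s hold, then the same at \(-v\)). It is a minimal illustration of the test trajectory only; the sampling interval and the helper name `iso_velocity_profile` are our own choices, and it does not model the Instron control loop or the measured force.

```python
import numpy as np

def iso_velocity_profile(v_mm_s, d_ext_mm=2.0, hold_s=30.0, return_rate_mm_s=0.01, dt=0.01):
    """Commanded extension (mm, relative to the unpressurized rest length) vs. time for one
    +v / -v iso-velocity cycle of the protocol described above."""
    segments, x = [], 0.0
    for sign in (+1.0, -1.0):
        target = sign * d_ext_mm
        ramp_pts = max(int(abs(target - x) / v_mm_s / dt), 2)
        segments.append(np.linspace(x, target, ramp_pts))          # ramp at +/- v mm/s
        x = target
        segments.append(np.full(int(hold_s / dt), x))              # 30 s hold
        return_pts = max(int(abs(x) / return_rate_mm_s / dt), 2)
        segments.append(np.linspace(x, 0.0, return_pts))           # slow return at 0.01 mm/s
        x = 0.0
        segments.append(np.full(int(hold_s / dt), x))              # 30 s hold at rest length
    ext = np.concatenate(segments)
    return np.arange(ext.size) * dt, ext

t, ext = iso_velocity_profile(v_mm_s=6.0)   # e.g. the 6 mm/s case (use d_ext_mm=1.0 at 5 psi)
```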
### _Modeling_ To relate changes in the experimental force-velocity curves to design parameters of the actuator materials, a simplified, 1D model was developed where both the McKibben actuator and polymer sheaths were treated as standard linear solid elements (SLSE) (Fig. 2c). The resulting force-velocity expressions are parameterized by mechanical properties of the actuator constituents and can therefore be used as a design tool to inform future designs. In this model, elastic elements are assumed to have a force linearly proportional to strain (\(F=k\varepsilon\) where the normalized stiffness \(k\) has units [N]), and the viscous elements are assumed to have a force linearly proportional to the strain rate (\(F=\eta\dot{\varepsilon}\) where the damping coefficient \(\eta\) has units [Ns]). For the case of a single SLSE, the system force is given by: \[F(t)=k_{1_{i}}\varepsilon(t)+k_{2_{i}}\varepsilon_{2_{i}}(t) \tag{1}\] where \(k_{1_{i}}\) and \(k_{2_{i}}\) are the stiffness of the parallel and series elastic elements, and \(\varepsilon\) and \(\varepsilon_{2_{i}}\) are the strains of the parallel and series elastic elements of the \(i^{\text{th}}\) SLSE. For the actuators presented here only two SLSEs are included: a control McKibben (c) and the sheath (s). For each SLSE, \[\dot{\varepsilon}_{2_{i}}=\dot{\varepsilon}-\frac{k_{2_{i}}}{\eta_{i}} \varepsilon_{2_{i}} \tag{2}\] where \(\eta_{i}\) is the damping coefficient of the series damper. Starting from steady state (\(\dot{\varepsilon}_{2_{i}}(t=0^{-})=0\)), a constant strain rate ramp (\(\dot{\varepsilon}=\hat{v}\) where \(\hat{v}\) has units [1/s]) yields a system force \[F_{i}(t)=k_{1_{i}}(\varepsilon_{0}+\hat{v}t)+\frac{\hat{v}\eta_{i}}{k_{2_{i}}} (1-e^{-\frac{k_{2_{i}}}{\eta_{i}}t})\,. \tag{3}\] For a fixed final applied strain \(d\varepsilon\), the peak system force is a function of the velocity (\(t_{peak}=d\varepsilon/\hat{v}\)). Normalizing by the pre-extension, steady state force (\(F_{i}(t=0^{-})=k_{1_{i}}\varepsilon_{0}\)), the force-velocity curve for the model can be written as: \[FV_{i}(\hat{v})=1+\text{sgn}(\hat{v})\frac{d\varepsilon}{\varepsilon_{0}}+ \frac{\hat{v}\kappa_{i}}{\varepsilon_{0}\gamma_{i}}(1-e^{-\text{sgn}(\hat{v} )\frac{\gamma_{i}d\varepsilon}{\hat{v}}}) \tag{4}\] where \(\kappa_{i}=k_{2_{i}}/k_{1_{i}}\) is the relative stiffness of the elastic elements and \(\gamma_{i}=k_{2_{i}}/\eta_{i}\) is the inverse of the time constant of the viscous arm (Fig. 2d). Here, \(\text{sgn}(x)\) is the sign function (\(1:x>0,-1:x<0\)). The height of the force-velocity curve above the \(v=0\) discontinuity then takes the form: \[\Delta FV_{i}(\hat{v})=\frac{\hat{v}\kappa_{i}}{\varepsilon_{0}\gamma_{i}}(1- e^{-\text{sgn}(\hat{v})\frac{\gamma_{i}d\varepsilon}{\hat{v}}})\,. \tag{5}\] Using this equation, the parameters \(\kappa_{i}\) and \(\gamma_{i}\) can be related to the shape of force-velocity curve. Specifically, the horizontal asymptote is given by: \[\Delta FV_{i}(\hat{v}_{\infty})=\frac{d\varepsilon}{\varepsilon_{0}}\kappa_{i} \tag{6}\] and the velocity \(\hat{v}_{\alpha}\) at which the force-velocity curve reaches \(\alpha\Delta FV_{i}(\hat{v}_{\infty})\) can be approximated as: \[\hat{v}_{\alpha}\approx\frac{d\varepsilon}{2(1-\alpha)}\gamma_{i}\,. \tag{7}\] This approximation is valid within 5% for \(\alpha>0.75\). Thus the height of the force-velocity curve is governed by \(\kappa_{i}\) and the steepness of the force-velocity curve is governed by \(\gamma_{i}\) (Fig.3a,b). 
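As a quick way to explore how \(\kappa_{i}\) and \(\gamma_{i}\) shape the curve, the snippet below implements Eqs. (4)-(7) directly. The parameter values are illustrative only (\(\kappa=10\) and \(\gamma=50\) mirror the values used in Fig. 3, while \(d\varepsilon\) and \(\varepsilon_{0}\) are arbitrary); it simply confirms that the closed-form asymptote and the \(\hat{v}_{\alpha}\) approximation behave as stated.

```python
import numpy as np

def delta_fv(v_hat, kappa, gamma, d_eps, eps0):
    """Height of the force-velocity curve above the v = 0 discontinuity, Eq. (5).
    v_hat is the (nonzero) extension strain rate in 1/s."""
    v_hat = np.asarray(v_hat, dtype=float)
    return (v_hat * kappa / (eps0 * gamma)) * (1.0 - np.exp(-np.sign(v_hat) * gamma * d_eps / v_hat))

def fv(v_hat, kappa, gamma, d_eps, eps0):
    """Normalized peak force at the end of a constant strain-rate ramp, Eq. (4)."""
    return 1.0 + np.sign(v_hat) * d_eps / eps0 + delta_fv(v_hat, kappa, gamma, d_eps, eps0)

# Illustrative parameters: kappa = 10, gamma = 50 mirror Fig. 3; d_eps and eps0 are arbitrary.
kappa, gamma, d_eps, eps0 = 10.0, 50.0, 0.02, 1.0

# Horizontal asymptote, Eq. (6): evaluate at a very large strain rate.
print(delta_fv(1e4, kappa, gamma, d_eps, eps0), d_eps / eps0 * kappa)           # both ~0.2

# Eq. (7): the curve reaches ~alpha of its asymptote at v_alpha (within 5% for alpha > 0.75).
alpha = 0.9
v_alpha = d_eps / (2.0 * (1.0 - alpha)) * gamma
print(delta_fv(v_alpha, kappa, gamma, d_eps, eps0) / (d_eps / eps0 * kappa))    # ~0.9
```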
For the two-SLSE case (McKibben actuator and the sheath), similar relationships can be found. The shape of the force-velocity curve takes the form: \[\Delta FV_{c+s}(\hat{v})=\frac{\beta_{c}}{\beta_{c}+\beta_{s}}\Delta FV_{c}( \hat{v})+\frac{\beta_{s}}{\beta_{c}+\beta_{s}}\Delta FV_{s}(\hat{v}) \tag{8}\] where \(\Delta FV_{s}\) and \(\Delta FV_{c}\) both take the form of \(\Delta FV_{i}\) from the 1 SLSE case. The height and steepness are governed by weighted averages of \(\kappa_{i}\) and \(\gamma_{i}\): \[\Delta FV_{c+s}(\hat{v}_{\infty})=\frac{d\varepsilon}{\varepsilon_{0}}\frac{ \beta_{c}\kappa_{c}+\beta_{s}\kappa_{s}}{\beta_{c}+\beta_{s}} \tag{9}\] and \[\hat{v}_{\alpha}\approx\frac{d\varepsilon}{2(1-\alpha)}\frac{\beta_{c}\kappa_{ c}\gamma_{c}+\beta_{s}\kappa_{s}\gamma_{s}}{\beta_{c}\kappa_{c}+\beta_{s}\kappa_{s}} \tag{10}\] where \(\beta_{i}={k_{1_{i}}}/{k_{1_{i}}}\) is the stiffness ratio of parallel elastic element to the McKibben parallel stiffness (\(\beta_{c}=1\)) in the different elements. With different combinations of \(\beta_{s}\), \(\kappa_{s}\), and \(\gamma_{s}\), the force-velocity response of the McKibben actuator can be tuned (Fig. 3 (c),(d)). These expressions can also be extended to any number of parallel SLSEs following this weighted average scheme. ### _Analysis_ Experimental force-velocity curves were compiled for each actuator and each pressure level using data measured during characterization experiments. Individual velocity ramps were identified and extracted from the larger experiment (Fig. 4a). The average velocity was found by fitting a piece-wise-linear ramp function to the extension data, the slope of which corresponds to the average velocity (Fig. 4b). The average velocity is then normalized by the pressurized rest length of the actuator to obtained the strain rate. The starting force (\(F_{0}\)) was calculated as the mean force during the two seconds prior to the start of the ramp. The peak force was taken as the force value when the extension first reached its target point. This was to avoid artifacts introduced by extension overshoot, which occurred at higher velocities. The peak force was then normalized by the starting force. These experimental force-velocity curves were then used to obtain model parameters for the McKibben actuators and viscoelastic sheaths as functions of pressure. For all experiments, the values of \(d\varepsilon\), \(\varepsilon_{0}\), \(\varepsilon\), and \(\hat{v}\) in the model were calculated relative to the pressurized rest length of the actuator. First, for each control McKibben actuator at each pressure, \(\kappa_{c}\) and \(\gamma_{c}\) were fitted using a nonlinear least squares method (code generated by using MATLAB Curve Fitting Toolbox). Parameter initialization was chosen based on the Equations 6 and 7 for the horizontal asymptote and \(\hat{v}_{\alpha}\). Specifically, the normalized force from the \(\pm 10\) mm/s tests were used as \(FV_{c}(\hat{v}_{\infty})\) to approximate \(\kappa_{c}\), and the data from the \(\pm 4\) mm/s was used as the \(\alpha\) point to estimate \(\gamma_{c}\). To fit \(\kappa_{s}\), \(\gamma_{s}\), and \(\beta_{s}\) for each material, diameter, and pressure, the corresponding McKibben parameters were set as \(\kappa_{c}\) and \(\gamma_{c}\) and not optimized. The parameter \(\beta_{s}\) was initialized to 1, and \(\kappa_{s}\) and \(\gamma_{s}\) were initialized following the same procedure as in the plain McKibben case. 
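A Python/SciPy analogue of this two-stage fitting procedure is sketched below (the authors used the MATLAB Curve Fitting Toolbox; `curve_fit` plays the same role here). The synthetic data, parameter values, and the simple initial guesses are placeholders for illustration; in practice the measured normalized peak forces would be fitted and the initial values would follow from Eqs. (6) and (7) as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

D_EPS, EPS0 = 0.02, 1.0   # applied strain and pre-ramp strain; illustrative placeholders

def delta_fv(v, kappa, gamma):
    """Single-SLSE force-velocity height above the v = 0 discontinuity, Eq. (5)."""
    return (v * kappa / (EPS0 * gamma)) * (1.0 - np.exp(-np.sign(v) * gamma * D_EPS / v))

def delta_fv_2slse(v, kappa_c, gamma_c, kappa_s, gamma_s, beta_s):
    """McKibben-plus-sheath model of Eq. (8), with beta_c = 1."""
    w_c, w_s = 1.0 / (1.0 + beta_s), beta_s / (1.0 + beta_s)
    return w_c * delta_fv(v, kappa_c, gamma_c) + w_s * delta_fv(v, kappa_s, gamma_s)

# Stand-ins for the experimental force-velocity points (strain rates in 1/s); synthetic here.
rng = np.random.default_rng(0)
v_exp = np.array([-10.0, -8.0, -6.0, -4.0, -2.0, 2.0, 4.0, 6.0, 8.0, 10.0]) / 90.0
dfv_ctrl = delta_fv(v_exp, 10.0, 5.0) + 0.002 * rng.standard_normal(v_exp.size)
dfv_vma = delta_fv_2slse(v_exp, 10.0, 5.0, 30.0, 10.0, 0.5) + 0.002 * rng.standard_normal(v_exp.size)

# Step 1: fit the plain McKibben parameters (kappa_c, gamma_c) from the control actuator.
(kappa_c, gamma_c), _ = curve_fit(delta_fv, v_exp, dfv_ctrl, p0=[5.0, 2.0])

# Step 2: freeze them and fit the sheath parameters (kappa_s, gamma_s, beta_s) from the VMA data.
sheath_model = lambda v, ks, gs, bs: delta_fv_2slse(v, kappa_c, gamma_c, ks, gs, bs)
(kappa_s, gamma_s, beta_s), _ = curve_fit(sheath_model, v_exp, dfv_vma,
                                          p0=[10.0, 5.0, 1.0], maxfev=10000)
print(kappa_c, gamma_c, kappa_s, gamma_s, beta_s)
```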
The same optimization process was then carried out for these three parameters. ## III Results and Discussion ### _Characterization_ The force-velocity response of the twelve actuators was measured as a function of the pressure, sheath material, and mesh diameter (Fig. 5). In these figures, the normalized shortening velocity (negative of the extension strain rate) Fig. 4: Characterization and Modeling of the Force-Velocity Curve. (a) An example iso-velocity experiment (6mm control McKibben actuator at 10 PSI) with the individual velocity ramps overlayed. Data from these force responses are used to construct an experimental force-velocity curve. (b) To avoid confounding effects from various amounts of overshoot, the peak force that is normalized by the initial force and the velocity is normalized by the pressurized rest length for the force-velocity curve is taken at the point when the ramp first reaches its target point. This occurs just prior to the extension overshoot. Fig. 3: Investigation of model parameters for a 1-SLSE model ((a) and (b)) and for a 2-SLSE model ((c) and (d)). Here the normalized shortening velocity (negative of the extension strain rate, \(v\)) is reported in alignment with standard muscle physiology experiments. (a) By varying the stiffness : damping ratio in the viscous arm of the SLSE, the slope of the force-velocity curve can be changed. As \(\gamma\) decreases (increased damping time constant), the force-velocity curve approaches a step response, with no velocity dependence. Conversely, as \(\gamma\) increases, the curve approaches a linear response. (b) By varying the stiffness ratio between the two arms of the model, the height of the force-velocity curve is changed, with the height increasing with increasing \(\kappa\). For (c) and (d), one SLSE in the model was fixed with \(\kappa_{1}=10\) and \(\gamma_{1}=50\), and the parameters of the other arm were varied. (c) By varying \(\gamma_{2}\) in the second arm, the slope of the force-velocity can be tuned, and (d) by varying \(\kappa_{2}\), the height of the force-velocity curve can be adjusted. In both cases, increasing \(\beta\) (the stiffness ratio of the two parallel elastic elements), the effect of changing \(\kappa_{2}\) or \(\gamma_{2}\) is amplified. (e) These parameters can grouped into four different material classes. By combining materials of classes, the force-velocity curve can be tuned for a desired dynamic response. is reported in alignment with standard muscle physiology experiments. The shortening velocity is normalized by the pressurized rest length of the actuator (units of shortening velocity here are [1/s]). Common force-velocity features were found across all actuators. Unlike what is predicted in the model, all actuators showed an asymmetric force-velocity response, with a larger magnitude asymptote for extensions (negative shortening velocities) than for shortening. This reflects the nonlinear stiffness properties of the McKibben actuators that have been previously reported, with stiffness increasing with increased length [3]. Additionally, an increase in pressure led to a decrease in the height of force-velocity curve at all velocities and diameters, suggesting a more elastically dominant behavior at high pressures (Fig. 3b,e). However, the difference in height for a given pressure increase diminished with increasing pressure. This is most pronounced in the 10 and 12 mm diameter actuators at the 5 psi level. This could be related to changes in the contact state of the inner bladder. 
In these larger actuators, the bladder is not in full contact with mesh at lower pressures, but at higher pressures has made full contact. This low-pressure discrepancy being related to the contact state is also supported by the observation this discrepancy is not seen in the 6 mm actuators, where the mesh is in full contact with the actuator even at low pressures. However, the mechanism that causes this contact state to result in a more viscous-dominated response would require additional investigation. The addition of the viscoelastic polymer sheaths was successful in altering the force-velocity response of the McKibben actuators. In the case of the 6 and 12 mm diameter urethane actuators, a more viscous-dominating response was achieved, with the height of the force-velocity curve being higher than the control actuator response at all pressures and velocities. The 10 mm urethane actuator showed a different response, with the height in much closer agreement with the control. This could be due to the 10 mm urethane actuator requiring a larger diameter sheath than the other 10 mm actuators. The larger diameter sheath was used because the smaller sheath diameter consistently ruptured at higher pressures. However, this meant that the sheath was less in contact with the underlying McKibben than in the 6 and 12 mm cases and was thus less engaged. Conversely, the Ecoflex sheath led to a decrease in the height of the force-velocity curve in extension for all diameters and pressure, showing a more elastic-dominant response. In shortening, the Ecoflex actuators showed closer agreement with the control actuators. Finally, the effect of the Carbopol actuators varied. For the 6 mm actuator, almost no change was seem from the control actuator. In the 10 mm actuator, the response was much closer to that of the Ecoflex, showing a more elastic-dominant response. In the 12 mm case, the effect varied with pressure and direction of motion, with an increased height Fig. 5: Experimental Characterization of Viscoelastic McKibben Actuators. Each column shows the data for a different actuator, and each row shows a different actuator diameter. Along the dashed line, the experimental data is reported as mean \(\pm\) 1 standard deviation. The solid line shows the corresponding model fit for that actuator and that pressure. For the control actuators, a 1-SLSE model is used, and for each of the VMA, a 2-SLSE model is used, with the control element parameters set by the corresponding control actuator model. Inset: The 5PSI curve for the 12mm urethane actuator is inset to allow a smaller axis range for the rest of the 12mm actuators. seen at 10 and 15 psi in extension, but no difference seen at 20 psi or in shortening at 10, 15 or 20 psi. This characterization is limited in a number of ways. The force-velocity curve, while relevant to the actuator in terms of its role as a model of skeletal muscle, is only one metric by which to determine these actuators' dynamic properties or the ability of these material sheaths to tune them. More complete characterization will require cyclic testing at various speeds to determine hysteresis as a function of velocity. Additionally, the minimum extension rate of 2 mm/s was near the horizontal asymptote for many actuators, resulting in poor characterization of the high slope region of the force-velocity curve near \(\hat{v}=0\). A more complete investigation of the force-velocity curve will require lower velocities to be incorporated. 
These higher test rates also resulted in extension and shortening overshoot in the tests, which made the calculation of the peak force and the following force decay more challenging. These overshoots would be minimized with lower velocity tests. ### _Modeling_ The presented model was able to successfully capture major trends in the experimental force-velocity curves (\(R^{2}=0.94\pm 0.05\) for all actuators and pressures), and the changes in the VMA curve relative to the control curves can be explained through the model parameters. For example, in all actuators, an increase in pressure leads to a decrease in the height of the force-velocity curve. This is expected under this model, as increasing the pressure of the McKibben actuator increases its stiffness [3], and this increased stiffness results in a lower \(\kappa_{c}\) and thus \(FV(\hat{v}_{\infty})\). This model can also be used to explain changes in the force-velocity curves associated with the material sheaths (Fig. 6). Based on preliminary materials testing, the urethane sheath would fall into the viscous dominant, long time constant class, and Ecoflex would fall into the elastic dominant, short time constant class (relative to the McKibben actuator). Therefore, we would expect that the urethane would cause an increase in the height of the force-velocity curve (Fig. 3e). However, with increased pressure, the relative stiffness of the McKibben to the urethane sheath increase (decreasing \(\beta_{s}\)), so we would expect this difference to decrease with increased pressure as the weighted average begins to favor the McKibben actuator (Fig. 6a). For the Ecoflex sheath, the relatively shorter time constant would lead to a high slope of the force-velocity curve, which is seen at low pressures (Fig. 6b). However, as with the urethane sheath, an increased pressure leads to the McKibben properties dominating once again. While this model can capture many of the trends in the data, there are some limitations in its accuracy and predictive power. Both the McKibben actuators and the sheath materials are non-linearly elastic, with their stiffness increasing with increased strain. This results in an asymmetrical force-velocity curve, with a larger response for extension (negative shortening velocity) relative to shortening at the same rate. This cannot be captured by the linear springs in the proposed model. As a consequence, the model fits tend to under-predict extension responses and to over-predict shortening responses. Furthermore, the asymmetry also results in high parameter uncertainty. Improvements can be made through the inclusion of nonlinear spring elements and more appropriate models of the McKibben actuator at the cost of decreased interpretability of the model parameters. Additionally, the optimized sheath parameters tend to vary with pressure, whereas it would be expected that they would be pressure-independent for linear materials. However, a pressure dependence would be expected for nonlinear materials, as the McKibben actuator's pressure will determine the deformation state of the sheath material. In the future, this pressure dependence could be incorporated into the model as well, but it would require 3D geometric information about the actuator. Both of these issues could be addressed by incorporating a more complete quasi-static McKibben model [3, 9, 17] to capture the strain stiffening of the McKibben actuators and provide the geometry needed to estimate the sheath stiffness pressure dependence. 
Finally, this model only includes damping from standard dash-pot elements. However, previous work has shown that a velocity dependence in McKibben actuators can actually come from non-linear friction interactions in the mesh material [14] and Coulomb friction between the bladder material and the sheath [4, 21]. Future model development should incorporate such friction into a more complete model of the McKibben to replace one of the SLSEs in this model. Fig. 6: Comparison of 1 SLSE and 2 SLSE Model. For (a) and (b), the black dashed line shows the model fit of the corresponding plain Mckibben actuators (1 SLSE model), and the solid colored line shows the adjusted 2 SLSE model. As with Fig. 5, the experimental data are shown as mean \(\pm\) 1 STD. (a) 6mm Urethane VMA. For all pressures, the viscous nature of the urethane led to an increased height of the force-velocity curve, captured by the 2 SLSE model have a higher asymptote. As pressure increases and the McKibben stiffens, this asymptote difference decreases as the McKibben begins to dominate. (\(\delta\Delta FV(\hat{v}_{\infty})=FV_{2SLSE}(\hat{v}_{\infty})-FV_{1SLSE}(\hat{v}_{ \infty})\)). (b) 10mm Ecoflex VMA. At low pressure, the low viscous effects (large \(\tau_{\rm E}\)) of the Ecoflex sheath are able to change the slope of the force-velocity curve, but at higher pressures, the relative stiffness of the McKibben actuator again dominates, making the VMA response into alignment with the standard McKibben actuator. ## IV Conclusion and Future Works This work presents the characterization and modeling of the force-velocity relationships of viscoelastic McKibben actuators (VMA). These VMAs consist of a standard McKibben actuator surrounded by a viscoelastic polymer sheath. Iso-velocity experiments were performed to measure the force-velocity response of these actuators, and a simplified 1D model was developed to relate the shape of these experimental force-velocity curves to the material properties of the actuators. Using these polymer sheaths, we were able to successfully augment the force-velocity response of a standard McKibben, changing either its asymptotic height or its slope. The 1D model performed well in capturing the trends in these force-velocity curves, but missed key features, including the asymmetry in extension/shortening and the pressure dependence of sheath properties. Future works on these actuators will include iso-velocity tests at slower speeds to further investigate the steep portion of the force-velocity curve near the \(\hat{v}=0\) discontinuity. Additionally, to increase the predictive power of the model, more accurate quasistatic models of the McKibben's length-pressure-force properties will be implemented to replace the linear spring element. This will also require the measurement of the actuator geometry during quasi-static testing. Geometric information from these models will be used to capture the deformation-dependent properties of the sheath materials as well. With better predictive power, these models can be used as a design tool for creating actuators with a desired force-velocity response. Future characterization will also include cyclic testing of the actuators at various speeds to more robustly investigate their dynamic properties. The work presented here lays the foundation for the fabrication and design of pneumatic actuators with tunable force-velocity dynamics for broad applications in bioinspired and biomimetic robotics. 
## Acknowledgements This work was supported in part by the National Science Foundation (NSF) through grant no. FRR-2138873, and in part by NSF DBI-2015317 as part of the NSF/CIHR/DFG/FRQ/UKRI-MRC Next Generation Networks for Neuroscience Program. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.
2307.03265
Mach number and wall thermal boundary condition effects on near-wall compressible turbulence
We investigate the effects of thermal boundary conditions and Mach number on turbulence close to walls. In particular, we study the near-wall asymptotic behavior for adiabatic and pseudo-adiabatic walls, and compare to the asymptotic behavior recently found near isothermal cold walls (Baranwal et al. (2022)). This is done by analyzing a new large database of highly-resolved direct numerical simulations of turbulent channels with different wall thermal conditions and centerline Mach numbers. We observe that the asymptotic power-law behavior of Reynolds stresses as well as heat fluxes does change with both centerline Mach number and thermal condition at the wall. Power-law exponents transition from their analytical expansion for solenoidal fields to those for non-solenoidal fields as the Mach number is increased, though this transition is found to be dependent on the thermal boundary conditions. The correlation coefficients between velocity and temperature are also found to be affected by these factors. Consistent with recent proposals on universal behavior of compressible turbulence, we find that dilatation at the wall is the key scaling parameter for these power-law exponents, providing a universal functional law which can serve as a basis for general models of near-wall behavior.
Akanksha Baranwal, Diego A. Donzis, Rodney D. W. Bowersox
2023-07-06T19:55:23Z
http://arxiv.org/abs/2307.03265v1
###### Abstract We investigate the effects of thermal boundary conditions and Mach number on turbulence close to walls. In particular, we study the near-wall asymptotic behavior for adiabatic and pseudo-adiabatic walls, and compare to the asymptotic behavior recently found near isothermal cold walls (Baranwal et al. (2022)). This is done by analyzing a new large database of highly-resolved direct numerical simulations of turbulent channels with different wall thermal conditions and centerline Mach numbers. We observe that the asymptotic power-law behavior of Reynolds stresses as well as heat fluxes does change with both centerline Mach number and thermal condition at the wall. Power-law exponents transition from their analytical expansion for solenoidal fields to those for non-solenoidal fields as the Mach number is increased, though this transition is found to be dependent on the thermal boundary conditions. The correlation coefficients between velocity and temperature are also found to be affected by these factors. Consistent with recent proposals on universal behavior of compressible turbulence, we find that dilatation at the wall is the key scaling parameter for these power-law exponents, providing a universal functional law which can serve as a basis for general models of near-wall behavior. Akanksha Baranwal\({}^{1}\)†, Diego A. Donzis\({}^{1}\), and Rodney D. W. Bowersox\({}^{1}\) Footnote †: Email address for correspondence: [email protected] ## 1 Introduction The detailed dynamics of turbulence near the wall has first-order effects on phenomena such as heat transfer and viscous drag. When speeds are relatively low, many aspects of these flows are relatively well understood, such as scaling laws for mean quantities and Reynolds stresses. The situation is more challenging at higher speeds, where compressibility effects become important and the physics more involved due to the interaction of hydrodynamics with thermodynamics. Understanding the detailed dynamics in such regimes is critical for accurate predictions and, ultimately, control of these flows. It is also critical for model development in the context of Reynolds Averaged Navier-Stokes (RANS) approaches, which are widely used in applications. Substantial effort has been devoted to developing RANS models for compressible wall-bounded flows with adiabatic and weakly cooled walls (Menter, 1992; Spalart and Allmaras, 1992; Catris and Aupoix, 2000). However, these models result in poor prediction of statistics at high speeds (Roy and Blottner, 2006; Rumsey, 2010; Aiken et al., 2020) due to the lack of an accurate representation of the different physics and flow behavior in different conditions. One important difference between different regimes is the wall thermal boundary condition (WTBC) which, in general, is modeled as adiabatic at supersonic speeds but cold-wall isothermal in hypersonic regimes. In certain situations, it is also possible to have mixed boundary conditions which can again alter the flow dynamics.
Direct numerical simulations (DNS) of a number of wall-bounded flows, such as channels (Coleman et al., 1995; Huang et al., 1995; Foysi et al., 2004; Morinishi et al., 2004; Gerolymos and Vallet, 2014; Sciacovelli et al., 2017; Yu et al., 2019; Yao and Hussain, 2020) and flat plate boundary layers ((Smits and Dussauge, 2006; Wenzel et al., 2018) and references therein) have been conducted to try to understand compressibility effects on turbulent statistics in high-speed regimes. Efforts have also been made to study the effects of WTBC on the scaling of velocity and temperature statistics and the relationship between them in high-speed regimes (Huang et al., 1995; Morinishi et al., 2004; Tamano and Morinishi, 2006; Mader, 2000; Duan et al., 2010; Shadloo et al., 2015; Hadjadj et al., 2015; Shahab et al., 2011; Zhang et al., 2014, 2018, 2022). Recent studies have investigated the effects of thermal wall condition on pressure fluctuations (Zhang et al., 2017, 2022), kinetic energy transfer (Xu et al. (2021)), density and temperature resolvent mode shapes (Bae et al. (2020)) highlighting WTBC effects on turbulent processes and structures. Several studies focused on finding scaling laws and others on using these scaling laws to collapse first and second order statistics in high-speed regimes for different flow conditions and different WTBCs (Brun et al., 2008; Zhang et al., 2012; Trettel and Larsson, 2016; Patel et al., 2015; Volpiani et al., 2020; Griffin et al., 2021). These WTBCs can be broadly characterized as isothermal (constant temperature) and isoflux (constant heat flux) conditions. For the former, studies have been conducted to investigate the effect of wall temperature, and for the latter, the effect of varying rate of heat transfer. Some studies have also used the so-called pseudo-adiabatic wall, a constant wall temperature (based on the recovery factor) whose value is such that the mean heat transfer to the wall vanishes, mimicking an adiabatic boundary. Some of the studies mentioned above (Shadloo et al. (2015); Wenzel et al. (2018); Zhang et al. (2022)) found that variation in turbulent statistics (e.g., mean velocity, mean temperature, Reynolds stresses) are not due to changes in the WTBC itself (i.e., change from isothermal to isoflux), but instead due to change in the heat transfer at the wall. However, direct effects of changing the boundary condition from isothermal to isoflux were observed on temperature fluctuation statistics, e.g. temperature fluxes in the near-wall region and these effects extended beyond the viscous sublayer. Another important observation was the change in asymptotic behavior of turbulent heat fluxes for different WTBCs. From a fundamental and a modeling perspective, it is crucial to understand the precise asymptotic behavior of turbulence close to the wall. Indeed, accurate predictions necessitates models to satisfy the correct asymptotic scaling laws (Lai and So, 1990; So et al., 1991, 1991; Zhang et al., 1992; Durbin, 1993; Sommer et al., 1993; So et al., 1998; Germano et al., 1991; Bowersox, 2009; Agrawal et al., 2022), and thus many studies have reported the asymptotic behavior of turbulent fluxes for both incompressible and compressible flows and under different WTBCs (Morinishi et al. (2004); Shadloo et al. (2015); Hadjadj et al. (2015); Zhang et al. (2022); Li et al. (2009)). All these studies compared their data to the theoretical asymptotes obtained from Taylor series expansions in wall-normal direction and found good agreement. 
However, none of these studies examined well-resolved wall asymptotes with a systematic variation of Mach number for different WTBCs. We have recently conducted DNS of turbulent channels with finer near-wall resolution than the standard in the literature to capture true asymptotic behavior (Baranwal et al. (2022)). In that study, which was done with cooled isothermal walls, we systematically varied the centerline Mach number from \(M\gtrsim 0.2\) (virtually incompressible) to \(M\lesssim 2.2\). We showed that turbulent stresses and wall-normal heat flux comprising at least one wall-normal velocity component do not collapse when the Mach number was changed as suggested by widely used scaling laws which, thus, undermines Morkovin's hypothesis. In particular, due to the extremely high wall resolution, we were able to unveil a new region very close to the wall where power-law scaling exponents were found to differ from theoretical asymptotes and, furthermore, depend on Mach number. Previous studies at the standard resolution are not able to capture this region. We have also found that increasing the centerline Mach number resulted in enhanced levels of dilatation motions at the wall which is the key factor to understand changes in the power-law asymptotes close to the wall. Dilatational levels at the wall were also found to be affected by WTBCs in boundary layers (Xu et al., 2021; Zhang et al., 2022). Zhang et al. (2022) further found that wall cooling effects on dilatation depends also on the Mach number. These complex dependencies on both WTBCs and Mach number is the motivation behind the present work. In particular we investigate, for the first time, the asymptotic behavior of various turbulent stresses and heat fluxes at different Mach numbers and for different WTBCs. This systematic investigation is possible due to extremely well resolved turbulent channels with centerline Mach number ranging from 0.2 and 2.2 with isothermal, adiabatic and pseudo-adiabatic walls. The new adiabatic and pseudo-adiabatic results complement the isothermal data in Baranwal et al. (2022). This is also relevant in the context of classical scaling laws based on Morkovin's hypothesis which are more effective at collapsing statistics when the walls are adiabatic or weakly-cooled, than when they are isothermal in which case there is significant wall cooling. Adiabatic walls, thus, possess the additional advantage of isolating the effects of Mach number from wall cooling and provide a more direct way to assess the effects of Mach number in isolation and the validity of Morkovin's hypothesis on the asymptotic scaling of turbulent statistics. The rest of the paper is organized as follows. We first present the numerical method, configuration, and DNS database. Then, we present results on the asymptotic behavior of Reynolds stresses and their dependency on centerline Mach number and WTBCs. This analysis is then extended to temperature fluctuations and heat fluxes. We conclude with a summary and some remarks on the implications of the results presented here. ## 2 Numerical Method We perform direct numerical simulations of the equations governing mass, momentum, and energy conservation for a compressible channel flow. The equations are discretized on a uniform mesh in the streamwise (\(x\)) and spanwise (\(z\)) directions. In the wall-normal (\(y\)) direction, the grid is clustered close to the wall using a hyperbolic tangent function. 
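The exact stretching function is not given in the text; purely as an illustration of this type of wall clustering, a minimal sketch follows, in which the mapping and the `stretch` factor are assumptions and not the authors' implementation.

```python
import numpy as np

def tanh_clustered_grid(ny, h=1.0, stretch=2.5):
    """Wall-normal grid on [0, 2h], clustered toward both walls with a tanh mapping.

    ny      : number of wall-normal grid points
    stretch : clustering factor; larger values concentrate more points near the walls
    """
    eta = np.linspace(-1.0, 1.0, ny)                           # uniform computational coordinate
    y = h * (1.0 + np.tanh(stretch * eta) / np.tanh(stretch))  # physical coordinate in [0, 2h]
    return y

y = tanh_clustered_grid(ny=129)
print(f"near-wall spacing: {y[1] - y[0]:.3e}, centerline spacing: {np.diff(y).max():.3e}")
```

Once scaled by the viscous length, the first off-wall spacing and the largest (centerline) spacing of such a grid roughly correspond to the \(\triangle y_{min}^{+}\) and \(\triangle y_{max}^{+}\) values reported in table 1.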
We use sixth-order compact schemes to compute spatial derivatives in the \(x\) and \(z\) directions. For the \(y\) direction, we utilize the sixth-order compact scheme in interior points and the order is reduced to fourth and third at the last two grid points in the domain. The variables are marched in time using a third-order low-storage Runge-Kutta scheme. More details on simulations can be found in Baranwal et al. (2022) where we also present detailed grid convergence studies and validations against other DNS databases in the literature (e.g. Coleman et al., 1995). The simulations presented here satisfy those resolution criteria which are summarized in table 1. Periodic boundary conditions are used in the streamwise and spanwise directions. At the walls, we apply no-slip boundary conditions for all velocity components. The boundary condition for pressure is obtained by evaluating the momentum equation in the normal direction at the wall which was found to have a greater numerical stability than the commonly used zero-pressure gradient (Baranwal et al., 2022). In all the simulations presented here, the bottom wall (\(y=0\)) is isothermal with \(T=300\). For the top wall, three different thermal boundary conditions are investigated, namely, isothermal, adiabatic and pseudo-adiabatic cases denoted by I, A and PA respectively. For isothermal cases, the top wall is kept at the same temperature as the bottom wall (\(T=300\)). These simulations, which were studied in our previous study (Baranwal et al. (2022)), act as base case to compare with other thermal wall conditions. For adiabatic cases, we specify zero temperature gradient at the top wall. This approach with mixed boundary conditions in a channel have been used before (Morinishi et al., 2004; Tamano and Morinishi, 2006; Zhang et al., 2022; Lusher and Coleman, 2022; Baranwal et al., 2023). Finally, the pseudo-adiabatic case consists of imposing an isothermal boundary condition at the average temperature obtained from the adiabatic simulation with all other flow parameters kept the same as in the adiabatic case. Following standard notation, the bulk, wall and centerline values of a variable \(f\) are denoted by \(f_{b}\), \(f_{w}\) and \(f_{c}\), respectively. Reynolds and Favre decompositions are denoted by \(\overline{q}+q^{\prime}\) and \(\widetilde{q}+q^{\prime\prime}\), respectively. The averages in these decompositions are taken along the homogeneous directions (i.e. \(x\)-\(z\) planes) and time. As done in Baranwal et al. (2022), snapshots of all fields are saved at time intervals of \(5h/\overline{u_{b}}\) for all simulations, where \(h\) is the channel half width and \(u\) is the streamwise velocity component. This time scale (\(h/\overline{u_{b}}\)) is commensurate with the eddy-turnover time of the turbulence in the center of the channel and thus representative of the largest turbulent structures. Our temporal averages involved 25 snapshots for velocity, density and temperature fields. Consider a forced, periodic channel with Dirichlet boundary conditions for temperature at the walls, that is isothermal walls. If initialized with zero velocity and constant temperature, the flow will accelerate and develop velocity gradients that lead to viscous dissipation. This leads to an increase in temperature inside the channel which is higher than that imposed at the walls. 
Because of the thermal gradient that forms at the wall, there is a flux of energy from the fluid to the wall and the flow eventually reaches a statistically steady state where the rate of production of internal energy due to viscous dissipation is compensated by the energy transfer through the wall. If, on the other hand, we apply a Neumann boundary condition for temperature at the wall, in particular zero temperature gradient, then the heat transfer to the walls is identically zero. In this case, the increase in temperature due to dissipation maintained by the forcing in the momentum equation is not balanced by heat flux through the walls. Therefore, the internal energy in the channel increases continuously leading to a time-dependent mean thermodynamic state. Alternatively, one can apply a (cold) isothermal condition to one wall and an adiabatic condition the other wall. This allows for heat transfer through one wall and results in a decreased rate of change of mean thermodynamic parameters. In this case, the flow also achieves a _pseudo-steady state_ where statistics (at least to second order) are in a statistically steady state when normalized by their corresponding (slowly varying) means. This can be seen in figure 1(b)(c)(d) where we show the temporal evolution of the root-mean-square (r.m.s.) of several variables normalized by their respective time-varying means for \(M_{b}\approx 1.2\) and very long simulation time (\(\approx 200h/u_{b}\)). While global quantities (Reynolds and Mach numbers in panel (a)) are seen to decrease slowly, normalized fluctuations statistics are virtually in a statistical steady state. This is, in fact, consistent with observation in forced isotropic flows (Kida & Orszag, 1990). We do note that there seems to be a (very weak) increase in \(T_{rms}\) at the centerline (\(y^{+}=173\), red symbols). Because our interest lies close to the wall, we have verified this trend very far from the wall is not a concern in this study. The normalized r.m.s. dilatation at the wall, as shown in figure 1 (a), \(\theta_{w,rms}^{+}=\overline{(\partial v^{\prime}/\partial y)_{w}^{2}}\nu_{w} ^{2}/u_{\tau}^{4}\) is another quantity of interest which also exhibits a steady-state behavior. We take advantage of this pseudo-steady state to find averages over the simulation time. The statistics below are based on this averaging. The friction Reynolds numbers based on wall quantities and the friction Reynolds numbers based on centerline viscosity and density, are defined as \(Re_{\tau}=\overline{\rho_{w}}u_{\tau}h/\overline{\mu_{w}}\) and \(Re_{\tau}^{*}\equiv\overline{\rho_{c}}(\tau_{w}/\overline{\rho_{c}})^{1/2}h/ \overline{\mu_{c}}\) respectively, with \(u_{\tau}\equiv\sqrt{\tau_{w}/\overline{\rho_{w}}}\) being the friction velocity. The centerline Reynolds number and centerline Mach numbers are \(Re_{c}\equiv\overline{\rho_{c}}\ \overline{u_{c}}h/\overline{\mu_{c}}\) and \(M_{c}\equiv\overline{u_{c}}/\sqrt{\gamma RT_{c}}\), respectively. Our domain has dimensions \(4\pi h\times 2h\times 4\pi/3h\) for all our simulations. This is larger than widely used in literateure ((e.g. Trettel & Larsson, 2016; Yu _et al._, 2019)). Finally, as a direct assessment of boundary conditions effects on the quantities studied here, we have run additional simulations with a domain which is 20% shorter and confirmed that the near-wall scaling laws are unaffected. Table 1 summarizes the important parameters for the DNS database used here. 
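For reference, the friction and centerline parameters defined above can be evaluated directly from the mean profiles. The sketch below is illustrative only; the array names, the one-sided estimate of \(\tau_{w}\) from the near-wall mean velocity gradient, and the air-like values \(\gamma=1.4\) and \(R=287\,\mathrm{J\,kg^{-1}\,K^{-1}}\) are assumptions rather than details taken from the paper.

```python
import numpy as np

def channel_parameters(y, u_bar, rho_bar, mu_bar, T_bar, h, gamma=1.4, R=287.0):
    """Friction and centerline parameters from mean (x-z- and time-averaged) profiles.

    y, u_bar, rho_bar, mu_bar, T_bar : 1D arrays from the wall (index 0) to the centerline (index -1)
    h : channel half width
    """
    # wall shear stress from the near-wall mean velocity gradient (one-sided difference)
    dudy_w = (u_bar[1] - u_bar[0]) / (y[1] - y[0])
    tau_w = mu_bar[0] * dudy_w
    u_tau = np.sqrt(tau_w / rho_bar[0])                           # friction velocity
    re_tau = rho_bar[0] * u_tau * h / mu_bar[0]                   # Re_tau based on wall quantities
    re_tau_star = rho_bar[-1] * np.sqrt(tau_w / rho_bar[-1]) * h / mu_bar[-1]  # Re_tau^* (centerline)
    re_c = rho_bar[-1] * u_bar[-1] * h / mu_bar[-1]               # centerline Reynolds number
    m_c = u_bar[-1] / np.sqrt(gamma * R * T_bar[-1])              # centerline Mach number
    return dict(tau_w=tau_w, u_tau=u_tau, Re_tau=re_tau,
                Re_tau_star=re_tau_star, Re_c=re_c, M_c=m_c)
```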
In subsequent sections, we investigate various statistics near isothermal (solid lines) and adiabatic walls (dash-dotted lines) for three different centerline Mach numbers, \(M_{c}\approx 0.23\) (red), \(M_{c}\approx 1.2\) (black) and \(M_{c}\approx 1.9\) (magenta) and near pseudo-adiabatic walls (dashed lines) for \(M_{c}\approx 1.2\) using our DNS database. The adiabatic and pseudo-adiabatic results are taken from the upper halves of channel from the A and PA simulations respectively where bottom walls are isothermal. The isothermal case throughout the work refers to simulations where both walls are isothermal, unless specifically noted otherwise. We note that wall quantities for a particular case refer to the statistics at the wall with that particular thermal boundary condition (e.g. for pseudo-adiabatic case, wall quantities refer to statistics at pseudo-adiabatic wall). ## 3 First-order statistics The mean streamwise velocity normalized by the friction velocity and the mean temperature normalized by the wall temperature are shown in figure 2 (a) and (b) respectively. Consistent with the literature, wall-normalization (\(u_{\tau}\)) performs well in collapsing velocity for all presented cases in the viscous sublayer and the majority of the buffer layer (\(y^{+}\lesssim 20\)) as shown in figure 2 (a). In the log-law region, such collapse is not \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Wall & \(M_{c}\) & \(Re_{c}\) & \(Re_{\tau}\) & \(Re_{\tau}^{*}\) & \(\triangle y_{min}^{+}\) & \(\triangle y_{max}^{+}\) & \(\triangle x^{+}\) & \(\triangle z^{+}\) & Line style \\ \hline Isothermal & 0.23 & 5692 & 295 & 293 & 0.08 & 2.9 & 14.5 & 4.8 & \\ Adiabatic & 0.23 & 5684 & 296 & 292 & 0.08 & 2.9 & 14.5 & 4.8 & \\ Isothermal & 0.35 & 5638 & 294 & 289 & 0.08 & 2.9 & 14.4 & 4.8 & \\ Isothermal & 0.46 & 5582 & 294 & 286 & 0.08 & 2.9 & 14.4 & 4.8 & \\ Isothermal & 0.57 & 5476 & 293 & 281 & 0.05 & 3.2 & 14.4 & 4.8 & \\ Adiabatic & 0.57 & 5476 & 293 & 281 & 0.05 & 3.2 & 14.4 & 4.8 & \\ Isothermal & 0.68 & 5498 & 301 & 283 & 0.05 & 3.3 & 14.8 & 4.9 & \\ Isothermal & 0.89 & 5371 & 307 & 276 & 0.05 & 3.4 & 15.1 & 5.0 & \\ Adiabatic & 0.84 & 5099 & 225 & 260 & 0.05 & 2.2 & 11.0 & 3.6 & \\ Isothermal & 1.26 & 5022 & 325 & 259 & 0.05 & 3.6 & 15.9 & 5.3 & \\ Adiabatic & 1.12 & 4513 & 177 & 226 & 0.05 & 1.7 & 8.7 & 2.9 & \\ Pseudo-adiabatic & 1.12 & 4306 & 179 & 220 & 0.05 & 1.7 & 8.8 & 2.9 & \\ Isothermal & 1.50 & 5489 & 393 & 277 & 0.10 & 4.0 & 19.3 & 6.4 & \\ Isothermal & 1.98 & 5631 & 572 & 279 & 0.10 & 6.2 & 14.0 & 4.7 & \\ Adiabatic & 1.9 & 5092 & 138 & 236 & 0.05 & 1.0 & 3.4 & 1.0 & \\ Isothermal & 2.22 & 5666 & 745 & 273 & 0.09 & 8.8 & 14.8 & 6.1 & \\ \hline \hline \end{tabular} \end{table} Table 1: Details of flow conditions and grid resolutions Figure 2: (a) Streamwise mean velocity normalized by the friction velocity (b) Mean temperature normalized by the mean wall temperature plotted against wall-normal coordinate in viscous units for isothermal (—), adiabatic (-\(\cdots\)) and pseudo-adiabatic (-\(\cdots\)) cases. Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\) respectively. Dotted red lines represent viscous and log layer scalings. observed and Mach number and WTBC effects exist on the mean velocity. In particular, Mach number effects are more pronounced in isothermal than in adiabatic cases due to the increased heat transfer to the wall as the Mach number increases for the former. 
In figure 2 (b) we find that, for isothermal cases, wall cooling leads to a temperature inside the channel which is higher than the wall temperature with a maximum at the centerline. This maximum temperature along with the wall cooling rate increase with the Mach number. For adiabatic cases, because of the zero heat flux at the top wall, there is a rise of temperature across the channel, with the maximum temperature at the upper wall. The temperature gradient for adiabatic and pseudo-adiabatic cases is not zero at the centerline, which may result in non-zero temperature fluxes at the channel half-width, an effect that will be discussed later. The mean viscosity, mean pressure and mean density are shown in figure 3 (a-f) against wall-normalized (a-c) and semi-local (d-f) wall-normal coordinates. On comparing figure 3 (a) with (d), (b) with (e), and (c) with (f), we observe that some features become independent of Mach number or WTBC when the statistics are plotted against the semi-local wall-normal coordinate \(y^{*}\equiv\overline{\rho}(\tau_{w}/\overline{\rho})^{1/2}y/\overline{\mu}\) as opposed to \(y^{+}\). For example, in figure 3 (e) we see that pressure start decreasing significantly only at \(y^{*}\approx 5\), reaching a minimum at \(y^{*}\approx 65\) for all \(M_{c}\), and then increasing towards the channel centerline. Similar observations can be made for viscosity and density for isothermal cases (figure 3 (d, f)). As expected, the mean viscosity follows a similar trend as the mean temperature (figure 2 (b)). Pressure is relatively constant across the channel with a small dip outside of the viscous sublayer which increases with \(M_{c}\) to about \(1.5\%\) at the highest Mach number shown (\(M_{c}\approx 1.9\)). In figure 3 (c) and (f) we show the mean density normalized by the mean density at the wall. Because the mean pressure is roughly constant across the channel, the mean density is inversely proportional to the mean temperature which is what we observe in these plots. Figure 3: Mean viscosity (a,d), pressure (b,e), and density (c,f) normalized by their corresponding wall values and plotted versus wall-normal coordinate in viscous units (a-c) and semi-local units (d-f) for isothermal, (—) adiabatic (\(\cdots\)-) and pseudo-adiabatic (\(\cdots\)-) cases. Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\) respectively. We can also see opposite trends depending on WTBC. For isothermal cases, the density decreases as one moves away from the wall or when the Mach number increases. For adiabatic and pseudo-adiabatic cases, on the other hand, the density increases as one moves away from the wall or when the Mach number decreases. A result of these trends is that, close to the wall, the density gradients are higher for isothermal cases at higher Mach numbers indicating that the statistics will change more rapidly from their wall values in the near-wall region when the level of compressibility and heat transfer to the wall are increased. Because of the different scaling observed with wall and semilocal units, a natural question is, thus, on the relation between these two normalized distances to the wall. In figure 4, we show the wall-normal coordinate in semi-local units (\(y^{*}\)) against the wall-normal coordinate in viscous units (\(y^{+}\)). The two normalizations are virtually the same in the viscous sublayer for the given range of Mach numbers. 
Further away from the wall, isothermal and adiabatic walls lead to opposite trends when the Mach number is increased. These can be explained as follows. From figure 3 (a) and (c), we found that away from the wall, viscosity increases and density decreases as the Mach number is increased for isothermal cases while opposite trends are observed for adiabatic cases. Thus, the ratio \(\sqrt{\rho}/\overline{\mu}\) decreases with increasing Mach number for isothermal cases, while this ratio increases for adiabatic cases when \(M_{c}\) is increased. Thus, following this trend, \(y^{*}\) decreases with \(M_{c}\) for fixed \(y^{+}\) for isothermal cases while it increases with \(M_{c}\) for adiabatic cases as shown in figure 4(a). We can also see that \(y^{*}\) at the centerline (\(y^{*}_{c}\)) reaches a range of values of \(y^{*}_{c}\approx 220-290\) for all Mach numbers and WTBCs. This range is much wider for wall units, \(y^{*}_{c}\approx 140-750\), which seems to support the idea that semi-local units provide a better self-similar normalization than wall units. However, we note that this may be, in part, due to the fact that simulations were conducted with an approximately constant \(Re^{*}_{\tau}\)(Trettel & Larsson, 2016). Further simulations at a wide range of \(Re^{*}_{\tau}\) are needed to provide a more definite assessment of this claim. Figure 4: Wall-normal coordinate in semi-local units versus wall-normal coordinate in viscous units for isothermal (—), and adiabatic (-\(\cdot\cdot\)) cases. Colors as in table 1. Black and blue arrows indicate increase in \(M_{c}\) for isothermal and adiabatic cases respectively. ## 4 Effects of thermal boundary conditions on turbulent stresses The wall-normal coordinate in semi-local units, \(y^{*}\) along with local density-weighted averaging have been widely used to try to collapse turbulent stresses in compressible wall-bounded flow with varying WTBC, with their incompressible counterparts (Huang et al., 1995; Foysi et al., 2004; Morinishi et al., 2004; Trettel and Larsson, 2016; Modesti and Pirozzoli, 2016; Zhang et al., 2018). We have recently shown Baranwal et al. (2022), however, that semi-local scaling is not able to collapse turbulent stresses \(R^{*}_{\alpha\beta}\equiv\widetilde{\rho\alpha^{\prime\prime}\beta^{\prime \prime}}/\tau_{w}\) (\(\alpha\) and \(\beta\) are velocity components; e.g., \(R^{*}_{uv}\equiv\widetilde{\rho u^{\prime\prime}\widetilde{v^{\prime\prime}}} /\tau_{w}\)) or the wall-normal turbulent heat flux, \(R^{*}_{vT}=\widetilde{\rho v^{\prime\prime}}\widetilde{T^{\prime\prime}}/(\rho _{w}u_{\tau}T_{\tau})\) close to an isothermal wall in turbulent channels for centerline Mach numbers ranging from the incompressible limit to supersonic regimes. This can also be observed here in, e.g., figure 5 (a)(b). In figure 5 (a) and (b), we show \(R^{*}_{vv}\) and \(R^{*}_{uv}\) respectively for three Mach numbers, \(M_{c}\approx 0.23,1.2\) and \(1.9\), for both isothermal (solid lines) and adiabatic (dash-dotted lines) walls. The figure also include one pseudo-adiabatic case (dashed line) at \(M_{c}\approx 1.2\). At the lowest Mach number (\(M_{c}\approx 0.23\)), turbulent stresses (\(R^{*}_{vv}\), \(R^{*}_{uv}\)) collapse well for isothermal and adiabatic walls suggesting no appreciable WTBC effect as one approaches the incompressible limit. 
As the Mach number is increased, however, we can clearly observe differences between isothermal, adiabatic and pseudo-adiabatic cases for \(R^{*}_{vv}\) and \(R^{*}_{uv}\) which are apparent for \(M_{c}\approx 1.2\) and beyond. This effect is especially strong in the viscous sub-layer where we can clearly see higher normal Reynolds stresses close to isothermal (solid line) than to adiabatic (dashed-dotted line) walls. However, one can also observe that some Mach number effects are similar in isothermal cases and adiabatic cases. Investigating these differences and similarities are the main focus of the current work. Three observations can be made. First, in the region adjacent to the wall, indicated by R1 in figure 5, we can see power-law behavior for both \(R^{*}_{uv}\) and \(R^{*}_{vv}\) with exponents that decrease with \(M_{c}\) for adiabatic cases and, as observed before, isothermal cases (Baranwal et al. (2022)). The slope of \(R^{*}_{uv}\) in R1, however, does change with WTBC when \(M_{c}\) is kept constant. This WTBC effect is much weaker for \(R^{*}_{vv}\). Second, \(R^{*}_{uv}\) and \(R^{*}_{vv}\) transition to another scaling regime, indicated as R2 in figure 5, with much weaker WTBC and \(M_{c}\) effects. Finally, the transition location changes with both \(M_{c}\) and WTBC. Taken Figure 5: (a)-(b) Density-scaled Reynolds stresses distributions versus semi-local wall-normal coordinate for isothermal (—), adiabatic (\(\cdots\)) and pseudo-adiabatic (\(\dashdot\)) cases. Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\) respectively. Insets show same profiles up to \(y^{*}\approx 300\). together, these general observations suggest that significant WTBC and Mach number effects are observed close the wall as Mach number increases. The near-wall asymptotic behavior of turbulent stresses can be theoretically estimated by expanding the constituent velocity components as Taylor series expansions in \(y\): \[u^{\prime}=a_{u}+b_{u}y+c_{u}y^{2}+\dots,\quad v^{\prime}=a_{v}+b_{v}y+c_{v}y^{2 }+\dots \tag{1}\] The coefficients \(a_{\alpha}\) for \(\alpha=u\) and \(v\) are identically zero due to the no-slip boundary condition at the wall. The other coefficients are given by \(b_{v}=\partial v^{\prime}/\partial y\), and \(c_{v}=(1/2)\partial^{2}v^{\prime}/\partial y^{2}\), and similarly for \(u\). If the flow is incompressible (solenoidal), mass conservation combined with the no-slip condition at the wall leads to an additional constraint in the wall-normal velocity component, namely, \(\partial v^{\prime}/\partial y=b_{v}=0\). On the other hand, if the flow is non-solenoidal, \(b_{v}\neq 0\). By taking the product between the expansions of different components and averaging, one can formulate Reynolds averaged turbulent stresses (\(R_{\alpha\beta}\equiv\overline{\alpha^{\prime}\beta^{\prime}}/u_{\tau}^{2}\)), resulting in near-wall scaling laws of the form \(R_{\alpha\beta}\approx\sigma_{\alpha\beta}y^{\gamma_{\alpha\beta}}\) with exponents summarized in table 2. These theoretical exponents are the same for \(R_{\alpha\beta}\) and \(R_{\alpha\beta}^{*}\) given that density has a finite value at the wall. From table 2, we see that the solenoidal and non-solenoidal exponents are different for turbulent stresses containing a wall-normal velocity component. As in Baranwal et al. (2022), we investigate exponents (\(\gamma_{\alpha\beta}\)) and pre-factors (\(\sigma_{\alpha\beta}\)) but extending the analysis to include WTBC effects. 
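To make the origin of the entries in table 2 explicit, one can take products of the expansions in (1) and average: \[\overline{v^{\prime}v^{\prime}}=\overline{b_{v}^{2}}\,y^{2}+2\,\overline{b_{v}c_{v}}\,y^{3}+\overline{c_{v}^{2}}\,y^{4}+\dots,\qquad\overline{u^{\prime}v^{\prime}}=\overline{b_{u}b_{v}}\,y^{2}+\left(\overline{b_{u}c_{v}}+\overline{c_{u}b_{v}}\right)y^{3}+\dots\] For non-solenoidal flow (\(b_{v}\neq 0\)) the leading terms give \(\gamma_{vv}=\gamma_{uv}=2\), while setting \(b_{v}=0\) (solenoidal) removes the lowest-order contributions and leaves \(\gamma_{vv}=4\) and \(\gamma_{uv}=3\), consistent with table 2.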
Following Baranwal et al. (2022), we fit power laws in regions R1 and R2, as shown in figure 5 for both wall (\(\overline{R_{\alpha\beta}}\) versus \(y^{+}\): \(\square\)) and semi-local (\(R_{\alpha\beta}^{*}\) versus \(y^{*}\): \(\triangle\)) normalizations, to obtain (\(\gamma_{\alpha\beta}^{+}\), \(\sigma_{\alpha\beta}^{+}\)) and (\(\gamma_{\alpha\beta}^{*}\), \(\sigma_{\alpha\beta}^{*}\)) respectively for all cases in our database. In figure 6 (a), we show the exponent \(\gamma_{vv}\) for isothermal (empty markers), adiabatic (dark-filled markers) and pseudo-adiabatic (light-filled markers) wall conditions as a function of \(M_{c}\). The theoretical asymptotic values in table 2 are expected to be attained for exponents in R1 (blue symbols) which are the closest to the wall. On changing thermal wall conditions, the difference between \(\gamma_{vv}\) in R1 is small for the same centerline Mach \begin{table} \begin{tabular}{l|c|c|c c c|c|} \hline & \multicolumn{2}{c|}{\(\overline{v^{\prime}v^{\prime}}\)} & \multicolumn{2}{c|}{\(\overline{u^{\prime}v^{\prime}}\)} & \multicolumn{2}{c|}{\(\overline{v^{\prime}T^{\prime}}\)} \\ & & & isothermal & adiabatic & pseudo-adiabatic \\ \hline solenoidal & 4 & 3 & 3 & 2 & 3 \\ non-solenoidal & 2 & 2 & 2 & 1 & 2 \\ \hline \end{tabular} \end{table} Table 2: Exponents \(\gamma_{\alpha\beta}\) for near wall asymptotic behavior for \(R_{\alpha\beta}\) (\(\alpha\) and \(\beta\) are \(u\), \(v\) or \(T\)). \begin{table} \begin{tabular}{l|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{Isothermal} & \multicolumn{2}{c|}{Adiabatic} & \multicolumn{2}{c|}{Pseudo-adiabatic} \\ & R1 & R2 & R1 & R2 & R1 & R2 \\ \hline wall & \(\square\) & \(\square\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) & \(\blacksquare\) \\ semi-local & \(\bigtriangleup\) & \(\bigtriangleup\) & \(\bigtriangleup\) & \(\bigtriangleup\) & \(\bigtriangleup\) & \(\bigtriangleup\) \\ \hline \end{tabular} \end{table} Table 3: Marker styles used for exponents \(\gamma_{vv}\) and \(\gamma_{uv}\) for different WTBCs and scaling regimes. number except for \(M_{c}=0.5\) where the adiabatic case has a slightly larger exponent. The exponent \(\gamma_{vv}\) approaches its solenoidal and non-solenoidal limiting behavior (see table 2) for \(M_{c}\lesssim 0.2\) and \(M_{c}\gtrsim 0.8\), respectively. Between these two limits there is a smooth transition with \(M_{c}\) for both isothermal and adiabatic cases. In figure 6(b) we show the exponents for the shear Reynolds stress, \(\gamma_{uv}\), versus \(M_{c}\) and observe a much stronger influence of thermal boundary conditions with larger values of \(\gamma_{uv}\) in R1 for adiabatic cases at all Mach numbers. The pseudo-adiabatic case appears to match the isothermal case, which may not be completely unexpected given that in this case we also impose a constant temperature at the wall. This may indicate that \(\gamma_{uv}\) is independent of \(T_{w}\) since exponents for isothermal and pseudo-adiabatic are very close to each other even though wall temperature is markedly different. Furthermore, this may also suggest that differences in exponents for isothermal and adiabatic cases are not due to differences in wall temperature. The values obtained for the R1 exponents, however, are independent of whether one uses wall or semilocal units for all WTBCs. This is in line with the theoretical behavior discussed earlier. In R2, semi-local normalization provides a better collapse of exponents with different WTBCs. 
This can be seen in figure 6(a) where we see that, for a fixed \(M_{c}\), there are negligible differences between \(\gamma_{vv}^{*}\) for isothermal (red empty triangles), adiabatic (red filled triangles), and pseudo-adiabatic (light red filled triangles) cases. Note also that \(\gamma_{vv}^{+}\) and \(\gamma_{vv}^{*}\) in R2 are the same for all Mach numbers for adiabatic cases but not for isothermal cases. For isothermal cases, when \(M_{c}\) is roughly above unity, \(\gamma_{vv}^{+}\) and \(\gamma_{vv}^{*}\) differ. This can be understood by noting that the temperature and density gradients are higher near the isothermal wall than the adiabatic wall. Therefore, in adiabatic cases, local density and viscosity are closer to wall values as compared to those in isothermal cases (also seen in figure 3(a)(c)). Similar behavior is observed for \(\gamma_{uv}\). In figure 5, we found that turbulent stresses in R2 are less affected by variations in Mach number when semi-local normalizations are used for different WTBCs. This is consistent with the results in figure 6(a)(b), where we see a very weak \(M_{c}\) effect on \(\gamma_{uv}^{*}\) (and to a lesser degree on \(\gamma_{vv}^{*}\)) for all WTBCs. Figure 6: Power law exponents for (a) wall-normal Reynolds stress (b) Reynolds shear stress plotted against centerline Mach number. Horizontal gray lines for solenoidal (- -) and non-solenoidal (- -) asymptotic exponents (table 2). Markers in all panels (table 3): \(\square\) indicates wall normalizations (\(R_{\alpha\beta}=\sigma_{\alpha\beta}^{+}(y^{+})^{\gamma_{\alpha\beta}^{+}}\)), \(\triangle\) indicates semi-local normalizations (\(R_{\alpha\beta}^{*}=\sigma_{\alpha\beta}^{*}(y^{*})^{\gamma_{\alpha\beta}^{*}}\)) for isothermal (empty markers), pseudo-adiabatic (light-filled markers) and adiabatic (dark-filled markers) cases. Blue and red markers correspond to R1 and R2 regions, respectively. The solid line in all panels connects isothermal data for comparison. In general, though, we observe a weaker \(M_{c}\) dependence for adiabatic than isothermal walls for exponents in wall units. In addition to obtaining exponents for isothermal cases from simulations where both walls are isothermal and at the same temperature, we also obtain the exponents close to the isothermal wall from simulations with different thermal boundary conditions (pseudo-adiabatic or adiabatic) on the other wall. The exponents \(\gamma_{vv}\) and \(\gamma_{uv}\) in R1 near the isothermal wall were found to, in fact, be independent of the boundary condition of the other wall, indicating that the near-wall asymptotic behavior is not significantly affected by the WTBC on the non-identical wall. The prefactors \(\sigma_{vv}\) (figure 7) approach their solenoidal and non-solenoidal (circles) analytical values for \(M_{c}\lesssim 0.2\) and \(M_{c}\gtrsim 0.8\), respectively, for isothermal (empty markers) and adiabatic (dark-filled markers) wall conditions.
Thus, from a purely kinematic standpoint, the particular scaling laws observed will depend only on dilatation (i.e. \(b_{v}\)) regardless of how those dilatations are generated. It is known that different levels of dilatation at the wall can be generated either by changing the centerline Mach number (Baranwal et al., 2022) or thermal boundary condition at the wall (Xu et al. (2021)). This is also clear in figure 7(b), where we observe that the level of dilatational motions at the wall is different for different Mach numbers and WTBCs. Dilatation levels are weaker for adiabatic than isothermal walls with the same \(M_{c}\). Pseudo-adiabatic walls have intermediate dilatation levels close to the wall. As previously stated, dilatation is a key factor governing the scaling laws, and one may, thus, expect better collapse of different statistics when using the dilatational content as a normalizing parameter. This general concept of universality based on the level of dilatational motions independent of the specific mechanism that generated them was indeed recently proposed (Donzis and Panickacheril, 2020) though only for homogeneous flows. To test these concepts, in figure 8 (a)(b) we show the exponents as a function of the r.m.s. of dilatation at the wall normalized with wall units, \(\theta_{w,rms}^{+}\). We clearly see a better collapse of exponents than in the corresponding panels (a) and (b) of figure 6, supporting the idea that dilatational levels, regardless of how they are generated, provide the appropriate scaling parameter for near-wall behavior at high speeds. This is consistent with Donzis and Panickacheril (2020) where the use of dilatational content as a governing parameter yielded a universal behavior for a number of statistics including pressure variance, dissipation, and skewness of the velocity gradients. From a modeling perspective, it may be useful to parametrize these seemingly universal curves. We have Figure 8: Power-law exponents in R1 for (a) wall-normal turbulent stress (b)turbulent shear stress plotted against r.m.s of dilatation at the wall. Markers as in table 3. Horizontal gray lines for solenoidal (- - -) and non-solenoidal (- -) asymptotic exponents (table 2). Solid lines are scalings, (a) \(2+2\exp(-10^{10}\theta_{w,rms}^{+}\)\({}^{1.69})\) (b) \(2+\exp(-126\theta_{w,rms}^{+}\)\({}^{0.45})\). found that these curves can be represented reasonably well with simple exponentials in \(\theta_{w,rms}^{+}\), which are included in figure 8(a)(b) and noted in its caption. On comparing figure 6(a) with (b), we find that the transition from the low to the high Mach number limit in R1 for \(\gamma_{uv}\) is smoother than that of \(\gamma_{vv}\) for isothermal as well as adiabatic cases (adiabtic cases exhibit an even slower transition than isothermal cases). A similar observation can also be made from figure 8 where the transition (with levels of dilatation at the wall in this case) is smoother for \(\gamma_{uv}\) as compared to \(\gamma_{vv}\). This suggests a slow decorrelation between \(u^{\prime}\) and \(v^{\prime}\) as compressibility levels increase close to the wall. To study this, we show in figure 9(a) the correlation coefficient \(C_{uv}\equiv\overline{u^{\prime}v^{\prime}}/u_{rms}v_{rms}\) for all isothermal cases in the database. 
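Before examining WTBC effects on \(C_{uv}\), we note that the R1 exponents and the parametrizations quoted in the caption of figure 8 are easily reproduced computationally. The sketch below is illustrative: the log-log least-squares fit over a user-specified \([y_{min},y_{max}]\) window is one standard way to extract the exponent and prefactor, not necessarily the exact procedure used here, while the two exponential curves are those given in the caption of figure 8.

```python
import numpy as np

def fit_power_law(y_plus, r_ab, y_min, y_max):
    """Fit R_ab ~ sigma * y**gamma over [y_min, y_max] by least squares in log-log space.

    The magnitude of R_ab is used so that negative stresses (e.g. R_uv) can be fitted.
    """
    mask = (y_plus >= y_min) & (y_plus <= y_max)
    gamma, log_sigma = np.polyfit(np.log(y_plus[mask]), np.log(np.abs(r_ab[mask])), 1)
    return gamma, np.exp(log_sigma)

# Exponential parametrizations of the R1 exponents in terms of the normalized wall
# dilatation (figure 8): gamma -> 4 or 3 as theta -> 0, and -> 2 at high dilatation.
def gamma_vv(theta_w_rms_plus):
    return 2.0 + 2.0 * np.exp(-1e10 * theta_w_rms_plus**1.69)

def gamma_uv(theta_w_rms_plus):
    return 2.0 + np.exp(-126.0 * theta_w_rms_plus**0.45)
```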
We similarly define the correlation coefficient \(C_{\alpha\beta}\) for arbitrary variables \(\alpha\) and \(\beta\) as \[C_{\alpha\beta}\equiv\frac{\overline{\alpha^{\prime}\beta^{\prime}}}{\alpha_{ rms}\beta_{rms}} \tag{10}\] We see that for the lowest Mach numbers, \(C_{uv}\) is relatively constant close to the wall (\(y^{*}\lesssim 1\)). As \(M_{c}\) increases, the overall magnitude of the correlation is reduced in this region, though all the lines seem to approach, a region of relatively constant correlation of about 0.45, a value consistent with those observed in supersonic boundary layers (Shadloo et al. (2015)). The distance from the wall at which this region starts, however, increases with \(M_{c}\), indicating that compressibility effects are felt at increasing distance from the wall as the Mach number increases. The increasing decorrelation close to the wall with \(M_{c}\) has also been observed in Sciacovelli et al. (2017), an effect that was also found to be independent of Reynolds number. This near-wall decorrelation that becomes stronger as \(M_{c}\) increases suggests that while a simple product of Taylor expansions can describe diagonal stresses (e.g. \(R_{uu}\) or \(R_{vv}\)), this is not the case for off-diagonal stresses (\(R_{uv}\)) which comprise the correlation between two different variables. In particular, we see that for low and high \(M_{c}\), the correlation \(C_{uv}\) is relatively constant close to the wall, though at different levels. It is at intermediate Mach numbers that \(C_{uv}\) presents a positive slope in this region. Thus, because \(\overline{u^{\prime}v^{\prime}}=C_{uv}u_{rms}v_{rms}\) we can see how the R1 exponent for \(R_{uv}\), would be close to the sum of the exponents for \(u_{rms}\) and \(v_{rms}\) for low and high \(M_{c}\) while it would be larger at intermediate \(M_{c}\). This explains, then, why the transition from the solenoidal to the non-solenoidal asymptotes is smoother for \(R_{uv}\) than for the case of diagonal stresses. At the centerline of the channel, \(C_{uv}\) vanishes due to reflective Figure 9: Correlation coefficient for \(R_{uv}\) (a) Isothermal wall (b) Isothermal (—), adiabatic (- -) and pseudo-adiabatic (- - -) walls. Inset contains the same data in linear scales. Colors as in table 1. symmetry across the centerline plane, which is seen as a rapid decrease in the correlation in the figure at high values of \(y^{*}\). To assess the effect of WTBC, in figure 9(b) we show the correlation coefficient for different boundary conditions and three Mach numbers, \(M_{c}\approx 0.23\), \(1.2\), and \(1.9\). As before, we see that \(C_{uv}\) is relatively flat at the lowest \(M_{c}\approx 0.23\) and for distances below \(y^{*}\sim O(1)\), with very little WTBC effect. The same weak dependence on WTBC is observed at \(y^{*}\) beyond, say, \(4\), where \(C_{uv}\) approaches the constant value discussed above. As the Mach number is increased, however, there are observable differences between isothermal, adiabatic and pseudo-adiabatic walls. In particular, we see that isothermal walls (black solid line) create a stronger decorrelation between \(u\) and \(v\) than adiabatic walls (black dashed-dotted line) for \(M_{c}\gtrsim 1.2\) and pseudo-adiabatic (black dashed line) for \(M_{c}\approx 1.2\). 
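As an implementation note, the correlation coefficient defined above is obtained by averaging over the homogeneous \(x\)-\(z\) planes and over snapshots, in the same way as the other statistics; a minimal sketch, in which the array layout and names are assumptions, is:

```python
import numpy as np

def correlation_coefficient(alpha, beta):
    """C_{alpha beta} profile from fluctuation fields of shape (nt, nx, ny, nz).

    The inputs are assumed to already have their means removed; averages are taken
    over snapshots and the homogeneous x-z planes, leaving a wall-normal profile.
    """
    axes = (0, 1, 3)                                   # time, x, z
    cov = np.mean(alpha * beta, axis=axes)
    alpha_rms = np.sqrt(np.mean(alpha**2, axis=axes))
    beta_rms = np.sqrt(np.mean(beta**2, axis=axes))
    return cov / (alpha_rms * beta_rms)

# e.g. C_uv from velocity fluctuation fields (hypothetical names):
# c_uv = correlation_coefficient(u_prime, v_prime)
```

Applying this to \(u^{\prime}\) and \(v^{\prime}\) yields the \(C_{uv}\) profiles shown in figure 9.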
In addition, there are differences in the slope for \(C_{uv}\) close to the wall between adiabatic and isothermal cases, especially for \(M_{c}\approx 1.2\) which also seem to contribute to the difference in power-law behavior for these two WTBCs. This is clearly evident in figure 6(b), where \(\gamma_{vv}\) for \(M_{c}\approx 1.2\) seems to have the largest difference between isothermal and adiabatic cases. Moreover, the distance from the wall at which constant region of \(C_{uv}\) starts, is larger for isothermal than adiabatic cases. Finally, in figure 10 we show the wall-normal location where \(R_{vv}\) and \(R_{uv}\) transition from region R1 to region R2, which is denoted as \(y_{tr}\). Consistent with the results in Baranwal et al. (2022) we see in panel (a) that \(y_{tr}\) moves away from the wall as \(M_{c}\) is increased. However, we also observe clear WTBC effects. In particular, we see that for adiabatic walls (dark-filled symbols) the transition moves closer to the wall compared to isothermal (empty symbols) and pseudo-adiabatic (light-filled symbols) walls. For example, for high \(M_{c}\), we see close to order-of-magnitude differences in \(y_{tr}\) between isothermal and adiabatic cases for \(R_{vv}\). As before (figure 8(a)(b)), we can explore the suggestion in Donzis and Panickacheril (2020) that a higher degree of universal behavior will be observed when dilatational motions are used to scale statistics of interest. This is indeed supported by the data in figure 10(b) where we show \(y_{tr}\) as a function of \(\theta^{+}_{w,rms}\). Data for both \(R_{vv}\) and \(R_{uv}\) appear to be closer to exhibiting universal scaling (though not perfect) under this normalization. Figure 10: Transition location of scaling exponents plotted versus (a) centerline Mach number (b) r.m.s dilatation at the wall. Markers in all panels: (\(\square\), \(y^{+}\); \(\triangle\), \(y^{*}\)) Black and Blue colored markers correspond to wall-normal Reynolds stress and shear Reynolds stress respectively for isothermal (empty markers), adiabatic (dark-filled markers) and pseudo-adiabatic (light-filled markers) cases. We can then conclude that by increasing the centerline Mach number or changing any other flow condition which results in enhancing dilatation levels at the wall, an enlarged region close to the wall will develop where compressibility effects are significant. This is also the region where Morkovin's hypothesis is found to be inadequate to collapse Reynolds stresses as shown before. ## 5 Effects of thermal boundary conditions on temperature fluxes ### Temperature fluxes The asymptotic behavior of temperature can be analyzed in a similar way to the velocity field by considering separately isothermal and adiabatic conditions. The Taylor series expansion of temperature fluctuations is given by \[T^{\prime}=a_{T}+b_{T}y+c_{T}y^{2}+\ldots \tag{10}\] For the isothermal case, temperature is fixed at the wall and one has \(a_{T}=0\) (but \(b_{T}\neq 0\)). For the adiabatic case, there are fluctuations at the wall but its normal gradient vanishes, in which case \(b_{T}=0\) (but \(a_{T}\neq 0\)). One would thus expect different near-wall asymptotic behavior based on thermal boundary conditions. In figure 11 (a), we plot the r.m.s. of temperature, \(T_{rms}\), normalized by the mean wall temperature against \(y^{*}\) for all cases. 
We find that the asymptotic behavior of \(T_{rms}\) is qualitatively different for isothermal and adiabatic walls. Figure 11: Root-mean-squared temperature fluctuations (a) normalized with wall temperature for isothermal (—), adiabatic (- -) and pseudo-adiabatic (- - -) cases, (b) normalized with friction temperature for isothermal cases, (c) normalized with the r.m.s. temperature at the adiabatic wall for adiabatic cases. Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\), respectively. Near the isothermal wall, \(T_{rms}\) follows a power-law increase while the adiabatic cases are flat. Similar asymptotic behavior was observed in incompressible and low-Mach number flows for isothermal and isoflux conditions (Tiselj et al., 2001; Li et al., 2009). The asymptotic power-law scaling for isothermal cases is equal to its theoretical asymptote (\(\gamma_{T}=1\)) for all \(M_{c}\). For adiabatic cases, the profile is constant for most of the viscous sublayer (until \(y^{*}\approx 2\)) and that constant increases with \(M_{c}\). Interestingly, the pseudo-adiabatic case (black dashed line), which has been extensively used in the literature to model adiabatic walls, exhibits an isothermal-like power-law behavior close to the wall. An alternative normalization for temperature, in analogy with the Reynolds stresses, is through the so-called friction temperature, \(T_{\tau}\equiv-\kappa(\partial\overline{T}/\partial y)_{w}/\overline{\rho_{w}}c_{p}u_{\tau}\), where \(\kappa\) is the thermal conductivity. It is clear, however, that this normalization can only be applied to isothermal walls since adiabatic (and pseudo-adiabatic) walls present zero conductive heat transfer at the wall \((\partial\overline{T}/\partial y|_{w}=0)\). In figure 11(b), we show all isothermal cases, which do, in fact, collapse in the near-wall region following the asymptotic scaling \(\sim y^{*}\). A collapse of adiabatic cases is also obtained when \(T_{rms}\) is normalized with their respective wall values (\(T_{w,rms}\)), as seen in figure 11(c). Since \(T_{w,rms}=0\) for isothermal cases, it is clear that neither normalization provides universal scaling across different WTBCs. #### 5.1.1 Streamwise turbulent heat flux The streamwise component of the turbulent heat flux (\(R_{uT}\)) is an important quantity in wall-bounded flows which needs to be correctly modeled in order to make accurate predictions. In fact, this heat-flux component has been found to be even larger than the wall-normal turbulent heat flux (Huang et al., 2020). Current Boussinesq or constant \(Pr_{T}\) based RANS models, however, cannot capture its behavior accurately (Bowersox, 2009; Huang et al., 2019; Broslawski et al., 2022). In figure 12(a) we show the density-scaled streamwise turbulent heat flux, \(\overline{\rho u^{\prime\prime}T^{\prime\prime}}/(\rho_{w}u_{\tau}T_{\tau})\), in the near-wall region of an isothermal wall for different Mach numbers against \(y^{*}\). We observe very good collapse for all Mach numbers along the theoretical asymptotic power law given by \(\gamma_{uT}=2\) (table 2). The adiabatic and pseudo-adiabatic cases are included in figure 12(b) (normalized with wall temperature) along Figure 12: Streamwise turbulent heat flux close to (a) isothermal walls normalized by friction temperature. (b) isothermal (—), adiabatic (- - -) and pseudo-adiabatic (- - -) cases normalized by their respective wall temperature.
Inset contains the same data in linear scales upto \(y^{*}\approx 300\). Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\) respectively. with the isothermal cases for comparison. The temperature fluctuations at the wall for adiabatic cases result in \(\gamma_{uT}=1\) and again conforms to the theoretical behavior. Following \(T_{rms}\), we observe that power-law behavior for pseudo-adiabatic streamwise heat flux follows isothermal-like behavior and thus, also matches with the isothermal theoretical exponent. The streamwise heat flux becomes negative for \(y^{*}\gtrsim 1\) in adiabatic and pseudo-adiabatic cases and thus can not be shown in logarithmic scales. Again, this indicates that fine resolution close to the wall is required to capture correct near-wall asymptotic behavior. In inset of figure 12 (b), we also include streamwise heat-flux along the channel in linear scales. Similar to \(T_{rms}\), we find that any scaling, either by normalization using \(T_{w}\) or \(\overline{T}\) (not shown here), \(R_{uT}\) do not collapse for different \(M_{c}\) and WTBCs in high-speed regime. Similar to the case of \(R_{uv}\) discussed above, the near-wall asymptotic behavior of \(R_{uT}\) will depend not only on the scaling of the r.m.s. of the two variables involved in the flux, but also on their cross-correlation. The excellent agreement seen for \(\gamma_{uT}\) for all cases with their respective theoretical scaling, then, implies that the correlation coefficient, \(C_{uT}\) does not vary in \(y\) in this region and is evident in figure 13. For \(y^{*}\lesssim 1\), \(C_{uT}\) is constant with \(y^{*}\) for all \(M_{c}\) and WTBCs. However, we see interesting differences between different WTBCs. First, the absolute value of \(C_{uT}\) is minimum near the wall for adiabatic cases while for isothermal cases, the absolute value of \(C_{uT}\) is maximum close to the wall. Some \(M_{c}\) effects can be observed for adiabatic cases close to the wall while -\(C_{uT}\) for different \(M_{c}\) collapses to a constant value of \(-1\) near isothermal walls. For adiabatic cases, the decorrelation decreases on moving away from the wall until \(y^{*}\approx 15\), while \(C_{uT}\) for isothermal cases remains constant in this region, with -\(C_{uT}\approx-1\). Interestingly, the pseudo-adiabatic case resembles isothermal-like behavior near the wall and adiabatic-like behavior beyond \(y^{*}\approx 10\). At further distance, \(y^{*}\gtrsim 15\), decorrelation increases for all WTBCs with isothermal case maintaining a positive correlation, while adiabatic and pseudo-adiabatic maintaining negative correlations (\(C_{uT}\)). The correlation approaches zero as one moves towards the centerline. For \(y^{*}\gtrsim 100\), Mach number effects can be seen for isothermal and adiabatic cases. Another interesting observation from figure 13 is that \(C_{uT}\) for adiabatic and pseudo-adiabatic cases, as shown in the inset of figure 13 resemble \(C_{uT}\) profile for a flat-plate boundary layer (Duan et al. (2010)) with isothermal or pseudo-adiabatic walls. Figure 13: Correlation coefficient for \(R_{uT}\) for isothermal (—), adiabatic (- -) and pseudo-adiabatic (- -) cases. Colors as in table 1. Inset contains the same data in linear scales. 
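For completeness, the density-scaled heat-flux normalization used in figures 12 and 13 can be written out as below; the array names are hypothetical placeholders, and the fluctuations are assumed to be taken about the appropriate (Favre) averages.

```python
import numpy as np

def scaled_heat_flux(rho, vel_fluc, T_fluc, rho_w, u_tau, T_ref):
    """mean(rho * u'' * T'') / (rho_w * u_tau * T_ref).

    T_ref is the friction temperature T_tau for isothermal walls and the
    wall temperature T_w when T_tau vanishes (adiabatic, pseudo-adiabatic)."""
    return np.mean(rho * vel_fluc * T_fluc) / (rho_w * u_tau * T_ref)
```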
#### 5.1.2 Wall-normal turbulent heat flux In figure 14(a), we plot the density-averaged wall-normal turbulent heat flux \((R_{vT})\), \(\overline{\rho v^{\prime\prime}T^{\prime\prime}}/(\rho_{w}u_{\tau}T_{w})\), close to the wall for isothermal cases (solid lines) with \(M_{c}\approx 0.23,1.2,1.9\), the low-Mach adiabatic case with \(M_{c}\approx 0.23\) and the pseudo-adiabatic case with \(M_{c}\approx 1.2\). It can be seen that close to the isothermal wall, a Mach-number-dependent power law exists for the wall-normal turbulent heat flux. A detailed study of the asymptotic power law for the wall-normal turbulent heat flux close to isothermal walls was performed in Baranwal et al. (2022), where power-law exponents were observed to transition from their theoretical low-Mach to high-Mach asymptotes. This transition was found to be similar to that of \(\gamma_{uv}\). The heat flux close to the pseudo-adiabatic wall exhibits a power-law behavior with \(\gamma_{vT}\approx 2.1\), which matches closely the theoretical limit of the isothermal asymptotic power law. This is in line with the behavior of all other statistics close to the pseudo-adiabatic wall, which behave like those in isothermal cases. For the Mach number in the near-incompressible range, \(M_{c}=0.23\), a power-law behavior with an exponent equal to its theoretical value is observed close to the adiabatic wall. Similar to \(R_{uT}\), \(R_{vT}\) for this adiabatic case (\(M_{c}=0.23\)) and the pseudo-adiabatic case changes sign moving away from the wall and therefore cannot be shown in logarithmic scales. For adiabatic cases with \(M_{c}>0.23\), we find that a well-defined power-law behavior is not observed close to the wall and hence the data is plotted in linear scales in figure 14(b). Figure 14: Wall-normal turbulent heat flux close to (a) isothermal (—), adiabatic (- -) and pseudo-adiabatic (- -) walls in logarithmic scale. Inset contains the same data in linear scales up to \(y^{*}\approx 300\). Red, black and magenta correspond to \(M_{c}\approx 0.23\), \(M_{c}\approx 1.2\) and \(M_{c}\approx 1.9\), respectively. (b) adiabatic walls in linear scales. Colors as in table 1. Figure 15: Correlation coefficient for \(R_{vT}\) near (a) isothermal walls (b) adiabatic (- -) and pseudo-adiabatic (- -) walls. Inset has the same data in linear scales. Colors as in table 1. Similar to \(T_{rms}\) and \(R_{uT}\), we find that \(R_{vT}\) does not collapse for different \(M_{c}\) and WTBC in the high-speed regime under either normalization, using \(T_{w}\) or \(\overline{T}\) (not shown here). Finally, we plot \(-C_{vT}\) as a function of \(y^{*}\) in figure 15(a) and (b). For isothermal cases, as shown in figure 15(a), \(C_{vT}\) closely resembles \(C_{uv}\) as shown in figure 9, indicating that increasing \(M_{c}\) has similar effects on \(C_{vT}\) as were observed for \(C_{uv}\). In the inset of figure 15(a), we plot \(C_{vT}\) in linear scales, where, moving towards the centerline (\(y^{*}\gtrsim 100\)), some Mach number effects can be observed. For adiabatic walls, as shown in figure 15(b), a trend with the Mach number close to the wall is observed for \(C_{vT}\). On moving away from the wall, the decorrelation between \(v^{\prime}\) and \(T^{\prime}\) decreases. Furthermore, the effect of the mixed boundary condition can be observed close to the channel centerline, where \(C_{vT}\) does not vanish. This is because of the finite mean temperature gradient at the channel half-width, resulting in a non-zero wall-normal heat flux at \(h\).
On comparing figures 15(a) and (b), we observe that \(C_{vT}\) assumes opposite signs for isothermal and adiabatic cases in regions away from the wall. As observed previously for other statistics, the pseudo-adiabatic case exhibits isothermal-like behavior close to the wall but follows the adiabatic behavior in regions away from the wall. ## 6 Conclusions The asymptotic behavior of turbulent stresses and turbulent heat fluxes close to the wall was investigated using a large DNS database of turbulent channel flows with centerline Mach numbers spanning from 0.23 to 2.22. The dataset comprises simulations with three different wall thermal boundary conditions (WTBC), namely isothermal, adiabatic and pseudo-adiabatic. A distinguishing feature of the present DNS is the near-wall resolution, which is much finer than those typically found in the literature. We show this is essential to capture near-wall behavior for different flow and wall boundary conditions. Turbulent stresses containing the wall-normal velocity component do not exhibit a universal behavior close to the wall when normalized using either wall or semi-local units. Interestingly, some statistics behave differently for different WTBCs while others behave similarly. Similarities include Mach number effects on statistics close to the wall for isothermal and adiabatic cases. In both cases, turbulent stresses exhibited asymptotic power-law behavior in the near-wall region (which we call R1) for all Mach numbers and WTBCs. With increasing Mach number, a smooth transition of the asymptotic power-law exponents from the solenoidal limit to the high-speed limit was observed. Consistent with previous findings, a second scaling regime (R2) with a steeper exponent and a weaker Mach number dependence beyond R1 was observed. The transition location between R1 and R2 was dependent on Mach number. A notable difference between cases with different WTBCs is the change in power-law exponents for turbulent stresses with changing WTBC at high Mach numbers. This effect is stronger for \(R_{uv}\) than for \(R_{vv}\). In general, \(R_{uv}\) was found to be more sensitive to changes in \(M_{c}\) or WTBC. This was linked to a decorrelation between \(u^{\prime}\) and \(v^{\prime}\) when \(M_{c}\) is increased or when the WTBC changes from isothermal to adiabatic. Inspired by a recent proposal based on homogeneous flows, we found that universality can indeed be recovered if dilatational motions are incorporated as a governing parameter, regardless of the mechanism that generated them. In particular, asymptotic power-law exponents and the transition location between the two scaling regimes R1 and R2 do collapse onto a universal curve which depends uniquely on \(\theta_{w,rms}\), the r.m.s. of dilatation at the wall. If one uses the (perhaps more intuitive) centerline Mach number, one can clearly see differences in exponents and transition location for different WTBCs. This clearly supports the idea that dilatational levels, regardless of how they are generated, provide the appropriate scaling parameter for near-wall behavior at high speeds, furthering the idea of some universality of statistics in compressible wall-bounded flows. It also supports the previous conclusion that Morkovin's hypothesis does not take into consideration all the effects associated with compressibility at higher Mach numbers. We also investigated statistics of temperature fluctuations and of the wall-normal and streamwise turbulent heat fluxes for varying \(M_{c}\) and WTBC.
For isothermal cases we found that \(T_{rms}\) follows a power-law behavior predicted by the analytical form of its Taylor expansion. For adiabatic cases, on the other hand, \(T_{rms}\) remains constant in the viscous sublayer followed by an almost universal increase with \(y^{*}\). The streamwise heat flux, \(R_{uT}\), exhibits a power-law behavior close to the wall with exponents given by theoretical predictions for both isothermal and adiabatic cases. In general, it was found that temperature statistics (\(T_{rms}\), \(R_{uT}\)) can be collapsed separately for isothermal and adiabatic cases by normalizing temperature with \(T_{\tau}\) and \(T_{w,rms}\), respectively. However, no general scaling laws were found that could collapse statistics containing temperature fluctuations for both WTBCs. As with Reynolds stresses, the wall-normal turbulent heat flux (\(R_{vT}\)) for isothermal cases exhibits power-law behavior with exponents that depend on \(M_{c}\). A well-defined power-law behavior cannot be unambiguously identified for adiabatic cases with \(M_{c}>0.23\). Pseudo-adiabatic walls, which are often used to mimic an adiabatic wall by imposing an isothermal condition at the adiabatic temperature, displayed isothermal-like behavior close to the wall as \(M_{c}\) increases. A rich interplay between Mach number and WTBC effects was observed for correlation coefficients between \(v^{\prime}\) and \(T^{\prime}\), and between \(u^{\prime}\) and \(T^{\prime}\) indicating a complex dynamics between velocity and temperature fluctuations. Mach number effects were observed in the viscous sublayer for the correlation between \(v^{\prime}\) and \(T^{\prime}\) for all WTBCs, but only in the adiabatic case for \(u^{\prime}\) and \(T^{\prime}\). The strong WTBC effect is evident by the fact that these correlations possess different signs in most of the region across the channel. In these regions, \(v^{\prime}\) and \(T^{\prime}\) are negatively correlated for isothermal walls while they are positively correlated for adiabatic cases. In contrast, \(u^{\prime}\) and \(T^{\prime}\) are positively correlated for isothermal walls while negatively correlated for adiabatic cases. Moreover, in the region close to the wall, the magnitude of these \(u^{\prime}\) and \(T^{\prime}\) correlations are very different for isothermal and adiabatic cases, with the former being much stronger than the latter. Similar to all other statistics, pseudo-adiabatic case exhibits isothermal-like near-wall behavior but resembles the adiabatic profile away from the wall. We close by pointing out that, overall, Morkovin's hypothesis and semi-local normalizations do not collapse data for all the flow and boundary conditions. Universal scaling laws for wall-bounded compressible flows, thus, requires more general scaling laws. Acknowledgments. The authors acknowledge support from (1) the National Science Foundation (Grant No. 1605914), (2) DoD Vannevar Bush Faculty Fellows (ONR Grant No. N00014-18-1-3020), and (3) the Extreme Science and Engineering Discovery Environment (XSEDE) for computational resources. The opinions, findings, views, conclusions, or recommendations contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government. Competing interests: The authors declare none.
2310.08927
Accuracy Requirements: Assessing the Importance of First Post-Adiabatic Terms for Small-Mass-Ratio Binaries
We investigate the impact of post-adiabatic (1PA) terms on parameter estimation for extreme and intermediate mass-ratio inspirals using state-of-the-art waveform models. Our analysis is the first to employ Bayesian inference to assess systematic errors for 1PA waveforms. We find that neglecting 1PA terms introduces significant biases for the (small) mass ratio $\epsilon \gtrsim 10^{-5}$ for quasi circular orbits in Schwarzschild spacetime, which can be mitigated with resummed 3PN expressions at 1PA order. Moreover, we show that the secondary spin is strongly correlated with the other intrinsic parameters, and it can not be constrained for $\epsilon \lesssim 10^{-5}$. Finally, we highlight the need for addressing eccentric waveform systematics in the small-mass-ratio regime, as they yield stronger biases than the circular limit in both intrinsic and extrinsic parameters.
Ollie Burke, Gabriel Andres Piovano, Niels Warburton, Philip Lynch, Lorenzo Speri, Chris Kavanagh, Barry Wardell, Adam Pound, Leanne Durkan, Jeremy Miller
2023-10-13T07:50:28Z
http://arxiv.org/abs/2310.08927v3
Accuracy Requirements: Assessing the Importance of First Post-Adiabatic Terms for Small-Mass-Ratio Binaries ###### Abstract We investigate the impact of post-adiabatic (1PA) terms on parameter estimation for extreme and intermediate mass-ratio inspirals using state-of-the-art waveform models. Our analysis is the first to employ Bayesian inference to assess systematic errors for 1PA waveforms. We find that neglecting 1PA terms introduces significant biases for the (small) mass ratio \(\epsilon\gtrsim 10^{-5}\) for quasi circular orbits in Schwarzschild spacetime, which can be mitigated with resummed 3PN expressions at 1PA order. Moreover, we show that the secondary spin is strongly correlated with the other intrinsic parameters, and it can not be constrained for \(\epsilon\lesssim 10^{-5}\). Finally, we highlight the need for addressing eccentric waveform systematics in the small-mass-ratio regime, as they yield stronger biases than the circular limit in both intrinsic and extrinsic parameters. Gravitational Waves, Extreme/Intermediate Mass Ratio Inspirals, Gravitational Self-Force, Accuracy Requirements, Waveform Systematics, Bayesian Inference ## I Introduction One of the most promising sources for the Laser Interferometer Space Antenna (LISA), the first planned space-based gravitational-wave detector, are extreme-mass-ratio inspirals (EMRIs). An EMRI is the slow inspiral of a stellar-mass compact object (CO) of mass \(\mu\sim 10^{0-2}M_{\odot}\) into a massive black hole (MBH) with mass \(M\sim 10^{5-7}M_{\odot}\). The successful detection of an EMRI within the LISA data stream will be a ground-breaking achievement, offering outstanding tests of general relativity and unique insights into the fundamental nature of the central MBH [1; 2; 3; 4; 5]. Extracting the maximum information from LISA will only be possible with sufficiently high-fidelity waveform models. The source modeling of the gravitational-wave (GW) signal is exceptionally complicated, requiring sophisticated mathematical techniques arising from black hole perturbation theory and gravitational self-force (GSF) theory [6; 7]. Work in these areas dates back to the late 1950s, but, to this day, the mathematical details for generic orbits (eccentric and precessing inspirals) into rotating BHs, when accounting for GW emission, are still being developed. Recent progress in this waveform-modeling program includes rapid generation of waveforms [8; 9; 10; 11; 12; 13; 14], the inclusion of transient resonances [15; 16; 17; 18] and of the small companion's spin [19; 20; 21; 22; 23; 24; 25], generation of adiabatic waveforms for generic orbits around a Kerr BH [15; 26] and of some specific post-adiabatic corrections on generic orbits [27], and generation of waveforms including all effects at first post-adiabatic order in the case of quasi-circular, nonspinning binaries [28]. Not only are EMRIs challenging to model, but they prove to be one of the hardest problems in LISA data analysis [29; 30; 31]. Due to the rich structure of the waveform and sheer volume of parameter space, grid-based searches such as those used by LIGO will be impossible [32]. Furthermore, EMRI signals are typically long-lived and may remain within the LISA sensitivity band for multiple years. The number of cycles scales with the inverse of the (small) mass ratio, \(\epsilon=\mu/M\), implying that one could observe hundreds of thousands of orbits. Thus, we expect to constrain parameters to sub-percent level precision [33; 34]. 
The detection and eventual characterization of an EMRI signal require waveform models that are faithful to the GW signal buried within the noise of the instrument. A detailed knowledge of the GSF is required, which involves solving the Einstein field equations order by order in powers of the small perturbative variable \(\epsilon\ll 1\). Using a two-timescale expansion [35; 7], it has been shown that the CO's orbital phase has an expansion of the form1 Footnote 1: In this expansion, we have neglected the effect of resonances that appear at fractional powers of the mass ratio \(\mathcal{O}(\epsilon^{-1/2})\). For the class of orbits considered in this work, resonant orbits do not exist and thus will be neglected. \[\phi=\underbrace{\epsilon^{-1}\phi_{0}}_{\text{0PA}}+\underbrace{\phi_{1}}_{ \text{1PA}}+\underbrace{\epsilon\phi_{2}}_{\text{2PA}}+O(\epsilon^{2}). \tag{1}\] The leading order in this expansion represents the _adiabatic_ (0PA) evolution, which can be understood as a slow evolution of geodesic orbital parameters due to the orbit-averaged dissipative piece of the first-order term in the self-force (arising from the \(\sim\epsilon\) term in the metric perturbation). The forcing functions that drive this evolution have been computed at 0PA order for generic bound orbits [36; 37; 15; 38]. However, it is known that we also require the _first post-adiabatic_ (1PA) contribution, \(\phi_{1}\), to ensure sufficiently accurate waveform models [39; 35; 38]. This 1PA term involves not only the conservative and dissipative-oscillatory corrections at first order in the mass ratio, but also the orbit-averaged dissipative effects of the second-order self-force (arising from the \(\sim\epsilon^{2}\) term in the metric perturbation). Finally, linear effects from the secondary spin and evolving spin and mass of the primary feed into this 1PA term [40; 22]. Since the _second post-adiabatic_ (2PA) corrections induce a contribution to the orbital phase which scales linearly with the mass ratio, they are deemed sufficiently small to be unnecessary for parameter estimation of EMRIs. Thus, knowledge of the second-order self-force corrections (and linear corrections from the secondary spin) is thought to be both necessary _and sufficient_ for EMRI data analysis. In 2021, a major breakthrough in accuracy was achieved: the second-order metric perturbation was computed, allowing the construction of the first complete 1PA waveforms for quasicircular orbits in Schwarzschild spacetimes [28] (building on Refs. [41; 42; 40]). This waveform showed remarkable agreement with numerical relativity (NR) waveforms, even for \(\epsilon\sim 10^{-1}\), far outside the EMRI regime; we refer to Ref. [43] for a thorough accuracy analysis. It is crucial that EMRI waveforms are not only accurate but also fast and computationally efficient, as typical Bayesian analysis requires \(\sim 10^{5-6}\) waveform evaluations to infer the parameters that govern the underlying true model. The vast parameter space of EMRIs (given by fourteen parameters, excluding the small companion spin) further adds to the complexity and computational burden of Bayesian inference. In order to develop approaches to tackle the huge parameter space, and because self-force models were not available, several fast-to-evaluate but approximate "kludge" models [44; 45; 46] were developed. Despite not being as faithful as self-force waveforms, kludge models were essential for scoping out early LISA science. 
These models have been commonly employed in parameter estimation studies in conjunction with very efficient, but unfortunately non-robust, systematic tests. Such systematic studies include the Lindblom criteria [47], mismatches between waveforms, and the requirement that the orbital dephasing between two different trajectories is \(\lesssim 1\) radian. More sophisticated data analysis studies on EMRIs employed Fisher-matrix-based estimates to quote precision measurements of parameters [2; 33; 34; 48; 49; 50; 51; 52] and biases on parameters arising from waveform modeling errors [53; 17]. Fisher matrices are cheap to compute, requiring a tiny number of waveform evaluations compared to Bayesian inference. However, they are prone to severe numerical instabilities [48], and they assume that the underlying probability distribution of the parameters is approximately Gaussian. The use of Bayesian inference is not a silver bullet either: unconverged posteriors can result in false conclusions, which can turn out to be quite dangerous when forecasting LISA science. However, if performed with care, the results can be interpreted as more robust than the other systematic tests discussed in this paragraph. Waveform models based on self-force calculations are now emerging. Typically, these require an expensive offline step involving the calculation of the self-force or other post-adiabatic effects, and a more rapid online step that computes the inspiral trajectory and the associated waveform. The first waveform models that contained partial post-adiabatic phasing information took minutes to hours to evaluate [54; 55; 56], but more recently near-identity averaging [13; 14; 11] and two-timescale approaches [41] have reduced the calculation of the inspiral trajectory to a few seconds. Rapidly generating waveforms that include the full mode content then requires additional optimizations [9; 10]. The work presented here is a first of its kind. It is the first waveform accuracy study to assess the requirement of post-adiabatic terms for LISA-focused studies based entirely on Bayesian inference. We employ the state-of-the-art FastEMRIWaveforms (FEW) framework [9; 10], which exploits the EMRI multiscale structure to quickly generate sub-second waveforms with accuracy suitable for LISA data analysis. We also incorporate state-of-the-art second-order GSF results into the FEW framework. Moreover, our analysis includes the latest LISA response function [57], yielding second-generation Time-Delay Interferometry (TDI) variables with realistic orbits generated by the European Space Agency [58]. Finally, we also use the most recent instrumental noise model given by the LISA mission requirements team [59]. All computations presented in this work exploit Graphical Processing Units (GPUs) to accelerate the waveform and LISA response evaluation time to \(\lesssim 1\) second, making Bayesian inference possible. Thus, we employ the most accurate EMRI waveforms present in the literature to date. Armed with these tools, our goal is to understand the importance of the first post-adiabatic corrections when performing parameter estimation. Finally, we extend our analysis to a class of intermediate mass-ratio inspirals (IMRIs), in particular assuming the primary is a massive black hole (MBH) with mass \(M\sim 10^{5-7}M_{\odot}\) (like an EMRI) while the smaller companion has mass \(\mu\sim 10^{3}M_{\odot}\)2. Footnote 2: We choose these masses for an IMRI to ensure that the corresponding GW signal is within the LISA band.
In general, both components of an intermediate mass-ratio binary may be much lighter [60; 61]. The paper is organized as follows: in Sec. II we describe the approximate and exact waveform models used throughout this work; Sec. III outlines the data analysis schemes. The results are presented in Sec IV, more specifically Sec. IV.1 on mismodeling post-adiabatic templates and Sec. IV.2 on constraining the spin of the smaller companion. Finally, we discuss the impact of mismodeling adiabatic templates for eccentric orbits in Sec. IV.3. A comprehensive summary of the results is given in Sec.V. Finally, we present a discussion alongside the scope for future work in Sec. VI and Sec. VII respectively. Expert readers who do not wish to dig through the details of the paper can instead skip straight to the main plots, namely Fig. 1, circular orbit corner plots Figs. 2-4 and eccentric orbit corner plots Figs. 5-6. We use \(G=c=1\) units throughout the paper unless otherwise stated. ## II Waveform models We assume that the central body is a spinless BH described by the Schwarzschild metric with line element \[ds^{2} =-\bigg{(}1-\frac{2M}{r}\bigg{)}dt^{2}+\bigg{(}1-\frac{2M}{r} \bigg{)}^{-1}dr^{2}\] \[\quad+r^{2}\left(d\theta^{2}+\sin\theta^{2}d\phi^{2}\right)\,. \tag{2}\] The binary's smaller companion is a generic CO endowed with spin; we do not make any assumption on its internal composition. To conform with Ref. [28], we find it convenient in our computations to use the symmetric mass ratio \(\nu=\mu/(M+\mu)^{2}=\epsilon/(1+\epsilon)^{2}=\epsilon+\mathcal{O}(\epsilon^{ 2})\). The symmetric mass ratio ranges from \(0<\nu\leq 1/4\) and has been shown to provide a better agreement for binding energy, fluxes, and waveforms when compared with numerical relativity simulations for comparable mass binaries [28; 41; 42; 62]. However, it does not materially affect accuracy at the mass ratios we consider here. We implement in FEW two 1PA waveforms specialized to quasicircular orbits. The first one is a hybrid, approximate waveform, where the 0PA fluxes were computed exactly (up to negligible numerical error) using linear BH perturbation theory while the post-adiabatic second-order self-force corrections are obtained by resumming a third-Post-Newtonian-order (3PN) expansion. We label this model as cir0PA+1PA-3PN. The second 1PA waveform is a state-of-the-art model that includes all relevant 1PA terms: the second-order self-force fluxes and binding energy corrections, computed in Ref. [42] and Ref. [41], respectively, and the contributions from the secondary spin, given in Refs. [19; 20]. This second waveform model is labeled as cir1PA. Finally, to allow for comparison between 0PA and 1PA waveform models, we implement an adiabatic template that we call cir0PA. This is identical to the cir1PA model but with the post-adiabatic terms removed and setting the spin of the secondary to zero. To assess the potential impact of eccentricity on EMRI data analysis, we additionally compare two other adiabatic models: the fully relativistic 0PA waveform available in the FEW package of the BHPToolkit [63] and an approximate waveform that includes GW fluxes known at 9PN [64]. We label the former as ecc0PA and the latter as ecc0PA-9PN. The two models have the same waveform amplitudes and geodesic orbital frequencies and differ only in their expressions for the energy and angular momentum fluxes that drive the inspiral. For more details on the ecc0PA model, we refer the reader to Refs. 
[65; 66], while we provide the evolution equations for the ecc0PA-9PN model in Sec. II.2. In the following subsections, we summarize the cir1PA model and describe the approximations introduced in cir0PA+1PA-3PN and ecc0PA-9PN. For all waveform models, we follow the convention adopted in Ref. [66] for the source reference frame, namely the orbital angular momentum is set to be aligned to the z-axis of a Cartesian frame centered on the MBH. In this way, the motion is confined to the equatorial plane \(\theta=\pi/2\), and the orbital plane does not precess. ### Quasicircular inspirals with spinning secondary to 1PA order The equations of motion for EMRIs can be conveniently expanded in the small mass-ratio \(\epsilon\). At zeroth order, the motion of the binary components is approximated by a free-falling point-particle in a fixed background spacetime. Post-geodesic corrections arise from the self-force (see Refs. [7; 67] for a comprehensive review) and the coupling between the curvature and spin of the smaller companion, called spin-curvature force [68; 69; 70; 71]. The model we consider is specifically based on a two-timescale formulation of this small-\(\epsilon\) expansion. See Refs. [35; 40] and Ref. [7] for a comprehensive review. As the name suggests, a two-timescale expansion assumes the existence of two disparate timescales. The short one is the orbital timescale, i.e. the orbital period of the secondary. The long timescale is the radiation-reaction timescale. During the inspiral, orbital quantities like the frequency, radius, and waveform amplitudes evolve on the slow timescale, at a rate of order \(\mathcal{O}(\epsilon)\). By contrast, orbital phases evolve on the fast timescale, at a rate of order \(\mathcal{O}(1)\). In this section we summarize the orbital evolution in this formulation through 1PA order, dividing the discussion into conservative corrections and dissipative evolution. #### ii.1.1 Conservative corrections to orbital motion We first summarize the conservative corrections to the (slowly varying) constants of motion in the case of quasicircular, equatorial orbits in the Schwarzschild spacetime. For more details on the interplay between the self-force and spin-curvature force, see Ref. [22], while for more details on the dynamics of spinning particles in circular equatorial orbits, see [68; 72]. In general, if dissipation is neglected, a spinning particle in Schwarzschild spacetime admits four conserved quantities, which are the (normalized) energy \(\tilde{E}=E/\mu\) and the components of the (normalized) total angular momentum \(\vec{J}/(\mu M)=(J_{x},J_{y},J_{z})/(\mu M)\)[73; 74]. With our choice of reference frame, \(\vec{J}/(\mu M)=(0,0,J_{z})/(\mu M)\), we define accordingly \(\tilde{J}=J_{z}/(\mu M)\). As initial conditions, we choose \(\theta=\pi/2\) and set the secondary spin (anti-)aligned to the \(z\)-axis, which implies that \(\chi>0\) is aligned (\(\chi<0\) is anti-aligned) to \(\vec{J}\), where \(\chi\) is the secondary's dimensionless spin parameter. Such conditions ensure that neither the orbital plane nor the secondary spin precess [68; 75]. Hereafter, hatted quantities refer to dimensionless variables normalized by \(M\) (as opposed to quantities with checks, such as \(\tilde{E}\), which are normalized with \(\mu\)). For example, \(\widehat{\Omega}_{\phi}=M\Omega_{\phi}\) and \(\hat{r}=r/M\). The binding energy \(\tilde{E}\) is the only first integral we need to model the inspiral at 1PA order for our quasicircular orbital configurations. 
It is convenient to parameterize the orbit in terms of its orbital frequency \(\widehat{\Omega}_{\phi}\). The orbital radius of the particle, \(\hat{r}_{p}\), can then be written to linear order in \(\nu\), at fixed frequency \(\widehat{\Omega}_{\phi}\), as \[\hat{r}_{p}=\hat{r}+\nu\chi\delta\hat{r}^{\chi}(\widehat{\Omega}_{\phi})+\nu \delta\hat{r}^{\rm SF}(\widehat{\Omega}_{\phi}). \tag{3}\] The leading-order term is the _geodesic_ orbital radius, \(\hat{r}=\widehat{\Omega}_{\phi}^{-2/3}\), while the corrections \(\delta\hat{r}^{\chi}(\widehat{\Omega}_{\phi})\) and \(\delta\hat{r}^{\rm SF}(\widehat{\Omega}_{\phi})\) are the linear shifts due to the secondary spin and to the conservative first-order self-force, given by \[\delta\hat{r}^{\chi}=-\frac{1}{\sqrt{\hat{r}}} \tag{4}\] and by Eq. (10) of Ref. [40], respectively. We can similarly expand \(\tilde{E}\) to first order in \(\nu\) at fixed frequency: \[\tilde{E}=\tilde{E}_{0}+\nu\chi\tilde{E}_{1}^{\chi}+\nu\tilde{E}_{1}^{\rm SF} \, \tag{5}\] where \(\tilde{E}_{0}\) is the geodesic binding energy and \(\tilde{E}_{1}^{\chi}\) is the shift induced by the secondary spin [76], given by \[\ddot{E}_{0}=\frac{\hat{r}-2}{\sqrt{\hat{r}}\sqrt{\hat{r}-3}}-1\,\qquad\tilde{E}_{1}^{\chi}=-\frac{1}{ \hat{r}^{2}\sqrt{\hat{r}-3}}. \tag{6}\] Finally, \(\tilde{E}_{1}^{\rm SF}\) is the correction to the binding energy induced by the conservative piece of the first-order self-force [77] (see Refs. [41; 43] for discussion). #### ii.1.2 Evolution equations We now consider the binary evolution through 1PA order, accounting for the slow evolution of due to dissipation. In our models, we neglect the evolution of the primary mass and spin and second-order GSF horizon fluxes, the latter of which are currently unknown. While these effects appear formally at 1PA order, they have been shown to have a numerically small effect compared to the overall 1PA contribution to the inspiral [78; 43; 79], so we can safely neglect them. The azimuthal orbital phase \(\Phi_{\phi}\) and the leading-order orbital radius \(\hat{r}\) evolve according to the following coupled set of equations: \[\frac{\mathrm{d}\Phi_{\phi}}{\mathrm{d}\hat{t}} =\widehat{\Omega}_{\phi}(\hat{r})\, \tag{7}\] \[\frac{\mathrm{d}\hat{r}}{\mathrm{d}\hat{t}} =-\nu\big{[}F_{0}(\hat{r})+\nu F_{1}(\hat{r})\big{]}. \tag{8}\] Here, \(F_{0}(\hat{r})\) is the leading-order, adiabatic forcing term given by \(F_{0}(\hat{r})=\big{(}\partial\hat{E}_{0}/\partial\hat{r}\big{)}^{-1}\mathcal{F }_{0}(\hat{r})\), where \(\mathcal{F}_{0}(\hat{r})\) is the leading-order energy flux. The subleading, 1PA term \(F_{1}(\hat{r})\) is what differs between our two circular orbit models. For the complete 1PA model, cir1PA, we have \[F_{1}(\hat{r})=F_{1}^{\rm SF}(\hat{r})+\chi F_{1}^{\chi}(\hat{r}). \tag{9}\] where \[F_{1}^{\rm SF}(\hat{r}) =a(\hat{r})\mathcal{F}_{1}(\hat{r})+a(\hat{r})^{2}\bigg{(}\frac{ \partial\tilde{E}_{1}^{\rm SF}}{\partial\hat{r}}\bigg{)}\mathcal{F}_{0}(\hat{r} )\, \tag{10}\] \[F_{1}^{\chi}(\hat{r}) =a(\hat{r})\mathcal{F}_{1}^{\chi}(\hat{r})+a(\hat{r})^{2}\bigg{(} \frac{\partial\tilde{E}_{1}^{\chi}}{\partial\hat{r}}\bigg{)}\mathcal{F}_{0}(\hat{r })\, \tag{11}\] and \(a(\hat{r})=\left(\partial\check{E}_{0}/\partial\hat{r}\right)^{-1}\). The post-adiabatic corrections to the fluxes \(\mathcal{F}_{1}^{\rm SF}(\hat{r})\) and \(\mathcal{F}_{1}^{\rm x}(\hat{r})\) are, respectively, generated by nonlinear (quadratic) terms in the field equations [42] and by the secondary spin [20]. 
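The closed-form ingredients above are simple to prototype. The sketch below transcribes the binding-energy relations of Eq. (6) and integrates the phase/radius system of Eqs. (7)–(8) with SciPy; the forcing terms F0 and F1 are illustrative placeholders (a leading-order PN-like decay rate and zero, respectively) standing in for the interpolated Teukolsky-flux and second-order GSF data that the actual cir0PA/cir1PA models use.

```python
import numpy as np
from scipy.integrate import solve_ivp

def Omega_phi(r_hat):
    """Geodesic circular-orbit frequency in Schwarzschild: Omega_hat = r_hat**(-3/2)."""
    return r_hat ** -1.5

def E0_geo(r_hat):
    """Geodesic binding energy, Eq. (6)."""
    return (r_hat - 2.0) / (np.sqrt(r_hat) * np.sqrt(r_hat - 3.0)) - 1.0

def E1_spin(r_hat):
    """Linear-in-spin correction to the binding energy, Eq. (6)."""
    return -1.0 / (r_hat**2 * np.sqrt(r_hat - 3.0))

# Placeholder forcing terms (illustrative only; not the GSF-derived functions).
def F0(r_hat):
    return (64.0 / 5.0) * r_hat ** -3.0

def F1(r_hat):
    return 0.0

def rhs(t_hat, y, nu):
    """Two-timescale evolution, Eqs. (7)-(8)."""
    phi, r_hat = y
    return [Omega_phi(r_hat), -nu * (F0(r_hat) + nu * F1(r_hat))]

def near_isco(t_hat, y, nu):
    return y[1] - 6.1   # terminate well before the transition to plunge

near_isco.terminal = True

nu = 1e-5
print(f"E(r=10) for chi=0.5: {E0_geo(10.0) + nu * 0.5 * E1_spin(10.0):.6f}")
sol = solve_ivp(rhs, (0.0, 5e7), [0.0, 10.0], args=(nu,), events=near_isco,
                method="DOP853", rtol=1e-10, atol=1e-12)
print(f"final r_hat = {sol.y[1, -1]:.3f}, accumulated phase = {sol.y[0, -1]:.3e} rad")
```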
In the cir0PA+1PA-3PN model, \(F_{1}(\hat{r})\) is approximated as \[F_{1}(\hat{r})=F_{1}^{\rm 3PN}(\hat{r})+\chi F_{1}^{\rm x}(\hat{r}), \tag{12}\] where \(F_{1}^{\rm 3PN}\) has the same functional form as Eq. (10), but it is constructed using analytic post-Newtonian expressions for the fluxes at 0PA and 1PA order, including the self-force correction to the binding energy, in which we retain relative 3PN order accuracy for each. The adiabatic and post-adiabatic fluxes can be extracted by taking the leading and next-to-leading-order-in-\(\nu\) terms in the 4PN fluxes recently computed in Ref. [80]. The binding energy is computed using post-Newtonian self-force expansions [81]. Explicitly, these are given in App. B. It is crucially important that we treat the 3PN flux terms and binding energy as polynomial approximants when used in Eq. (10), without further expanding the geodesic quantity \(a(\hat{r})\). This is because \(a(\hat{r})\) is divergent at the lightring, rendering its large-radius Taylor expansion extremely inaccurate in the strong field, which would corrupt the entire model. The evolution equations for our adiabatic model are obtained by simply setting the forcing function \(F_{1}(\hat{r})\) to zero in Eqs. (7-8). We label this model as cir0PA. Finally, we remark that for all mass-ratios considered in this work, each inspiral terminates far (in radial coordinate distance) from where the transition to plunge begins [82; 83; 84; 85; 86; 87], \(\hat{r}-\hat{r}_{\rm isco}\sim\epsilon^{2/5}\). This is important since our two-timescale evolution equations cease to accurately describe the trajectory around that point [43]. ### Non-spinning eccentric 0PA inspirals In the eccentric case, the orbital radius \(\hat{r}\) oscillates on the orbital timescale between a maximum value \(\hat{r}_{\rm max}\) and a minimum value \(\hat{r}_{\rm min}\), meaning it is not an ideal quantity to evolve directly. Instead, we parameterize the eccentric orbit using the slowly evolving semi-latus rectum \(p\) and eccentricity \(e\), which are defined in terms of the slowly evolving maximum and minimum values of the orbital radius: \[p=\frac{2\hat{r}_{\rm max}\hat{r}_{\rm min}}{(\hat{r}_{\rm max}+\hat{r}_{\rm min })}\,\qquad e=\frac{\hat{r}_{\rm max}-\hat{r}_{\rm min}}{\hat{r}_{\rm max}+ \hat{r}_{\rm min}}. \tag{13}\] Note that in the circular orbit limit, \(e\to 0\) and \(p\to\hat{r}\). Due to the orbital-timescale radial motion, the waveform picks up an additional phase \(\Phi_{r}\) which evolves with the Boyer-Lindquist fundamental radial frequency \(\widehat{\Omega}_{r}\). Expressions for the fundamental frequencies \(\widehat{\Omega}_{\phi}(p,e)\) and \(\widehat{\Omega}_{r}(p,e)\) for eccentric geodesic orbits in Schwarzschild spacetime can be found in Ref. [89]. In practice we make use of the semi-analytic expressions in terms of elliptic integrals in Refs. [90] which have been implemented in the KerrGeodesics package [91]. 
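The \((p,e)\) parametrization of Eq. (13), together with its circular limit, is a one-liner; a quick sketch:

```python
def semilatus_and_eccentricity(r_max, r_min):
    """Eq. (13): p = 2 r_max r_min / (r_max + r_min), e = (r_max - r_min) / (r_max + r_min)."""
    p = 2.0 * r_max * r_min / (r_max + r_min)
    e = (r_max - r_min) / (r_max + r_min)
    return p, e

print(semilatus_and_eccentricity(12.0, 8.0))    # eccentric orbit: (9.6, 0.2)
print(semilatus_and_eccentricity(10.0, 10.0))   # circular limit: e -> 0, p -> r_hat
```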
The eccentric equations of motion at adiabatic order can be written as \[\frac{\mathrm{d}\Phi_{\phi}}{\mathrm{d}\hat{t}} =\widehat{\Omega}_{\phi}(p,e)\, \tag{14}\] \[\frac{\mathrm{d}\Phi_{r}}{\mathrm{d}\hat{t}} =\widehat{\Omega}_{r}(p,e)\,\] (15) \[\frac{\mathrm{d}p}{\mathrm{d}\hat{t}} =-\epsilon\left(\frac{\partial p}{\partial\check{E}_{0}}\mathcal{F }_{0}^{E}(p,e)+\frac{\partial p}{\partial\check{J}_{0}}\mathcal{F}_{0}^{J}(p,e )\right),\] (16) \[\frac{\mathrm{d}e}{\mathrm{d}\hat{t}} =-\epsilon\left(\frac{\partial e}{\partial\check{E}_{0}}\mathcal{F }_{0}^{E}(p,e)+\frac{\partial e}{\partial\check{J}_{0}}\mathcal{F}_{0}^{J}(p,e )\right), \tag{17}\] where \(\mathcal{F}_{0}^{E}\) and \(\mathcal{F}_{0}^{J}\) are the leading-order total fluxes of energy and angular momentum, respectively. We switch back to the small mass ratio \(\epsilon\) in the above equations since both the vanilla FEW and ecc0PA-9PN models are implemented in terms of \(\epsilon\) instead of the symmetric mass ratio \(\nu\). By employing the geodesic relations for \(\check{E}_{0}(p,e)\) and \(\check{J}_{0}(p,e)\), i.e., [89] \[\check{E}_{0}(p,e) =\sqrt{\frac{(p-2)^{2}-4e^{2}}{p(p-e^{2}-3)}}\, \tag{18}\] \[\check{J}_{0}(p,e) =\frac{p}{\sqrt{p-e^{2}-3}}\, \tag{19}\] one can obtain the various partial derivatives in Eqs. (14 - 17) by constructing a Jacobian and inverting so that \[\begin{pmatrix}\frac{\partial p}{\partial E_{0}}&\frac{\partial p }{\partial\check{J}_{0}}\\ \frac{\partial e}{\partial\check{E}_{0}}&\frac{\partial e}{\partial\check{J}_{0 }}\end{pmatrix} =\begin{pmatrix}\frac{\partial\check{E}_{0}}{\partial p}&\frac{ \partial\check{E}_{0}}{\partial e}\\ \frac{\partial\check{J}_{0}}{\partial p}&\frac{\partial\check{J}_{0}}{ \partial e}\end{pmatrix}^{-1} \tag{20}\] \[=\frac{1}{\frac{\partial\check{E}_{0}}{\partial p}\frac{ \partial\check{J}_{0}}{\partial e}-\frac{\partial\check{E}_{0}}{\partial e} \frac{\partial\check{J}_{0}}{\partial p}}\begin{pmatrix}\frac{\partial\check{J}_ {0}}{\partial e}&-\frac{\partial\check{E}_{0}}{\partial e}\\ -\frac{\partial\check{J}_{0}}{\partial p}&\frac{\partial\check{E}_{0}}{\partial p }\end{pmatrix}. \tag{21}\] Explicit expressions are available in Ref. [26]. Note that this procedure will give a singular expression for the rate of change of \(e\) in the circular orbit limit, \(e=0\). However, it is well established that for adiabatic inspirals [92], quasicircular orbits remain quasicircular and so we can simply use \(de/d\hat{t}=0\) in this special case. The only difference between the two eccentric models is the expressions for the total energy and angular momentum fluxes. For the ecc0PA model, we numerically solve the Teukolsky equations for the energy and angular momentum fluxes to infinity and down the horizon generated by a point particle on a geodesic orbit with a given value of \(p\) and \(e\). We do this for multiple points in the \((p,e)\) parameter space and interpolate using splines so that the fluxes can be rapidly evaluated when solving the eccentric equations of motion. For more details, see Sec. IV A of Ref. [9]. By contrast, the ecc0PA-9PN model uses analytic PN expansions of the fluxes of energy and angular momentum to infinity. These are valid to leading order in the mass ratio and relative 9PN order [64], and are available in the PostNewtonianSelfForce package of the BHP-Toolkit [93]. The PN series contain expansions in eccentricity to at least order \(e^{10}\), with some PN components having even higher order expansions up to \(e^{30}\). 
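The partial derivatives appearing in Eqs. (16)–(17) follow from the geodesic relations (18)–(19) by inverting the 2×2 Jacobian, as in Eqs. (20)–(21); a short SymPy sketch reproducing that step (symbol names are ours) is:

```python
import sympy as sp

p, e = sp.symbols("p e", positive=True)

# Geodesic energy and angular momentum, Eqs. (18)-(19).
E0 = sp.sqrt(((p - 2)**2 - 4*e**2) / (p * (p - e**2 - 3)))
J0 = p / sp.sqrt(p - e**2 - 3)

# Jacobian d(E0, J0)/d(p, e) and its inverse, Eqs. (20)-(21).
jac = sp.Matrix([[sp.diff(E0, p), sp.diff(E0, e)],
                 [sp.diff(J0, p), sp.diff(J0, e)]])
inv = jac.inv().applyfunc(sp.simplify)

dp_dE, dp_dJ = inv[0, 0], inv[0, 1]   # dp/dE0, dp/dJ0
de_dE, de_dJ = inv[1, 0], inv[1, 1]   # de/dE0, de/dJ0

# Numerical spot check at (p, e) = (10, 0.3):
vals = {p: 10, e: sp.Rational(3, 10)}
print([float(x.subs(vals)) for x in (dp_dE, dp_dJ, de_dE, de_dJ)])
```

Note that, as the text points out, the inverse becomes singular as \(e\to 0\), which is why the circular limit is handled separately with \(de/d\hat{t}=0\).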
We highlight that the cir0PA model and ecc0PA model will not agree in the limit as \(e\to 0\) because the former is expanded in \(\nu\) whereas the latter is expanded in \(\epsilon\). For our purposes, this is not a problem: we will only directly compare waveforms expanded in the same small perturbative variable, e.g., cir0PA with cir1PA (\(\nu\)) and ecc0PA with ecc0PA-9PN (\(\epsilon\)). We checked that recovery of a cir0PA model with an ecc0PA model (at \(e=0\)) yields a bias on the secondary mass \(\mu\). In turn, such bias results in a biased mass ratio \(\epsilon_{\rm bias}\) given by \[\epsilon_{\rm bias}\approx\epsilon_{\rm true}-2\epsilon_{\rm true}^{2}. \tag{22}\] Since \(\nu_{\rm true}=\epsilon_{\rm true}/(1+\epsilon_{\rm true})^{2}\sim\epsilon_ {\rm true}-2\epsilon_{\rm true}^{2}+\mathcal{O}(\epsilon^{3})\), we recovered the symmetric mass-ratio, as expected. If the 1PA components were known for eccentric orbits, then it would not matter whether the equations of motion were expanded in \(\epsilon\) or \(\nu\) (for the range of mass ratios we consider). ### Waveform models In the Teukoslky formalism, the gravitational waveform detected by an observer at infinity in the Schwarzschild spacetime can be written as [26] \[h_{+}-ih_{\times}=\frac{\mu}{D_{\rm S}}\sum_{\ell,m,n}\mathcal{A}_{\ell mn}(p( t),e(t))Y_{\ell m}(\vartheta,\varphi)e^{-i\Phi_{mn}(t)}\,, \tag{23}\] where \(D_{\rm S}\) is the source's luminosity distance from the detector, \(\mathcal{A}_{\ell mn}\equiv 2\hat{Z}_{\ell mn}^{\infty}/\hat{\omega}_{mn}^{2}\), and \(\hat{Z}_{\ell mn}^{\infty}=M^{2}Z_{\ell mn}^{\infty}\), the latter being the inhomogenous solution of the radial Teukoslky equation in the limit \(\hat{r}\to\infty\). Here the GW frequency \(\hat{\omega}_{mn}\) is defined as \(\hat{\omega}_{mn}=m\hat{\Omega}_{\phi}+n\hat{\Omega}_{r}\). In practice, the mode amplitudes \(\mathcal{A}_{\ell mn}\) are interpolated across the \(p-e\) space using a neural network [9; 10]. To determine the waveform mode content, the modes are cumulatively summed in order of decreasing power. This summation is truncated at a threshold value of \((1-\kappa)\) times the total power of the modes, with \(\kappa\) a tunable parameter. Only the modes that pass this threshold are included in the waveform computation (for further details see [9; 10]). For our circular orbit runs, we have checked that the mode content of injected waveforms and model template waveforms are identical. This is to ensure consistency between waveforms and to make sure that biases are not a feature of missing important modes between waveform models in the analysis. When generating waveforms, we use the default mode selection parameter \(\kappa=10^{-5}\) in FEW, resulting in the same 12 \(\ell m\)-modes _for all_ circular orbit-based waveforms present in this work. The number of modes scales considerably with increasing eccentricity (see Fig. 4 in [9]); this will be discussed in Sec. IV.3. Since our waveform represents an evolving source, we do not have a decomposition into discrete frequencies, but a multi-voice decomposition where the evolving "voices" are found by solving the equations of motion for the phases \(\Phi_{\phi}(t)\) and \(\Phi_{r}(t)\) and summing together as \(\Phi_{mn}(t)=m\Phi_{\phi}(t)+n\Phi_{r}(t)\). The functions \(Y_{\ell m}(\vartheta,\varphi)\) appearing in Eq. 
(23) are the spin-weighted spherical harmonics for spin weight \(s=-2\)[94], while the angles \((\vartheta,\varphi)\) identify the direction of the detector in the source reference frame. Due to the LISA constellation's motion, the sources' sky positions are measured with respect to a suitable solar system barycentric (SSB) frame attached to the ecliptic [95]. We adopt the same convention as Ref. [66] and label the binary's sky position and the direction of binary's angular momentum as \((\theta_{S},\phi_{S})\) and \((\theta_{K},\phi_{K})\), respectively. The viewing angle \(\varphi\) is set to \(\varphi=-\pi/2\), which implies that \(\cos\vartheta\) can then be written in terms of the constant angles \((\theta_{S},\phi_{S})\) and \((\theta_{K},\phi_{K})\) as \[\cos\vartheta=-\cos\theta_{S}\cos\theta_{K}-\sin\theta_{S}\sin\theta_{K}\cos( \phi_{S}-\phi_{K}). \tag{24}\] The initial phases of the waveform are then entirely determined by the angles \(\Phi_{\phi_{0}}\) and \(\Phi_{r_{0}}\). Notice that the angles \((\theta_{S},\phi_{S})\) and \((\theta_{K},\phi_{K})\) also appear in the LISA response function [57] and [9]. #### iv.3.1 1PA waveforms for circular orbits In the circular orbit case, we no longer have a radial phase and its corresponding harmonic contribution, so the waveform model simplifies to \[h_{+}-ih_{\times}=\frac{\mu}{D_{\rm S}}\sum_{\ell,m}\mathcal{A}_{\ell m}(\hat{r }(t))Y_{\ell m}(\vartheta,\varphi)e^{-im\Phi_{\phi}(t)}. \tag{25}\] Note that at 1PA order, there will also be order-mass-ratio corrections to the waveform amplitudes \(\mathcal{A}_{\ell m}=\mathcal{A}_{\ell m}^{(0)}+\epsilon\mathcal{A}_{\ell m}^{( 1)}+\mathcal{O}(\epsilon^{2})\). GW interferometers are much more sensitive to fluctuations in frequency, rather than amplitude, especially for asymmetric binaries with \(\epsilon\ll 1\) As such, including sub-leading corrections to the orbital phase is more important than to the amplitudes. Since the largest mass ratio we consider is \(\epsilon\sim 10^{-3}\), we can safely neglect 1PA corrections to the amplitudes in our parameter estimation studies. For this reason, we employed Eq. (25) for our all circular orbit models: cir1PA, cir0PA+1PA-3PN and cir0PA with amplitudes computed at adiabatic order. ## III LISA data analysis GW data analysis relies on three crucial ingredients: GW waveforms, a description of the time-evolving instrument response to the incoming waves, and an accurate characterization of the noise. In this section, we describe our model for the LISA response function and the noise process, ultimately leading to the Whittle-likelihood. ### The data stream of LISA The projection of the polarised GW signals onto the LISA arms depends on the geometry of the instrument, which changes in time due to the spacecraft's motion. The LISA response function is then a dynamical quantity in both the time and frequency domain, and clearly much more complicated than the "static" response of ground-based detectors. Such a feature introduces severe complexities to the accurate modeling and efficient computation of the LISA response function. By projecting the incoming GWs onto the arm-lengths of the detector, one can model the deformations across the six LISA links between the individual craft. It is then possible to construct a first set of time-shifted second-generation TDI variables, which massively suppress the laser noise (by \(\sim 8\) orders of magnitude). 
These variables can be linearly combined to construct a further set of TDI variables \((A,E,T)\)[96, 97], which are uncorrelated in their noise properties. In our work, we use the TDI variables \((A,E,T)\)[98, 97]. The data streams can then be written as \[d^{(X)}(t)=h_{\rm e}^{(X)}(t;\mathbf{\theta}_{\rm tr})+n^{(X)}(t)\,,\quad X=\{A,E, T\}, \tag{26}\] where \(h_{\rm e}^{(X)}\) denotes the true deterministic signal with true parameters \(\mathbf{\theta}_{\rm tr}\) for the \(X\) TDI observable. We use lisa-on-gpu, a framework to compute the response of the LISA instrument to the incoming GWs available at [57], which generates the three TDI variables. We employ the most realistic orbits of the LISA crafts generated by the European Space Agency (ESA) [58]. These numerically computed orbits take into account gravitational attractions from relevant celestial bodies in our solar system and are built to minimize the "breathing" of the LISA interferometer arms. Such orbits have _approximately_ equal and constant arm-lengths, resulting in minor correlations between noise components [99] that we neglect in favor of computational efficiency [57]. If these correlations are mismodeled, then the injected mismodeled noise realizations will impact the statistical errors of the recovered parameters [100, 101, 34]. This is not a problem in our analysis because, as we will explain later on, we consider zero-noise injections. Furthermore, the signal-to-noise ratio is weakly impacted when mismodeling the noise process3. Footnote 3: This was demonstrated in the case of gaps, where correlations between noise components are severe and cannot be neglected. For more information, see Chap. 8 in [34]. ### Noise process and likelihood We assume that the noise in each channel is a zero-mean, weakly stationary, ergodic, Gaussian random process with noise covariance matrix (in the domain of positive frequencies) [102, 103] \[\langle\tilde{n}^{(X)}(f)[\tilde{n}^{(X)}(f^{\prime})]^{\star}\rangle=\frac{1 }{2}\delta(f-f^{\prime})S_{n}^{(X)}(f^{\prime})\,, \tag{27}\] with tilded quantities representing Fourier transforms of the time-domain data, i.e., \[\tilde{a}(f)=\int_{-\infty}^{\infty}\mathrm{d}t\,a(t)e^{-2\pi ift}\,. \tag{28}\] In Eq. (27), \(\langle\cdot\rangle\) denotes an ensemble averaging process, \(\delta\) is the Dirac delta function and \(S_{n}^{(X)}(f)\) is the (one-sided) power spectral density (PSD) of the instrumental noise process for each channel \(X=\{A,E,T\}\). In our analysis, we use the latest SciRDv1 model noise PSD [104] and second-generation TDI variables with a python implementation available in [105]. Finally, we assume that the PSDs for each channel \(X=\{A,E,T\}\) are all known and fixed. Furthermore, we neglect noise correlations between each TDI channel, i.e., \(\langle\hat{n}^{X}(f)[\hat{n}^{Y}(f^{\prime})]^{\star}\rangle=0\) for \(X\neq Y\)[96, 57, 98]. Under our assumptions, we can write the Whittle-likelihood for a known PSD \(S_{n}^{(X)}(f)\) as [106, 107] \[\log p(d|\mathbf{\theta})\propto-\frac{1}{2}\sum_{X=\{A,E,T\}}(d-h_{m}|d-h_{m})_{( X)} \tag{29}\] with approximate model templates \(h_{m}^{(X)}\) and noise-weighted inner product \((a|b)_{(X)}\) given by [107] \[(a|b)_{X}=4\mathrm{Re}\int_{0}^{\infty}\frac{\tilde{a}^{(X)}(f)(\tilde{b}^{(X) })^{\star}(f)}{S_{n}^{(X)}(f)}\,\mathrm{d}f. 
\tag{30}\] Given a model template \(h_{m}\), we define the _effective_ SNR with respect to the true signal \(h_{e}\) as \[\rho_{AET}^{\text{eff}}(\mathbf{\theta})=\left[\sum_{X=\{A,E,T\}}\frac{(h_{e}|h_{m}( \mathbf{\theta}))_{(X)}^{2}}{(h_{m}(\mathbf{\theta})|h_{m}(\mathbf{\theta}))_{(X)}}\right]^{ 1/2}, \tag{31}\] with exact templates \(h_{e}\) evaluated at the true parameters \(\mathbf{\theta}_{\text{tr}}\). The maximum of Eq. (31) is given when \(\mathbf{\theta}=\mathbf{\theta}_{\text{tr}}\) and there are no mismodeling errors, i.e., \(h_{m}\equiv h_{e}\) for all \(\mathbf{\theta}\). This maximum is the _optimal_ matched filtering SNR, given by [108] \[\rho_{AET}^{\text{opt}}=\left[\sum_{X=\{A,E,T\}}(h_{e}|h_{e})_{(X)}\right]^{ 1/2}, \tag{32}\] where \(h_{e}\) is evaluated at the true parameters \(\mathbf{\theta}_{\text{tr}}\). The optimal SNR over each TDI stream \(X=\{A,E,T\}\) denotes the average power of the signal when compared to the root-mean-square average of the noise floor. The greater the SNR of the signal \(h_{e}\) in Eq. (26), the greater the likelihood of claiming detection. Previous works [109, 33] set, rather arbitrarily, the SNR threshold to claim detection as \(\rho_{AET}\gtrsim 20\). The sources selected in this work are well above this threshold, with \(\rho_{AET}\sim 70\) in the EMRI regime and \(\rho_{AET}\sim 340\) in the IMRI regime. ### Bayesian parameter estimation Parameter estimation in GW astronomy is typically performed using Bayesian inference. At the heart of Bayesian statistics lies Bayes' theorem, which, up to a normalization factor, is given by \[\log p(\mathbf{\theta}|d)\propto\log p(d|\mathbf{\theta})+\log p(\mathbf{\theta})\,. \tag{33}\] On the right-hand side, \(p(d|\mathbf{\theta})\) is the likelihood function, a probability distribution that describes the probability of observing the data stream given the parameters. Under our assumptions about the noise, we can use the Whittle-likelihood in Eq. (29). The density \(p(\mathbf{\theta})\) is the prior probability distribution representing a-priori beliefs on the parameter set \(\mathbf{\theta}\) before observing the data stream \(d\). We opt for uninformative, uniform prior distributions for \(\mathbf{\theta}\). Finally, the quantity \(p(\mathbf{\theta}|d)\) is the sought-for posterior distribution reflecting our beliefs on the parameters after our observations. The goal is then to generate auto-correlated samples from the posterior density \(p(\mathbf{\theta}|d)\) in order to compute summary statistics. Our analysis employs Markov-Chain Monte-Carlo (MCMC) techniques to sample from the posterior distribution. In particular, we use the _eryn_ sampler [110, 111], which is based on the emcee[112] code. Our proposal distribution is the default stretch proposal [113]. MCMC algorithms for EMRI inference are non-trivial, so we describe in more details our codes in App. A. In Sec. A.1 we give a brief overview of the _eryn_ and emcee samplers, discussing their strengths and weaknesses for the simulations presented in this work. The initial samples are chosen so that \(\mathbf{\theta}^{(0)}\approx\mathbf{\theta}_{\text{tr}}\) since our goal is to identify potential biases close to source parameters, rather than perform a search. More details on our choice of starting coordinates and prior choices (see Eq. (16)) are given in App. A.2. 
Given a chain of samples \(\mathbf{\theta}^{(i)}\sim p(\mathbf{\theta}|d)\), we define our "best-fit parameters" as the _maximum a posteriori_ (MAP) point estimate \(\mathbf{\theta}_{\text{bf}}=\text{argmax}_{\mathbf{\theta}}\{p(\mathbf{\theta}|d)\}\). The _best-fit_ parameters give the best "match" between the model template and the observed data stream \(d\). In other words, when using non-informative uniform priors, \(\mathbf{\theta}_{\text{bf}}\) maximizes both the likelihood function and posterior density. If the recovered parameters \(\mathbf{\theta}_{\text{bf}}\neq\mathbf{\theta}_{\text{tr}}\) then the MAP estimate \(\mathbf{\theta}\) gives a biased point estimate of the true source parameters \(\mathbf{\theta}_{\text{tr}}\). We neglect the instrumental noise \(n^{(X)}(t)\) present in Eq. (26), which implies that the Whittle-likelihood reduces to \[\log p(d|\mathbf{\theta})\propto-\frac{1}{2}\sum_{X=\{A,E,T\}}(h_{e}-h_{m}|h_{e}-h _{m})_{(X)}. \tag{34}\] In this way, we focus on the impact of biases on the parameters arising due to waveform mismodeling, which may otherwise be obfuscated by nuisance statistical fluctuations given by noise realizations. We also neglect the confusion noise sourced by galactic dwarf binaries [114] since it has not been implemented yet in the latest PSD \(S_{n}^{(X)}\)[104, 105]. If the white-dwarf background were included, the SNR of the signals would be lower, resulting in wider posteriors on the parameters. Nevertheless, the impact of the white-dwarf background on the SNR is marginal, hence it would not significantly affect our key results. ### Waveform systematics and detection We now outline our systematic tests used to compare waveform models. We begin first by describing our main systematic test defined in a Bayesian framework. We will then describe alternate statistical tests that are present in the literature. For a probability density \(p(\mathbf{\theta}|d)\), we define the 68% credible set, \(C_{p(\mathbf{\theta}|d)}\), of the samples as the probability \(P(\mathbf{\theta}\in C_{p(\mathbf{\theta}|d)})=0.68\). Let \(\tilde{p}(\mathbf{\theta}|d)\) represent an approximate model posterior, generated through inference using approximate model waveforms. We then define the systematic test \[\mathcal{C}[i]=\begin{cases}1&\theta^{i}_{\text{tr}}\in\hat{C}_{\bar{p}(\theta^{i} |d)},\\ 0&\text{otherwise},\end{cases} \tag{35}\] where \(\hat{C}_{\bar{p}(\mathbf{\theta}^{i}|d)}\) is the estimated _marginalized_ approximate posterior credible interval for parameter \(\theta^{i}\). Equation (35) generalizes the Cutler-Vallisneri CV criterion [53], since it accounts for non-Gaussian features in the posterior that are not captured by a Fisher Matrix-based approach. If \(\mathcal{C}[i]=(0)1\) for all recovered parameters \(\theta^{i}_{\text{bf}}\), then a waveform model is (un)suitable for parameter estimation. Put simply, if the true parameter is not contained within the 68% credible interval generated using approximate waveforms, then the waveform model is not suitable for statistical inference. An example is given in Fig. 7 in App. A. We stress that Eq. (35) is SNR dependent, since the size of the interval will increase (decrease) with a decrease (increase) of SNR. Since these analyses are computationally expensive, we adopted astrophysically motivated SNRs. One can deduce, as a very rough approximation, how the criterion (35) changes for brighter or dimmer sources by reducing the size of the credible interval as the SNR increases. 
We now describe further quantities that are used to draw comparisons between waveform models. The overlap function between two waveforms \(h_{1}^{(X)}\) and \(h_{2}^{(X)}\) is defined as \[\mathcal{O}(h_{1}^{(X)},h_{2}^{(X)})=\frac{(h_{1}|h_{2})_{(X)}}{\sqrt{(h_{1}| h_{1})_{(X)}(h_{2}|h_{2})_{(X)}}}. \tag{36}\] Then, over all channels \(X=\{A,E,T\}\), the total mismatch between two waveform models \(h_{1}\) and \(h_{2}\) is \[\mathcal{M}_{AET}(h_{1},h_{2})=1-\sqrt{\frac{1}{3}\sum_{X=\{A,E,T\}}\mathcal{ O}^{2}(h_{1},h_{2})}\,,\] where \(\mathcal{M}=0\) (\(\mathcal{M}=1\)) indicates that \(h_{1}\) and \(h_{2}\) are identical (orthogonal). In a similar way, we define \[\mathcal{M}^{\text{(inj)}} =\mathcal{M}_{AET}(h_{e}(\mathbf{\theta}_{\text{tr}}),h_{m}(\mathbf{ \theta}_{\text{tr}}))\, \tag{37}\] \[\mathcal{M}^{\text{(bf)}} =\mathcal{M}_{AET}(h_{e}(\mathbf{\theta}_{\text{tr}}),h_{m}(\mathbf{ \theta}_{\text{bf}})). \tag{38}\] We remark here that (38) is the usual _fitting factor_, computed after stochastically identifying parameters \(\mathbf{\theta}_{\text{bf}}\) that minimize the mismatch function. It is usual to perform systematic studies between waveform models by analyzing the difference of orbital phase, called dephasing, between two trajectories of the CO. We define two types of dephasing in \(\Phi_{\phi}\): one between two trajectories at the injected parameters \(\mathbf{\theta}_{\text{tr}}\) (Eq. (39)); and one between the two trajectories at inferred parameters \(\mathbf{\theta}_{\text{bf}}\) (Eq. (40)): \[\Delta\Phi^{\text{(inj)}} =\text{Max}\{\Phi_{\phi}^{\text{exact}}\}_{\mathbf{\theta}=\mathbf{ \theta}_{\text{tr}}}-\text{Max}\{\Phi_{\phi}^{\text{model}}\}_{\mathbf{\theta}= \mathbf{\theta}_{\text{tr}}}\,, \tag{39}\] \[\Delta\Phi^{\text{(bf)}} =\text{Max}\{\Phi_{\phi}^{\text{exact}}\}_{\mathbf{\theta}=\mathbf{ \theta}_{\text{tr}}}-\text{Max}\{\Phi_{\phi}^{\text{model}}\}_{\mathbf{\theta}= \mathbf{\theta}_{\text{bf}}}\,. \tag{40}\] Eq. (39) is the usual quantity used to estimate the accuracy requirements of EMRI waveforms. We comment that the two equations above are evaluated over the same time of observation. Finally, we refer to the accumulated SNR normalised by the optimal SNR as the quantity \[\rho^{\text{(inj)}}/\rho^{\text{(opt)}} =\rho^{\text{eff}}_{AET}(\mathbf{\theta}_{\text{tr}})/\rho^{\text{ opt}}_{AET}\,, \tag{41}\] \[\rho^{\text{(bf)}}/\rho^{\text{(opt)}} =\rho^{\text{eff}}_{AET}(\mathbf{\theta}_{\text{bf}})/\rho^{\text{( opt)}}_{AET}\,, \tag{42}\] for \(\rho^{\text{eff}}_{AET}(\mathbf{\theta})\) and \(\rho^{\text{opt}}\) defined in Eq. (31) and Eq. (32), respectively. The equations above represent the fraction of SNR (normalised between 0 and 1) accumulated throughout the inspiral. The quantity in Eq. (42) is useful to determine whether a reference true waveform model is detectable by an approximate model. The quantities in Eqs. (37 - 42) will be useful to compare against our Bayesian inference results. ## IV Results We now present our results on the Bayesian parameter inference with complete circular 1PA and mismodeled eccentric 0PA waveforms. In Sec. IV.1, we will investigate the impact of mismodeling EMRI templates by neglecting some (or all) of the post-adiabatic corrections. In Sec. IV.2, we will then focus our attention on constraining the secondary spin parameter. Finally, in Sec. IV.3 we will describe the impact of mismodeling 0PA eccentric templates. 
The inferred parameters are the following: the redshifted masses \(M\) and \(\mu\), the initial radial coordinate and initial phase \(r_{0}/M\) and \(\Phi_{\phi_{0}}\), respectively (both defined at time \(t=0\)), the luminosity distance to the source \(D_{\text{S}}\), the source sky position \((\theta_{\text{S}},\phi_{\text{S}})\) and the orientation of the orbital angular momentum \((\theta_{\text{K}},\phi_{\text{K}})\). Waveforms are generated in the source frame and transformed to the SSB frame via the response function. For the sake of clarity, if a model includes the spin of the secondary as a parameter it will be denoted "w/ spin" and otherwise "w/o spin". In Sec. IV.1, we will only infer the secondary spin \(\chi\) using the cir1PA w/ spin model, whereas in Sec. IV.2 we will infer the secondary spin with all approximate models. For sections IV.1 and IV.2, configurations for each of the mass ratios and the relative SNR can be found in Table 1. Details of our sampling algorithm can be found in App. A, including starting points and prior bounds. In this analysis, we will use the emcee sampler due to the simplicity of the likelihood structure in the case of circular orbits. ### Systematic biases -- missing 1PA terms for quasicircular orbits We first consider the impact of using mismodeled waveform templates on LISA science when attempting to extract full 1PA waveforms within the data stream. For each mass ratio \(\epsilon=\{10^{-5},10^{-4},10^{-3}\}\) with respective parameters given by Table 1, we perform four parameter estimation simulations. We inject a true reference signal cir1PA with (w) spin and recover with the following models each without (w/o) secondary spin \[h_{m}=\begin{cases}\text{cir1PA w/o spin},\\ \text{cir0PA+1PA-3PN w/o spin},\\ \text{cir0PA w/o spin}.\end{cases} \tag{43}\] The results for each studied mass ratio \(\epsilon=\{10^{-5},10^{-4},10^{-3}\}\) are displayed (from top to bottom respectively) in Fig. 1. In each of the three panels, the top rows (blue posteriors) are exact marginalized posteriors, generated when injecting and recovering with the exact model cir1PA w/ spin where the spin on the secondary is sampled over. We do not present the posteriors on the extrinsic parameters as they display near-to-zero bias with respect to the true parameters. The non-Gaussian features and shifts to the true posterior are a feature of the secondary spin, which will be discussed in Sec. IV.2. We begin by discussing the case with the smallest mass ratio, \(\epsilon=10^{-5}\). Referring to the top panel of Fig. 1, we see that both the approximate models cir1PA w/o spin (green) and cir0PA + 1PA-3PN w/o spin (red) are suitable for parameter estimation of full 1PA waveforms. Each posterior shows statistically insignificant biases at \(\rho_{AET}\sim 70\) with Eq. (35) resulting in \(\mathcal{C}=1\) for all parameters. Remarkably, the parameters of the exact cir1PA w/ spin model can be correctly inferred, with statistically insignificant biases, by the cir0PA+1PA-3PN model. We remind the reader that the latter contains 0PA information with a 1PA term approximated by a resummed 3PN expansion. The last row of the top panel in Fig. 1 shows a cir1PA w/ spin model recovered with our least faithful model, cir0PA w/o spin. The intrinsic parameters show statistically significant biases with \(\mathcal{C}=0\), but the recovered parameters are very similar to the true ones. 
For example, given the true primary mass, \(M=10^{6}M_{\odot}\), our best-fit parameter is only \(\sim 10M_{\odot}\) away in magnitude. This quantitatively confirms that adiabatic models would be fine for detection purposes, but not for statistical inference. We now discuss our simulations for \(\epsilon\sim 10^{-4}\), given by the middle panel in Fig. 1. The marginalized Gaussians in the top row of the middle panel all exhibit heavy tails. This is due to correlations between the intrinsic parameters and secondary spin, which will be discussed in Sec. IV.2. Since the approximate models do not contain spin, there are no such correlations and the resultant posteriors resemble their familiar Gaussian shapes for high SNRs. With all approximate model templates, we see significant biases (\(\mathcal{C}=0\)) across the intrinsic parameters when neglecting post-adiabatic terms at \(\rho_{AET}\sim 65\). The most dramatic bias arises when using the cir0PA w/o spin model to recover the exact cir1PA w/ spin reference model. We conclude here that our approximate models summarized in Eq. (43), without secondary spin, are unsuitable for parameter estimation. Finally, we considered an IMRI with \(\epsilon=10^{-3}\), given by the bottom panel in Fig. 1. There are clear differences between the top row (inject/recover with exact model) and the bottom three rows where we recover the reference model with approximations (43) absent of spin. In the top row of the bottom panel, the marginalized posteriors are not Gaussian and are non-trivially skewed. This is a result of strong correlations between the intrinsic parameters and the secondary spin. The Gaussian posteriors on the second and third rows, representing model templates cir1PA w/o spin and cir0PA + 1PA-3PN w/o spin, show significant deviations from the true parameters with respect to their own statistical uncertainty (\(1\sigma\) width). From the approximate model distributions, the uncertainties on the recovered parameters are exceptionally small, reflected by the tight marginalized distributions. This is a consequence of neglecting the important correlations with the secondary spin. The recovered parameters are undoubtedly biased: the true intrinsic parameters do not lie within their 68% credible interval. Interestingly, the recovered parameters are contained within the 68% credible interval of the marginalized true posterior distribution, implying that they are consistent with the true parameter distribution. \begin{table} \begin{tabular}{c|c|c|c|c|c|c} Config. & \(\epsilon\) & \(M\,[M_{\odot}]\) & \(r_{0}/M\) & \(D_{\text{S}}\) [Gpc] & \(T_{\text{obs}}\) [yrs] & \(\rho_{AET}\) \\ \hline (1) & \(10^{-5}\) & \(10^{6}\) & 10.6025 & 1.0 & 2.0 & 70 \\ (2) & \(10^{-4}\) & \(10^{6}\) & 15.7905 & 2.0 & 1.5 & 65 \\ (3) & \(10^{-3}\) & \(5\cdot 10^{6}\) & 16.8123 & 1.0 & 1.0 & 340 \\ \end{tabular} \end{table} Table 1: Here we tabulate injection EMRI/IMRI parameters for all waveforms in sections IV.1 and IV.2. The IMRI configuration is given by the last row in the table. For each case, we use identical extrinsic parameters: \(\theta_{\text{S}}=\pi/3,\ \phi_{\text{S}}=\pi/4,\ \theta_{\text{K}}=2,\phi_{ \text{K}}=5,\Phi_{\phi_{0}}=1.5\). The secondary spin is fixed to the fiducial value \(\chi=0.5\). Figure 1: Marginalized posteriors with shaded 68% credible intervals generated by injecting a true reference model cir1PA and recovering using different models with different mass ratios. 
The top panel is with \(\epsilon=10^{-5}\), middle panel \(\epsilon=10^{-4}\) and bottom panel \(\epsilon=10^{-3}\). (**Blue:**) recovery with the true reference model cir1PA and sampling over the secondary spin parameter. (**Green:**) recovery using the cir1PA without spin. (**Red:**) recovery with cir0PA+1PA-3PN without spin. (**Purple:**) recovery with cir0PA (purely adiabatic) waveform. The black vertical dashed line indicates the true parameters. These simulations used parameters given by Table 1 for Configs. (1), (2) and (3) are given by the top row, middle row, and bottom row respectively. Orbital dephasings, mismatches, accumulated SNRs, and maximum log-likelihood values can be found in Table 2. This is alarming: not only are the incorrect parameters recovered, but our confidence that they are the "correct" ones is largely inflated due to the tightness of the posteriors. Finally, we see that the cirOPA w/o spin model features much stronger biases and constraints than both the cir0PA + 1PA-3PN w/o spin and cir1PA w/o spin models. In light of Eq. (35), we conclude that all models are unsuitable for parameter inference of the cir1PA w/ spin model at \(\rho_{AET}\sim 340\). In Table 2, we give a summary of details regarding the individual MCMC simulations for each small-mass-ratio binary configuration presented in Table 1. The details of the specific computations can be found in the caption of the table. One of the main features of this table is the small mismatch, and accumulated SNR normalized by the optimal SNR. In the worst case, \(\mathcal{M}\sim 10^{-3}\) and \(\rho^{\rm(bf)}/\rho^{\rm(opt)}\sim 99.1\%\) for \(\epsilon=10^{-3}\), between the injected cir1PA waveform and adiabatic cir0PA w/o spin model evaluated at the recovered parameters. This approximate template nearly matches the optimal matched filtering SNR, the SNR that would be attained if the exact model was used during inference. This is further evidence that, for quasicircular binaries, adiabatic models could be used for detection purposes. Finally, we remark from Table 2 that all unbiased results _satisfy_ the condition \(\Delta\Phi^{\rm(ini)}\lesssim 1\) radian. ### Constraining the secondary spin We now focus our attention on constraining the spin \(\chi\) of the CO. Similar to Sec. IV.1, we study each mass ratio \(\epsilon=\{10^{-5},10^{-4},10^{-3}\}\) with parameters given by Table 1, and perform three parameter estimation simulations. The injection is a cir1PA w/ spin model, and approximate waveforms are similar to (43) but with spin included: \[h_{m}=\begin{cases}\text{cir0PA+1PA-3PN w/ spin},\\ \text{cir0PA w/ spin}.\end{cases} \tag{44}\] Our corner plots for each of the \(\epsilon=\{10^{-5},10^{-4},10^{-3}\}\) are displayed in Figs. 2, 3 and 4, respectively. We will first discuss the \(\epsilon=10^{-5}\) case. From Fig. 2, we see that the parameter \(\chi\)_cannot_ be constrained for the \(\epsilon=10^{-5}\) case at \(\rho_{AET}\sim 70\). The marginalized posterior distribution for \(\chi\) is almost flat. This implies that our posterior information is not dominated by the likelihood (a function of the data), but instead dominated by the prior (a function of the parameters, irrespective of the data). We have tested various values of \(\chi=\{-1,0.5,0,0.5,1\}\), and in no situation can the secondary spin be constrained. The exact model cir1PA w/ spin and approximate cir0PA + 1PA-3PN w/ spin model are indistinguishable. 
When recovering the exact cir1PA w/ spin with the exact model \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c} \(\epsilon\) & **Model Waveform** & \(\Delta\Phi^{\rm(ini)}\) & \(\Delta\Phi^{\rm(bf)}\) & \(\mathcal{M}^{\rm(ini)}\) & \(\mathcal{M}^{\rm(bf)}\) & \(\rho^{\rm(in)}/\rho^{\rm(opt)}\) & \(\rho^{\rm(bf)}/\rho^{\rm(opt)}\) & \(\log\mathcal{L}^{\rm(ini)}\) & \(\log\mathcal{L}^{\rm(bf)}\) \\ \hline \hline \multirow{5}{*}{\(10^{-5}\)} & \multirow{3}{*}{Cir1PA w/o spin} & \multirow{3}{*}{0.779} & \multirow{3}{*}{0.0165} & \multirow{3}{*}{0.143} & \multirow{3}{*}{\(4.497\times 10^{-5}\)} & \multirow{3}{*}{83.4\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{\(-\)846} & \multirow{3}{*}{-0.250} \\ & & & & & & & & \\ \cline{1-1} & \multirow{3}{*}{Cir0PA 1PA-3PN w/o spin} & \multirow{3}{*}{0.786} & \multirow{3}{*}{0.00179} & \multirow{3}{*}{0.163} & \multirow{3}{*}{\(4.293\times 10^{-6}\)} & \multirow{3}{*}{81.5\%} & \multirow{3}{*}{99.8\%} & \multirow{3}{*}{\(-\)943} & \multirow{3}{*}{-0.0324} \\ & & & & & & & & \\ \cline{1-1} & \multirow{3}{*}{Cir0PA w/o spin} & \multirow{3}{*}{3.002} & \multirow{3}{*}{0.00532} & \multirow{3}{*}{0.889} & \multirow{3}{*}{\(2.412\times 10^{-6}\)} & \multirow{3}{*}{6.4\%} & \multirow{3}{*}{99.8\%} & \multirow{3}{*}{\(-\)4800} & \multirow{3}{*}{-0.0234} \\ \cline{1-1} \hline \hline \multirow{5}{*}{\(10^{-4}\)} & \multirow{3}{*}{Cir1PA w/o spin} & \multirow{3}{*}{3.994} & \multirow{3}{*}{0.00702} & \multirow{3}{*}{0.511} & \multirow{3}{*}{8.601\times 10^{-6}\)} & \multirow{3}{*}{30.3\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{-5019} & \multirow{3}{*}{-0.336} \\ & & & & & & & & & \\ \cline{1-1} & \multirow{3}{*}{Cir0PA 1PA-3PN w/o spin} & \multirow{3}{*}{4.310} & \multirow{3}{*}{0.0179} & \multirow{3}{*}{0.486} & \multirow{3}{*}{\(1.26\times 10^{-4}\)} & \multirow{3}{*}{34.2\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{-4799} & \multirow{3}{*}{-0.441} \\ \cline{1-1} & \multirow{3}{*}{Cir0PA w/o spin} & \multirow{3}{*}{13.093} & \multirow{3}{*}{0.0354} & \multirow{3}{*}{0.653} & \multirow{3}{*}{\(2.573\times 10^{-5}\)} & \multirow{3}{*}{19.0\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{-5506} & \multirow{3}{*}{-0.122} \\ \cline{1-1} \hline \hline \multirow{5}{*}{\(10^{-3}\)} & \multirow{3}{*}{Cir1PA w/o spin} & \multirow{3}{*}{4.518} & \multirow{3}{*}{0.00559} & \multirow{3}{*}{0.922} & \multirow{3}{*}{\(3.643\times 10^{-6}\)} & \multirow{3}{*}{3.3\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{-112938} & \multirow{3}{*}{-0.226} \\ \cline{1-1} & \multirow{3}{*}{Cir0PA 1PA-3PN w/o spin} & \multirow{3}{*}{4.882} & \multirow{3}{*}{0.0218} & \multirow{3}{*}{0.949} & \multirow{3}{*}{\(3.443\times 10^{-5}\)} & \multirow{3}{*}{3.4\%} & \multirow{3}{*}{99.9\%} & \multirow{3}{*}{-112827} & \multirow{3}{*}{-2.132} \\ \cline{1-1} & \multirow{3}{*}{Cir0PA w/o spin} & \multirow{3}{*}{14.958} & \multirow{3}{*}{0.153} & \multirow{3}{*}{0.938} & \multirow{3}{*}{\(6.854\times 10^{-3}\)} & \multirow{3}{*}{4.9\%} & \multirow{3}{*}{99.1\%} & \multirow{3}{*}{-122173} & \multirow{3}{*}{-524.798} \\ \end{tabular} \end{table} Table 2: Here we present a summary of computed statistics for various mass ratios \(\epsilon=\{10^{-5},10^{-4},10^{-3}\}\) (first column) when comparing an injected cir1PA waveform and approximate model templates (second column). We compute the orbital dephasings (Eqs. 39-40) (third and fourth columns); mismatch (Eqs. 37-38) (fifth and sixth columns); accumulated SNRs (Eqs. 41-42) (seventh and eighth columns); and, finally the log-likelihood function, Eq. 
(34), at the injected/recovered parameters. The top, middle, and bottom panels of this table correspond to the top, middle, and bottom panels of Fig. 1, respectively. itself and the approximate cir0PA + 1PA-3PN w/ spin model, statistically insignificant biases to the intrinsic parameters are observed. For the case with mass ratio \(\epsilon=10^{-5}\), the spin on the secondary can be compensated by the minor tweaking of the intrinsic parameters. Finally, we report statistically significant biases when employing the cir0PA w/ spin model to extract parameters from a cir1PA waveform. These biases are consistent with the top panel and fourth row of Fig. 1. To conclude, neither the exact model nor approximate model templates are able to detect the presence of the spin on the smaller companion for a mass ratio \(\epsilon=10^{-5}\) and \(\rho_{AET}\sim 70\). Our results for mass ratio \(\epsilon=10^{-4}\) with \(\rho_{AET}\sim 65\) are given in Fig. 3. The secondary spin has a more noticeable impact, and it can be constrained when using model templates given by cir1PA w/ spin and cir0PA + 1PA-3PN w/ spin. From the 2D marginalized posteriors, it is evident that the distributions on the intrinsic parameters and secondary spin are not Gaussian due to the presence of heavy tails, highlighting strong correlations between these parameters. Recall the red posterior in the middle panel of Fig. 1, where biases on parameters were observed if we recovered a cir1PA w/ spin template using a cir0PA + 1PA-3PN w/o spin model. We see from the red posterior in Fig. 3 that including the spin on the cir0PA + 1PA-3PN model eliminates the biases on parameters, making it completely indistinguishable from the true cir1PA w/ spin model. This indicates that the cir0PA + 1PA-3PN would be suitable for parameter estimation, _only if_ the secondary spin parameter was included in the approximate model [115]. Finally, we see that the cir0PA w/ spin model fails to constrain the secondary spin, with biases consistent with the second panel and fourth row of Fig. 1. Thus, neglecting 1PA components of the GSF will have a detrimental effect on recovering the spin of the smaller companion. To conclude this section, we now discuss the impact of the secondary spin on IMRIs with a mass ratio \(\epsilon=10^{-3}\) with \(\rho_{AET}\sim 340\). Our results are displayed in Fig. 4 for each of the various model templates. In contrast to the previous results for smaller mass-ratios, here we find the only waveform suitable for parameter estimation is the exact model cir1PA w/ spin. The cir0PA + 1PA-3PN w/spin model yields biases on the intrinsic parameters and the secondary spin, although it accounts for correlations between the parameters. By contrast, the cir0PA w/ spin model exhibits significantly stronger biases because it does not correctly represent such correlations due to the lack of 1PA information. This can be seen in the 2D marginalized posteriors. The "hard cut-offs" observed in the two posteriors for cir1PA w/ spin and cir1PA + 1PA-3PN w/ spin are due to the spin on the secondary reaching the prior bounds. These cut-offs are not physical but merely a sampling artifact. We remark here that had we chosen a uniform prior with more support, say \(\chi\in[-2,2]\), then we could have made a wrong conclusion on the nature of the spinning companion using the cir0PA + 1PA-3PN w/ spin model. 
This analysis suggests that the bias on the secondary spin is unacceptable for all approximate models and one must be careful, in this case, of using PN results to approximate the 1PA components of the self-force. For IMRIs, it is essential that we have full access to 1PA waveforms when performing parameter estimation. We conclude this section by briefly discussing the impact of first post-adiabatic effects on small-mass-ratio binaries with \(\epsilon<10^{-5}\). Although the main sources considered in this work are small-mass-ratio binaries with \(\epsilon\in[10^{-5},10^{-3}]\), we have also studied a strong-field EMRI with \(\epsilon=10^{-6}\) at an SNR \(\rho_{AET}\sim 23\). Our analysis indicates that the secondary parameter cannot be constrained and that parameter estimation studies can be conducted with 0PA waveforms. That is, the 1PA components of the self-force are negligible for such small mass-ratios. We do not present our posterior results here, but the result can be extrapolated from 0PA results of Fig. 1. The primary mass \(M\) shows the strongest level of bias, with value approximately \(\sim\epsilon M\). For a mass ratio \(\epsilon=10^{-6}\), the bias on the primary mass \(M\) is \(\mathcal{O}(1M_{\odot})\). This bias is well contained within the statistical error given by the approximate posterior, making 0PA waveforms suitable for parameter estimation. This implies that the true parameter is contained within the 68% credible interval of the 0PA approximate posterior, satisfying Eq. (35) with \(\mathcal{C}=1\). We conclude that adiabatic waveforms are suitable for _both_ search and characterizing 1PA waveforms at \(\epsilon=10^{-6}\), at least for quasicircular systems with nonspinning primaries. In the analysis above we have tested three specific EMRI/IMRI configurations at mass-ratios \(\epsilon\in\{10^{-6},10^{-5},10^{-4},10^{-3}\}\) with SNR \(\rho_{AET}\in\{22,70,65,340\}\) respectively. We remind the reader that the conclusions in the four paragraphs above are governed by two quantities: (1) the specific configuration of parameters describing the system, the resultant SNR and (2) the _geometry_ of the inspiral and resultant waveform. Our Schwarzschild inspirals are strong-field orbits with trajectories evolved to within \(\dot{r}\sim 6.27\). However, the orbits could evolve much closer to the horizon for a spinning primary. The increase in the number of orbits, compounded with closer-horizon geometry, would increase significantly the precision in parameters _and_ the SNR. This is strongly supported by work in [33], see Fig. 6 and Fig. 11. See also Ref. [116] for a detailed discussion on enhancements on parameter precision for circular orbits into rotating MBHs. For more complex orbits, it is then expected that one could achieve SNRs exceeding those of what we present here. As a result, the accuracy requirements on EMRIs would become more stringent. On the other hand, some might view our choices of luminosity distances as overly optimistic. We assume a flat-\(\Lambda\)CDM cosmological model with matter density \(\Omega_{\rm m}=0.274\), dark energy density \(\Omega_{\Lambda}=0.726\) and Hubble constant \(H_{0}=70.5{\rm km\ s}^{-1}\,{\rm Mpc}\). Our choices for the luminosity distances \(D_{\rm S}\in\{1,2\}\)Gpc correspond to small redshifts \(z_{\rm S}=\{0.203,0.371\}\), less than \(z_{\rm S}=1\). By comparison, \(z_{\rm S}=1\) corresponds to a luminosity distance of \(D_{\rm S}(z_{\rm S}=1)=6.716\,{\rm Gpc}\). Fig. 9 in Ref. 
[33] represents the redshift distributions of detected EMRI events assuming 12 distinct astrophysical models. All such distributions are peaked between \(z_{\rm S}\in[1,2]\). Thus, our choices represent _golden_ EMRIs: strong-field in their orbital characteristics and placed at low redshifts \(z_{S}\ll 1\). Assuming an astrophysically relevant luminosity distance \(D_{\rm S}=6.67\), the SNR of our sources would decrease significantly since \(\rho\sim 1/D_{L}\). In fact, our \(\epsilon=10^{-5}\) case in Table 1 would only reach an SNR \(\rho_{AET}\sim 11\), which is lower than the detection threshold for EMRIs. This highlights a fundamental point. Since the statistical error on the parameters scales with the SNR, one can choose an SNR \(\sim 20\) for \(\epsilon=10^{-5}\) such that 0PA waveforms are suitable for parameter estimation of 1PA waveforms. For a more relevant, but still quite small, luminosity distance \(D_{S}\sim 3\,{\rm Gpc}\), corresponding to \(z_{\rm S}=0.52\), \(\rho_{AET}\sim 20\) for \(\epsilon=10^{-5}\). The true parameter could then be within the 68% credible interval of the purple posterior in Fig. 1. For such a system we could then conclude that adiabatic waveforms are suitable for _both_ detection and parameter extraction of 1PA waveforms. Similarly, one could place the source at exceptionally low redshift \(z_{\rm S}=2.5\times 10^{-4}\), giving a luminosity distance \(D_{\rm S}=0.01\,\) Gpc and increasing the SNR by a factor of 100 in the \(\epsilon=10^{-5}\) case. The spinning secondary may be observable in such a situation, but the probability that such an EMRI will be observed is essentially zero according to [33]. In conclusion, we have chosen parameter sets such that all of our orbits are astrophysically sound, allowing us to draw reasonable conclusions. Within the range of realistic sources, we have focused on ones that are loud enough to be of most interest for high-precision science. The high-accuracy models built by the self-force community naturally aim for these golden EMRIs (and IMRIs), which have high SNRs and are in the strong-field regime. Figure 2: **(Mass ratio \(\epsilon=10^{-5}\)):** Here we inject an EMRI waveform with _spinning_ CO on a circular orbit with parameters \(M=10^{6}M_{\odot}\), \(\mu=10M_{\odot}\), \(r_{0}/M=10.6025\), \(D_{S}=1\,\)Gpc and extrinsic parameters given in the caption of Table I. The magnitude of the spinning secondary is \(\chi=0.5\), the SNR is \(\rho_{AET}\sim 70\) and the time of observation is \(T_{\rm obs}=2\) years. The blue, red, and purple parameter posteriors are generated when recovering with a cir1PA w/ spin model, cir0PA + 1PA-3PN w/ spin model, and cir0PA w/ spin model, respectively. The black vertical lines indicate the true parameters. The take-home message is that the spin of the secondary _cannot_ be constrained for \(\rho_{AET}=70\) and the models considered here. Figure 3: **(Mass ratio \(\epsilon=10^{-4}\)):** Here we inject an EMRI waveform with _spinning_ CO on a circular orbit with parameters \(M=10^{6}M_{\odot}\), \(\mu=100M_{\odot}\), \(r_{0}/M=15.7905\), \(D_{S}=2\) Gpc and extrinsic parameters given in the caption of Table I. The magnitude of the secondary spin is \(\chi=0.5\), the SNR is \(\rho_{AET}\sim 65\) and the time of observation is \(T_{\rm obs}=1.5\) years. The blue, red, and purple parameter posteriors are generated when recovering with a cir1PA w/ spin model, cir0PA + 1PA-3PN w/ spin model, and cir0PA w/ spin models respectively. 
The black vertical lines indicate the true parameters. The take-home message here is that the spin of the secondary _can_ be constrained using either the cir1PA w/ spin or cir1PA + 1PA-3PN w/ spin model. The cir0PA w/ spin model yields significant biases and _cannot_ constrain the spin of the secondary. Figure 4: **(Mass ratio \(\epsilon=10^{-3}\)):** Here we inject an EMRI waveform with _spinning_ CO on a circular orbit with parameters \(M=5\cdot 10^{6}M_{\odot}\), \(\mu=5000M_{\odot}\), \(r_{0}/M=16.81230\), \(D_{S}=1\,\)Gpc and extrinsic parameters given in the caption of Table I. The magnitude of the secondary spin is \(\chi=0.5\), the SNR is \(\rho_{AET}\sim 340\) and the time of observation is \(T_{\rm obs}=1\) year. The blue, red, and purple parameter posteriors are generated when recovering with a cir1PA w/ spin model, cir0PA + 1PA-3PN w/ spin model, and cir0PA w/ spin model, respectively. The black vertical lines indicate the true parameters. The take-home message here is that all 1PA terms (including the spin on the secondary) must be included to perform parameter estimation. The PN approximated waveform cir0PA + 1PA-3PN w/ spin shows significant biases and yields biased results on the secondary spin. The cir0PA w/ spin model is unable to account for correlations and constrain the spin on the secondary body. ### Systematic biases -- mismodeling evolution of eccentric orbits Our previous analyses focus on a very restricted class of binary configurations, and it is not obvious how our results will extend to more generic systems. In this section we begin to explore that question by assessing the potential impact of mismodeling EMRI waveforms for eccentric orbits. As 1PA models do not yet exist for eccentric orbits, we inject waveforms with our adiabatic nec0PA model (see Sec. II.2) and attempt to recover them with our approximate ecc0PA-9PN model. We consider injected waveforms with various values of eccentricity \(e=\{0,0.01,0.1,0.2\}\). The trajectories are evolved using a low-eccentricity 9PN expansion, exhibiting slow convergence as the eccentricity increases. We do not consider eccentricities \(e>0.2\) for two reasons. The first is the potential for the PN expansion to break down, yielding artificially inflated biases that could be less severe in reality. The second reason is the enormous difficulty for the sampler to converge to the point of highest likelihood. For \(e=0.3\), we noticed that the number of secondary maxima in the likelihood surface grew significantly in comparison to \(e\in\{0.1,0.2\}\). This means that the individual chains of the sampler got stuck for \(\sim\mathcal{O}(10^{3})\) iterations even with \(\sim 10\) temperatures (see App. A for more details). Whether this is an artifact of eccentricity itself, or the PN expansion breaking down, is unclear. For the aforementioned reasons, we consider weak-field orbits with low eccentricities to avoid exaggerating the impact of eccentricity when mismodeling templates. For all cases presented in this section, the injected waveforms' parameters are set to \(M=10^{6}M_{\odot}\), \(\mu=10M_{\odot}\), and initial semi-latus rectum \(p_{0}=9.86\). The trajectories are evolved for \(T_{\rm obs}=1\) year until a final semi-latus rectum of \(p\sim 8.1\) is reached. We do not evolve the trajectories further than this point due to the breakdown of the PN expansions in the strong-field regime. 
We choose the same extrinsic parameters as presented in Table 1, but with luminosity distance \(D_{L}=0.7\,\)Gpc and initial radial phase \(\Phi_{r_{0}}=3\) for all cases. For each eccentricity \(e=\{0,0.01,0.1,0.2\}\), Eq. (39) gives a dephasing on the order \(\Delta\Phi^{\rm(ini)}\approx\{14,15,19,35\}\) radians, respectively, between the two models. Finally, the number of modes in the waveform for both models, #modes, for each chosen eccentricity \(e\) is \((e,\#\text{modes})=\{(0,12),(0.01,12),(0.1,60),(0.2,94)\}\) respectively. Finally, both circular and eccentric waveform models yield similar SNRs on the order of \(\rho_{AET}\sim 70\). The details of how we used \(\mathtt{eryn}\) for our eccentric parameter estimation simulations are presented in App. A. We begin with the circular orbit case with the result shown in Fig. 5. The details on the individual runs can be found in the caption. We treat the parameters \(e=0\) and \(\Phi_{r_{0}}=3\) as known, and we do not sample over them. The intrinsic parameters exhibit statistically significant biases, whereas the extrinsic parameters are unbiased. Notice that the "directions" of the biases are similar to those presented when recovering (circular) post-adiabatic waveforms with adiabatic templates in Fig. 1. Figure 5 will be our reference figure when making direct comparisons with eccentric orbits. We perform a series of similar simulations of recovering an exact adiabatic model (ecc0PA), with an approximate (ecc0PA-9PN) model template with small to moderate eccentricities \(e=\{0.01,0.1,0.2\}\). The case with \(e=0.01\) is qualitatively similar to Fig. 5, so we will not present it here. The results for \(e=0.2\) are shown in Fig. 6. All the intrinsic parameters show severe levels of biases, stronger compared to the circular orbits. Furthermore, unlike the circular case, the angular parameters \(\{\theta_{S},\phi_{S},\theta_{K},\phi_{K}\}\) and initial phases \(\{\Phi_{\phi_{0}},\Phi_{r_{0}}\}\) show statistically significant levels of bias. It should be noted that the magnitude of the biases on all parameters increases significantly between the \(e=0\) and \(e=0.2\) cases presented in Fig. 5 and Fig. 6, respectively. The biases on the extrinsic parameters for eccentric orbits stem from two effects. The first is the correlations between the parameters, which are more pronounced compared to circular orbits. Unlike the circular orbit case, the intrinsic and extrinsic parameter spaces are not orthogonal: minor tweaks to the intrinsic parameters can be compensated by minor tweaks in the extrinsic parameters. The second reason is that the orbital evolution is more complex, resulting in a LISA-responsed waveform with a richer structure in comparison to its circular counterpart. A biased result on the intrinsic parameters (mainly eccentricity) will induce modulations to the approximate model template. Minor shifts to the angular parameters will induce further modulations due to the presence of the LISA response function. The tweaking of the angular parameters in response to the biases in the intrinsic parameters appears to minimize the mismatch between the two signals. This can be explored by comparing the ecc0PA waveform at the true parameters and ecc0PA-9PN waveform at the recovered parameters, but fixing the angular parameters \(\{\theta_{S},\phi_{S},\theta_{K},\phi_{K}\}\) to their true values. 
We obtain mismatches \(\mathcal{M}\sim 0.283\) and accumulated SNR \(\rho^{\rm(bf\;w\;/\;inj\;\text{angle})}/\rho^{\rm(opt)}\sim 68\%\), indicating that the two waveforms quickly go out of phase. Comparing the true ecc0PA signal with ecc0PA-9PN evaluated at the recovered parameters, including the recovered angular parameters, yields \(\mathcal{M}\sim 10^{-2}\) and accumulated SNR \(\rho^{\rm(bf)}/\rho^{\rm(opt)}\sim 98\%\), indicating that the approximate model template remains in phase with the true reference signal for a much longer duration. For circular orbits, there are weak correlations between the intrinsic and extrinsic parameters, demonstrated by the marginalized 2D distributions in Fig. 5. Minor shifts to the intrinsic parameters will not affect the harmonic structure, and thus there will be no biases across the extrinsic parameters. Figure 5: **(Circular Orbits:)** Here we consider as injection an ecc0PA model (vanilla adiabatic FEW) with parameters \(M=10^{6}M_{\odot}\), \(\mu=10M_{\odot}\), \(r_{0}/M=9.6\), \(e=0\). The trajectory evolves for one year and terminates at a radial coordinate of \(r_{0}/M\approx 8.1\), giving an SNR \(\rho_{AET}\sim 70\). We treat parameters related to eccentricity \(\{e=0,\Phi_{r_{0}}=3\}\) as known. The blue posterior is generated through recovery using the injected model and the red posterior is built using the approximate eec0PA-9PN model. The black vertical lines indicate the location of the true parameters. Figure 6: **(Eccentric orbits:)** The same parameters and set up as Fig. 5 but with \(e=0.2\) and \(\Phi_{r_{0}}=3\). Compared to Fig. 5, the biases on the intrinsic parameters are more severe across the parameter space. We also observe severe biases on the extrinsic parameters, a feature not seen for the circular orbit case presented in Fig. 5. Summary In this section, we summarize the results presented in Sec. IV. For the configuration of parameters in this work (see Table 1), adiabatic templates _are not suitable_ for EMRI parameter estimation, where the exact model contains full post-adiabatic information. Adiabatic templates though, are suitable for detection purposes in all cases. We also highlight the performance of suitably re-summed third-order PN expansions when approximating the 1PA components of the self-force. For mass ratios \(\epsilon=10^{-5}\) and \(\epsilon=10^{-4}\), such approximate PN-based models are suitable for EMRI data analysis (at least for the simple binary configurations we consider), assuming that the spin on the smaller companion is included. We also found that such PN-based approximate waveforms break down within the IMRI regime \(\epsilon=10^{-3}\), where full post-adiabatic information on the model templates is required. Neglecting the spin on the secondary and/or the 1PA components of the self-force results in both significant biases and fictitiously tighter constraints on parameters. This is due to neglecting the significant correlations between the secondary spin and intrinsic parameters. We conclude that the spin of the secondary should always be retained and only post-adiabatic templates should be used to characterize post-adiabatic models. The feasibility of constraining the secondary spin was then discussed. For \(\epsilon=10^{-5}\), the secondary spin cannot be constrained for our choice of SNR \(\rho_{AET}\sim 70\), whereas, for \(\epsilon=10^{-4}\) and \(\epsilon=10^{-3}\), it could be constrained at \(\rho_{AET}\sim 65\) and \(\rho_{AET}\sim 340\), respectively. 
In the IMRI regime, the secondary spin and intrinsic parameters exhibit strong correlations, which significantly degrade the precision measurement of the intrinsic parameters. Finally, we investigated the impact of mismodeling eccentric binaries. By injecting an adiabatic eccentric waveform and recovering with a ninth-order PN-based waveform, we observed severe biases across _both_ the intrinsic and extrinsic parameters, whereas only the intrinsic parameters are biased for purely circular orbits. It is impossible to say, for now, how this will translate when using 0PA waveforms for inference on 1PA waveforms. However, we expect that the severity of the biases with respect to circular orbits will increase across the full parameter space. We conclude here that care must be taken for both the modeling and parameter inference of eccentric 1PA waveforms. ## VI Discussion This paper presented, for the first time, a detailed Bayesian study using MCMC with state-of-the-art _first post-adiabatic_ EMRI and IMRI waveforms for circular orbits in the Schwarzschild spacetime. We showed that neglecting 1PA terms will induce statistically significant biases in the parameters. However, we can still detect and characterize the first post-adiabatic waveform in the data stream using adiabatic waveform models with biased parameters. We have confirmed that adiabatic waveforms are only suited for detection purposes (at least for \(10^{-5}\lesssim\epsilon\lesssim 10^{-3}\)). The systematic errors are subjectively quite small for adiabatic waveforms and small mass-ratios4, and might not be relevant for some astrophysical applications like population studies [117; 118]. However, applications within fundamental physics (like testing alternative/modified theories of gravity [119; 120; 121; 122], investigating the nature of massive black holes [123; 124; 125], the presence of additional fields [126; 4; 127], and so on) crucially rely on both precise and accurate measurements of the binary parameters. Even relatively small statistically significant systematic errors could spoil the enormous scientific potential of EMRIs. Thus, first post-adiabatic waveforms are _essential_ in order to reap the full scientific rewards of EMRI data analysis. Footnote 4: For example, the top panel of Fig. 1 shows that we can recover the primary mass \(M=10^{6}M_{\odot}\) with a bias on the order of \(\sim 10M_{\odot}\). Bayesian methods are the gold standard technique when studying waveform systematics because they do not introduce any approximations from a statistical point of view. Fisher matrices, the Lindblom criterion, overlaps/mismatches, and orbital dephasings must be used with caution. Such systematic tests are suitable for exploration and gaining insight into the accuracy of the waveform models, but conclusions must be taken lightly with respect to their accurate Bayesian counterpart. Fisher matrices are hard to accurately compute for EMRIs (see the works of Refs. [116; 4; 17; 72]), potentially leading to false conclusions on biases and precision statements of parameters. Mismatches give no information about potential biases on parameters, and fully optimized fitting factors such as Eq. (38) can only be calculated through stochastic sampling algorithms. Similarly, the Lindblom criterion is overly conservative [128], and if taken at face value could force unreasonably stringent accuracy requirements on waveform templates. 
Finally, comparisons of the orbital dephasings between two models are performed at the trajectory level, and so neglect the waveform structure, SNR of the source or correlations between parameters. These systematic studies tell an important story, but proper Bayesian inference completes the picture. For example, we have shown that the posterior densities of EMRIs and IMRIs can yield non-Gaussian features when the secondary spin is included, becoming ever more dramatic as \(\epsilon\gg 10^{-5}\). No other systematic test can reveal such interesting features. This importance of Bayesian methods can be seen in Sec. IV.1, which presents the parameter distributions from mis-modelling in Fig. 1 and the associated summary statistics in Table 2. We highlight from this analysis that statistically significant biases _are not observed_ when the orbital dephasing \(\Delta\Phi^{\rm(ini)}\lesssim 1\) radians (see Eq. (39)). From the top panel and bottom row of Fig. 1, for \(\epsilon=10^{-5}\), the orbital dephasing between the 0PA and 1PA waveforms is \(\sim 3\) radians. We see that strong biases are observed, yet a concrete detection is made with accumulated SNR at the best-fit parameters \(\rho^{\rm(bf)}/\rho^{\rm(opt)}\sim 99.8\%\). This is further exemplified in the bottom panel, where there is a \(\sim 15\) radian difference between the 0PA and 1PA waveforms and \(\rho^{\rm(bf)}/\rho^{\rm(opt)}\sim 99.1\%\), resulting in a clear detection. This is comforting to see, as it implies that the requirements often used by the modelling community (e.g., that orbital phase errors should be \(\Delta\phi\lesssim 1/{\rm SNR}\) rad [129]) may be too stringent. However, the orbital trajectories completely neglect the SNR of the system, a _crucial_ ingredient for both detection and parameter estimation. Relying on only orbital dephasing should be taken with caution. Similarly, large mismatches between the approximate and exact waveform model (for identical parameter values) might suggest that the waveform is not detectable with the approximate model. But this is not the case: Table 2 contradicts it, giving small mismatches \(\mathcal{M}\sim 10^{-3}\) at the recovered parameters for the largest of mass ratios \(\epsilon=10^{-3}\) even though there are large mismatches \(\mathcal{M}\sim 0.938\) at the injected parameters. In conclusion, our results reinforce that Bayesian inference is key to making significant claims about approximations of waveforms. Another outcome of our work is that we have shown the importance of including the secondary spin parameter in our waveform models. Although it appears impossible to constrain at mass ratios \(\epsilon\leq 10^{-5}\), a measurement can be made at higher mass ratios \(\epsilon\geq 10^{-4}\). Correlations between the intrinsic parameters and the secondary spin are significant, leading to degraded measurements for IMRIs only if the spin parameter is included. We have demonstrated that neglecting the secondary spin causes not only a bias but fictitiously tight constraints on the other intrinsic parameters. This becomes more prominent as the mass ratio increases. Furthermore, we have shown that knowledge of the 1PA components of the GSF is essential when trying to measure the secondary spin. It may not possible to make definite conclusions on the potential constraints on the secondary spin for genetic orbits without including other post-adiabatic terms. Finally, we assessed the importance of eccentricity when mismodeling eccentric templates. 
At the moment, only some self-force contributions at 1PA are known for eccentric orbits, therefore it is not possible yet to make tests similar to Sec. IV.1 and Sec. IV.2. Instead, we injected an adiabatic waveform and attempted to recover with an approximate adiabatic waveform with trajectories evolved through 9PN fluxes expanded in eccentricity. For low to moderate eccentricities, \(e\in\{0,0.01,0.1,0.2\}\), it is clear that the severity of the biases worsens in comparison to the circular orbit case as \(e\) increases (cf. Figs. 5 and 6). For moderate eccentricities, biases are observed across the entire parameter space, notably in the angular parameters and initial phases. Such biases in the sky position would be unacceptable for studies within cosmology, where the construction of galaxy catalogs allows one to infer cosmological parameters, such as the Hubble constant [130]. Our analysis show that the inclusion of eccentricity will complicate the picture. More work is required on the self-force and data analysis front to understand the impact of these orbits on EMRI parameter estimation. ## VII Future work The work presented here has only scratched the surface of EMRI accuracy requirements. Clearly, it is essential to repeat this analysis once second-order self-force results become available for more generic orbits. Indeed, generic orbits may break potential degeneracies with the secondary spin parameter, leading to improved constraints at smaller mass ratios \(\epsilon\sim 10^{-5}\). We have also shown remarkable success with the use of resummed PN expansions when approximating the 1PA components in the context of parameter estimation. For mass ratios \(\epsilon\gtrsim 10^{-4}\), one could perform preliminary studies on the secondary spin parameter, assuming suitable 1PA-\(n\)PN results were available for more general orbits. Moreover, we observed that, due to correlations, the secondary spin parameter deteriorates the precision with which the intrinsic parameters can be recovered. It would then be interesting to understand whether there exist degeneracies between the secondary spin and the scalar charge due to extra scalar fields, which may spoil the precision measurements on the latter [127, 4, 5]. The field of EMRI systematics can now answer some crucial practical questions for modeling EMRI waveforms. For instance, calculating the first-order self-force for generic Kerr inspirals is very expensive with just a single point in the parameter space taking \(\mathcal{O}(10^{4})\) CPU hours [27]. One then has to repeat this calculation many times across the 4-dimensional generic Kerr parameter space to produce interpolants for the 0PA equations of motion, and then repeat the process at 1PA. It is important to estimate the required accuracy at each point, the minimum number of points, and their optimal placement, for both the 0PA and 1PA contribu tions in order to avoid biasing parameters significantly. The choice of a suitable interpolation method is important as well. It is still up for debate whether to use Chebychev-interpolation methods (which have favorable convergence properties) or less expensive splines. We conclude by discussing one last topic that is still relatively untouched: the EMRI search problem. The search problem has been "solved" in extremely simplified circumstances by various groups within the LISA community [131; 132; 133; 134]. The underlying noise properties were well understood and tight priors were placed on the parameters to recover. 
Furthermore, many of these groups exploited the analytical features of the injected model (the self-inconsistent "Analytical Kluudge" waveforms from Ref. [95]), and the known structure of the likelihood local maxima to reach the global maximum indicating the true parameters. It is unknown whether fully relativistic waveforms will simplify or complicate the search problem. EMRI search is a difficult open problem, the authors hope that work can restart on this vital research topic thanks to the advent of FEW and access to accurate waveforms. ###### Acknowledgements. O. Burke thanks both the University College Dublin Relativity Group and the Albert Einstein Institute for hosting him during the preparation of this work. He also gives thanks to Alvin Chua, Christopher Chapman-Bird, Leor Barack, Maarten Van de Meent, Sylvain Marsat, Michael Katz and Jonathan Gair for various insightful discussions. This project has received financial support from the CNRS through the MITI interdisciplinary programs. GAP acknowledges support from an Irish Research Council Fellowship under grant number GOIPD/2022/496. He also thanks Alvin Chua and Enrico Barausse for insightful discussions. NW acknowledges support from a Royal Society - Science Foundation Ireland University Research Fellowship. This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 22/RS-URF-R/3825. AP acknowledges the support of a Royal Society University Research Fellowship and a UKRI Frontier Research Grant under the Horizon Europe Guarantee scheme [grant number EP/Y008251/1]. He also thanks Alvin Chua and Leor Barack for helpful discussions. CK acknowledges support from Science Foundation Ireland under Grant number 21/PATH-S/9610. This work has used the following python packages:emcee, eryn, lisa-on-gpu, scipy, numpy, matplotlib, corner, cupy, astropy, chainconsumer and listools [105; 110; 111; 112; 113; 114; 115; 116; 117; 118; 119; 120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134]. This work makes use of the following packages of the Black Hole Perturbation toolkit [63]: FastEMRIWaveforms [141], KerrGeodesics [91] and PostNewtonianSelfForce[93]. ## Author contributions OB: Conceptualization, Data curation, Formal analysis: MCMC simulations, Investigation, Methodology, Project administration, Resources, Software: All data analysis techniques, Supervision, Validation, Visualization: All plots, Writing - original draft. GP: Conceptualization, Data curation: secondary spin fluxes, Methodology, Validation: Fisher based estimates, Investigation, Writing - original draft. NW: Conceptualization, Data curation: second-order self-force results, Methodology, Software: implementation of the waveform models, Writing - Review & Editing. PL: Conceptualization, Methodology, Software: implementation of the waveform models, Writing - Review & Editing. LS: Conceptualization, Methodology, Software: implementation of the waveform models, Visualization, Writing - Review & Editing. CK: Conceptualization, Methodology, Software: PN expansions, Writing - Review & Editing. BW: Conceptualization, Data curation: second-order self-force results, Methodology, Writing - Review & Editing. AP: Conceptualization, Data curation: second-order self-force results, Methodology, Writing - Review & Editing. LD: Data curation: second-order self force results JM: Data curation: second-order self force results
2303.01185
A polynomial time algorithm for calculating Fourier-Dedekind sums
We solve an open problem proposed in the book "Computing the continuous discretely" written by Matthias Beck and Sinai Robins. That is, we propose a polynomial time algorithm for calculating Fourier-Dedekind sums. The algorithm is simple modulo Barvinok's simplicial cone decomposition. It can be easily adapted into De Loera et al.'s LattE package, which gives a nice implementation of Barvinok's polynomial time algorithm.
Guoce Xin, Xinyu Xu
2023-03-02T11:56:37Z
http://arxiv.org/abs/2303.01185v1
# A polynomial time algorithm for calculating Fourier-Dedekind sums ###### Abstract. We solve an open problem proposed in the book "Computing the continuous discretely" written by Matthias Beck and Sinai Robins. That is, we propose a polynomial time algorithm for calculating Fourier-Dedekind sums. The algorithm is simple modulo Barvinok's simplicial cone decomposition. It can be easily adapted into De Loera et al.'s LattE package, which gives a nice implementation of Barvinok's polynomial time algorithm. _Mathematics Subject Classification_: Primary 11F20; Secondary 11Y16, 05A15, 11L03. _Keywords_: Dedekind sum; Fourier-Dedekind sum; Barvinok's algorithm; constant terms. ## 1. Introduction This draft is an announcement. A complete version will be finished soon. Dedekind sums are important number-theoretical objects that arise in many areas of mathematics, including number theory, geometry, topology, algorithmic complexity, etc. See, e.g., [3] for details and further references. Fourier-Dedekind sums unify many variations of the Dedekind sums that have appeared in the literature, and form the building blocks of Ehrhart quasipolynomials. The Fourier-Dedekind sum is defined by \[s_{n}(a_{1},a_{2},\ldots,a_{d};b)=\frac{1}{b}\sum_{k=1}^{b-1}\frac{\xi_{b}^{kn}}{(1-\xi_{b}^{ka_{1}})\cdot(1-\xi_{b}^{ka_{2}})\cdots(1-\xi_{b}^{ka_{d}})}, \tag{1.1}\] where \(a_{1},a_{2},\ldots,a_{d},b\in\mathbb{N}\), \(b>1\) is relatively prime to each \(a_{i}\), and \(\xi_{b}=e^{\frac{2\pi i}{b}}\). The following open problem about Fourier-Dedekind sums was proposed by Matthias Beck and Sinai Robins in [3]. **Problem 1** (Open Problem).: _It is known [2] that the Fourier-Dedekind sums are efficiently computable. Find a fast algorithm that can be implemented in practice._ We solve this open problem by giving a desired polynomial time algorithm using a constant term concept from [5] and a simple application of Barvinok's algorithm. The algorithm can be easily adapted into the package LattE by De Loera et al. [4]. ## 2. The polynomial time algorithm Throughout this section, we assume that \(a_{1},a_{2},\ldots,a_{d}\in\mathbb{N}\) are coprime to \(b\) unless specified otherwise. ### A brief introduction Here we need to write an Elliott rational function \(E\) in the following form: \[E=\frac{L(\lambda)}{\prod_{i=1}^{n}(1-u_{i}\lambda^{a_{i}})}, \tag{2.1}\] where \(L(\lambda)\) is a Laurent polynomial, the \(u_{i}\) are free of \(\lambda\), and the \(a_{i}\) are positive integers for all \(i\). The algorithm mainly relies on the following known results. **Proposition 1**.: _Suppose the partial fraction decomposition of \(E\) is given by_ \[E=P(\lambda)+\frac{p(\lambda)}{\lambda^{k}}+\sum_{i=1}^{n}\frac{A_{i}(\lambda)}{1-u_{i}\lambda^{a_{i}}}, \tag{2.2}\] _where the \(u_{i}\)'s are free of \(\lambda\), \(P(\lambda),p(\lambda),\) and the \(A_{i}(\lambda)\)'s are all polynomials, \(\deg p(\lambda)<k\), and \(\deg A_{i}(\lambda)<a_{i}\) for all \(i\). Then we have_ \[\operatorname*{CT}_{\lambda}E=P(0)+\sum_{i\,:\,u_{i}\lambda^{a_{i}}<1}A_{i}(0).\] **Definition 2**.: _We denote_ \[\operatorname*{CT}_{\lambda}\frac{1}{\underline{1-u_{s}\lambda^{a_{s}}}}\,E\cdot(1-u_{s}\lambda^{a_{s}}):=A_{s}(0).\] _In the general case, for any \(\varnothing\neq I\subseteq[n]\), we denote_ \[\operatorname*{CT}_{\lambda}\frac{1}{\underline{\prod_{s\in I}(1-u_{s}\lambda^{a_{s}})}}\,E\cdot\prod_{s\in I}(1-u_{s}\lambda^{a_{s}}):=\sum_{s\in I}A_{s}(0).\] Our algorithm is based on the following observation. 
**Proposition 3**.: _Suppose \(F(\lambda)\) is a rational function and \(F(\xi_{b}^{k})\) exists for \(k=1,2,\ldots,b\), where \(\xi_{b}=e^{\frac{2\pi i}{b}}\). Then we have_ \[\frac{1}{b}\sum_{k=1}^{b-1}F(\xi_{b}^{k})=\operatorname*{CT}_{\lambda}\frac{1}{\underline{1-\lambda^{b}}}\,F(\lambda)-\frac{1}{b}F(1).\] Proof.: The proposition follows from the following known identity: \[\operatorname*{CT}_{\lambda}\frac{1}{\underline{1-\lambda^{b}}}\,F(\lambda)=\operatorname*{CT}_{\lambda}\frac{1}{\underline{(1-\lambda)\cdot(1-\xi_{b}^{-1}\lambda)\cdots(1-\xi_{b}^{1-b}\lambda)}}\,F(\lambda)=\frac{1}{b}\sum_{k=1}^{b-1}F(\xi_{b}^{k})+\frac{1}{b}F(1).\] **Corollary 4**.: _Let \(d\geq 1\). Then \(s_{n}(a_{1},a_{2},\ldots,a_{d};b)\) can be written as_ \[s_{n}(a_{1},a_{2},\ldots,a_{d};b)=\Big{(}Q_{z}-\frac{1}{b(1-z_{1})\cdots(1-z_{d})}\Big{)}\Big{|}_{z_{i}=1},\] _where \(Q_{z}=\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{\underline{(1-\lambda^{b})}\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}.\)_ Proof.: Apply Proposition 3 to \(F(\lambda)=\frac{\lambda^{n}}{(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}.\) We obtain \[\frac{1}{b}\sum_{k=1}^{b-1}\frac{\xi_{b}^{kn}}{(1-\xi_{b}^{ka_{1}}z_{1})\cdots(1-\xi_{b}^{ka_{d}}z_{d})}=\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{\underline{(1-\lambda^{b})}\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}-\frac{1}{b(1-z_{1})\cdots(1-z_{d})}.\] Taking limits at \(z_{i}=1\) for all \(i\) gives \[s_{n}(a_{1},a_{2},\dots,a_{d};b)=\Big{(}\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{\underline{(1-\lambda^{b})}\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}-\frac{1}{b(1-z_{1})\cdots(1-z_{d})}\Big{)}\Big{|}_{z_{i}=1},\] as desired. ### Steps of the algorithm We use the package LattE to compute \(s_{n}(a_{1},a_{2},\dots,a_{d};b)\). First, by adding a slack variable \(z_{0}\) we can write \[Q_{z}=\Big{(}\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{(1-\lambda^{b}z_{0})\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}\Big{)}\Big{|}_{z_{0}=1}.\] For convenience, we let \[\widetilde{Q_{z}}:=\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{(1-\lambda^{b}z_{0})\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}.\] Observe that \[\widetilde{Q_{z}}=\sum_{\alpha\in P\cap\mathbb{Z}^{d+1}}z^{\alpha}\] enumerates the lattice points in the vertex simplicial cone \(P\) defined by the vertex \(v=(-\frac{n}{b},0,\dots,0)^{t}\) and whose generators are the column vectors of \[H=\left(\begin{array}{cccc}-a_{1}&-a_{2}&\dots&-a_{d}\\ b&0&\dots&0\\ 0&b&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&b\end{array}\right).\] Then we use LattE to write \[\widetilde{Q_{z}}=\sum_{i}\widetilde{Q_{i}}(z_{0},z_{1},\cdots,z_{d})\] as a short sum of simple rational functions, and compute the limit \[\Big{(}\sum_{i}\widetilde{Q_{i}}(z_{0},z_{1},\cdots,z_{d})-\frac{1}{b(1-z_{1})\cdots(1-z_{d})}\Big{)}\Big{|}_{z_{j}=1}.\] This is equal to the desired \(s_{n}(a_{1},a_{2},\dots,a_{d};b)\). **Algorithm 5**.: _Now we will give an algorithm for computing the Fourier-Dedekind sum \(s_{n}(a_{1},a_{2},\dots,a_{d};b)\)._ 1. _Add the slack variable_ \(z_{0}\) _to_ \(Q_{z}\) _and get_ \(\widetilde{Q_{z}}=\operatorname*{CT}_{\lambda}\frac{\lambda^{n}}{(1-\lambda^{b}z_{0})\cdot(1-\lambda^{a_{1}}z_{1})\cdots(1-\lambda^{a_{d}}z_{d})}\)_._ 2. _We can write_ \(\widetilde{Q_{z}}=\sum_{i}\widetilde{Q_{i}}(z_{0},\dots,z_{d})\) _by the LattE package._ 3. 
_Eliminate slack variables_ \(z_{j}\) _by using either LattE or CTEuclid to give the output._ We illustrate the basic idea by using the (elementary) CTEuclid algorithm for a replacement of Step 2.3. **Example 6**.: _Compute \(s_{4}(4,3,5;7)\)._ _By definition of Fourier-Dedekind sum, we have \(s_{4}(4,3,5;7)=\frac{1}{7}\sum\limits_{k=1}^{6}\frac{\xi^{4k}}{(1-\xi_{7}^{4k}) (1-\xi_{7}^{5k})(1-\xi_{7}^{3k})}\), where \(\xi_{7}=e^{\frac{2\pi i}{7}}\)._ \[Q_{z} =\operatorname{CT}\frac{\lambda^{4}}{(1-\lambda^{7})\cdot(1- \lambda^{4}z_{1})(1-\lambda^{5}z_{2})(1-\lambda^{3}z_{3})}\] \[=\frac{{z_{1}}^{9}}{({z_{1}}^{3}-z_{2})\left({z_{1}}{z_{3}}-1 \right)({z_{1}}^{7}-1)}-\frac{z_{3}}{(z_{1}z_{3}-1)\left({z_{2}}{z_{3}}^{3}-1 \right)({z_{3}}^{7}-1)}\] \[-\frac{{z_{1}}^{3}{z_{2}}^{2}}{({z_{1}}{z_{3}}-1)\left({z_{1}}{z _{2}}^{2}-1\right)({z_{1}}^{3}-z_{2})}-\frac{{z_{2}}^{2}{z_{3}}}{(z_{1}z_{3}-1 )\left({z_{2}}^{2}-z_{3}\right)({z_{2}}{z_{3}}^{3}-1)}\] \[+\frac{{z_{2}}^{4}}{({z_{1}}{z_{2}}^{2}-1)\left({z_{2}}^{2}-z_{3} \right)({z_{2}}^{7}-1)}.\] _Then_ \[s_{4}(4,3,5;7) =\left(Q_{z}-\frac{1}{(1-z_{1})(1-z_{2})(1-z_{3})}\right)\Big{|} _{z_{i}=1}\] \[=\left(\frac{{z_{1}}^{9}}{({z_{1}}^{3}-z_{2})\left({z_{1}}{z_{3} }-1\right)({z_{1}}^{7}-1)}-\frac{z_{3}}{(z_{1}z_{3}-1)\left({z_{2}}{z_{3}}^{3} -1\right)({z_{3}}^{7}-1)}\right.\] \[-\frac{{z_{1}}^{3}{z_{2}}^{2}}{({z_{1}}{z_{3}}-1)\left({z_{1}}{z _{2}}^{2}-1\right)({z_{1}}^{3}-z_{2})}-\frac{{z_{2}}^{2}{z_{3}}}{(z_{1}z_{3}-1 )\left({z_{2}}^{2}-z_{3}\right)({z_{2}}{z_{3}}^{3}-1)}\] \[+\frac{{z_{2}}^{4}}{({z_{1}}{z_{2}}^{2}-1)\left({z_{2}}^{2}-z_{3} \right)({z_{2}}^{7}-1)}-\frac{1}{(1-z_{1})(1-z_{2})(1-z_{3})}\Big{)}\Big{|}_{z _{i}=1}\] \[=\frac{1}{7}.\]
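The defining sum (1.1) also admits a direct brute-force evaluation, which is handy for sanity-checking an implementation of the constant-term approach on small inputs. The short Python sketch below is our illustration, not part of the paper: it evaluates (1.1) numerically and, on the data of Example 6, can be compared against the value \(s_{4}(4,3,5;7)=\frac{1}{7}\) obtained above. Since it costs \(O(bd)\) arithmetic operations, it is exponential in the bit size of \(b\) and is no substitute for the polynomial time algorithm.

```python
import cmath

def fourier_dedekind_sum(n, a, b):
    """Brute-force evaluation of s_n(a_1,...,a_d; b) from definition (1.1).

    Assumes b > 1 and gcd(a_i, b) = 1 for all i, so no denominator vanishes.
    """
    xi = cmath.exp(2j * cmath.pi / b)   # primitive b-th root of unity
    total = 0 + 0j
    for k in range(1, b):
        denom = 1 + 0j
        for ai in a:
            denom *= 1 - xi ** (k * ai)
        total += xi ** (k * n) / denom
    return total / b

# Data of Example 6: the output should agree (up to round-off) with 1/7.
print(fourier_dedekind_sum(4, [4, 3, 5], 7))
```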
2304.13526
A more general framework than the delta-primary hyperideals
In this paper we aim to study the notion of (t,n)-absorbing delta-semiprimary hyperideal in a Krasner (m,n)-hyperring.
Mahdi Anbarloei
2023-04-26T13:01:45Z
http://arxiv.org/abs/2304.13526v1
# A more general framework than the \(\delta\)-primary hyperideals ###### Abstract. The \(\delta\)-primary hyperideal is a concept unifying the \(n\)-ary prime and \(n\)-ary primary hyperideals under one frame, where \(\delta\) is a function which assigns to each hyperideal \(Q\) of \(G\) a hyperideal \(\delta(Q)\) of the same hyperring with specific properties. In this paper, for a commutative Krasner \((m,n)\)-hyperring \(G\) with scalar identity \(1\), we aim to introduce and study the notion of \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals, which is a more general structure than \(\delta\)-primary hyperideals. We say that a proper hyperideal \(Q\) of \(G\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal if whenever \(k(a_{1}^{tn-t+1})\in Q\) for \(a_{1}^{tn-t+1}\in G\), then there exist \((t-1)n-t+2\) of the \(a_{i}\)s whose \(k\)-product is in \(\delta(Q)\). Furthermore, we extend the concept to weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals. Several properties and characterizations of these classes of hyperideals are determined. In particular, after defining strongly weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals, we present the condition under which a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal is strongly weakly \((t,n)\)-absorbing \(\delta\)-semiprimary. Moreover, we show that \(k(Q^{(tn-t+1)})=0\) when the weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal \(Q\) is not \((t,n)\)-absorbing \(\delta\)-semiprimary. Also, we investigate the stability of the concepts under intersection, homomorphism and cartesian product of hyperrings. _Key words and phrases_: \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal, weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal, \(\delta\)-\((t,n)\)-zero. ## 1. Introduction In Theorem 2.30, we show that if a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal \(Q\) of \(G\) is not \((t,n)\)-absorbing \(\delta\)-semiprimary, then \(k(Q^{(tn-t+1)})=0\). Let \(Q\) be a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(0\neq k(Q_{1}^{tn-t+1})\subseteq Q\) for some hyperideals \(Q_{1}^{tn-t+1}\) of \(G\). It is shown (Theorem 2.25) that if \(Q\) is a free \(\delta\)-\((t,n)\)-zero with respect to \(k(Q_{1}^{tn-t+1})\), then the \(k\)-product of \((t-1)n-t+2\) of the \(Q_{i}\) is a subset of \(\delta(Q)\). Moreover, the stability of these concepts is examined under intersection, homomorphism and cartesian product of hyperrings. ## 2. (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals Throughout this section, \(G\) is a commutative Krasner \((m,n)\)-hyperring with scalar identity \(1\). Initially, we give the definition of \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals of \(G\). **Definition 2.1**.: Let \(\delta\) be a hyperideal expansion of \(G\) and \(t\) be a positive integer. A proper hyperideal \(Q\) of \(G\) is called an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal if whenever \(k(a_{1}^{tn-t+1})\in Q\) for \(a_{1}^{tn-t+1}\in G\), then there exist \((t-1)n-t+2\) of the \(a_{i}\)s whose \(k\)-product is in \(\delta(Q)\). **Example 2.2**.: For every \(n\)-ary prime hyperideal of \(G\), we have \(n\)-ary prime \(\implies\) \(n\)-ary \(\delta\)-primary \(\implies\) \((t,n)\)-absorbing \(\delta\)-primary \(\implies\) \((t,n)\)-absorbing \(\delta\)-semiprimary. The next example shows that the reverse implications in Example 2.2 do not hold in general. 
**Example 2.3**.: In the Krasner \((2,2)\)-hyperring \((G=[0,1],\boxplus,\circ)\) that "\(\circ\)" is the usual multiplication on real numbers and \(2\)-ary hyperoperation "\(\boxplus\)" is defined by \[a\boxplus b=\begin{cases}\{max\{a,b\}\},&\text{if $a\neq b$}\\ \text{[0,a]},&\text{if $a=b$},\end{cases}\] the hyperideal \(Q=[0,0.5]\) is a \((2,2)\)-absorbing \(\delta_{1}\)-semiprimary hyperideal of \(G\) but it is not \(2\)-ary prime. **Theorem 2.4**.: _Let \(Q\) be an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) with \(rad(\delta(Q))\subseteq\delta(rad(Q))\). Then \(rad(Q)\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\)._ Proof.: Let \(k(a_{1}^{tn-t+1})\in rad(Q)\) for \(a_{1}^{tn-t+1}\in G\) such that all products of \((t-1)n-t+2\) of the \(a_{i}\)s, other than \(k(a_{1}^{(t-1)n-t+2})\), are not in \(\delta(rad(Q))\). By the assumption, we conclude that none of the \(k\)-products of the \(a_{i}^{\prime}\)s are in \(rad(\delta(Q))\). From \(k(a_{1}^{tn-t+1})\in rad(Q)\), it follows that there exists \(s\in\mathbb{N}\) with \(k(k(a_{1}^{tn-t+1})^{(s)},1^{(n-s)})\in Q\), for \(s\leq n\) or \(k_{(l)}(k(a_{1}^{tn-t+1})^{(s)})\in Q\), for \(s>n\) and \(s=l(n-1)+1\). In the first possibility, we get \(k(k(a_{1})^{(s)},k(a_{2})^{(s)},\cdots,k(a_{tn-t+1})^{(s)},1^{(n-s)})\in Q\). Since \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), we conclude that \(k(k(a_{1})^{(s)},k(a_{2})^{(s)},\cdots,k(a_{(t-1)n-t+2})^{(s)},1^{(n-s)})\) \(=k(k(a_{1}^{(t-1)n-t+2})^{(s)},1^{(n-s)})\in\delta(Q)\) because none of the \(k\)-products of the \(a_{i}\)s are in \(rad(\delta(Q))\). Since \(k(a_{1}^{(t-1)n-t+2})\in rad(\delta(Q))\) and \(rad(\delta(Q))\subseteq\delta(rad(Q))\), then we have \(k(a_{1}^{(t-1)n-t+2})\in\delta(rad(Q))\). If \(k_{(l)}(k(a_{1}^{tn-t+1})^{(s)})\in Q\), for \(s>n\) and \(s=l(n-1)+1\), then we are done similarly. Thus \(rad(Q)\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) The following result is a direct consequence of the previous theorem. **Corollary 2.5**.: If \(Q\) is an \((t,n)\)-absorbing \(\delta_{1}\)-semiprimary hyperideal of \(G\), then \(rad(Q)\) is an \((t,n)\)-absorbing hyperideal of \(G\). From [1], the hyperideal generated by an element \(g\) in \(G\) is defined by \(<g>=k(G,g,1^{(n-2)})=\{k(r,g,1^{(n-2)})\ |\ r\in G\}.\) The following theorem will give us a characterization of \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals. **Theorem 2.6**.: _Every proper hyperideal is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) if and only if every proper principal hyperideal is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\)._ Proof.: \(\Longrightarrow\) It is obvious. \(\Longleftarrow\) Assume that \(Q\) is a proper hyperideal of \(G\) and \(k(a_{1}^{tn-t+1})\in Q\) for \(a_{1}^{tn-t+1}\in G\). Therefore \(k(a_{1}^{tn-t+1})\in<k(a_{1}^{tn-t+1})>\). Since every proper principal hyperideal is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), there exist \((t-1)n-t+2\) of the \(a_{i}^{\prime}\)s whose \(k\)-product is in \(\delta(<k(a_{1}^{tn-t+1})>)\subseteq\delta(Q)\). Hence \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). Recall from [2] that a hyperideal expansion \(\delta\) of \(G\) is called intersection preserving if it satisfies \(\delta(P\cap Q)=\delta(P)\cap\delta(Q)\), for all hyperideals \(P\) and \(Q\) of \(G\). For example, hyperideal expansion \(\delta_{1}\) of \(G\) is intersection preserving. 
**Theorem 2.7**.: _Let the hyperideal expansion \(\delta\) of \(G\) be intersection preserving. If \(Q_{1}^{s}\) are \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals of \(G\) Such that \(\delta(Q_{i})=P\) for each \(1\leq i\leq s\), then \(Q=\bigcap_{i=1}^{s}Q_{i}\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) with \(\delta(Q)=P\)._ Proof.: Assume that \(k(a_{1}^{tn-t+1})\in Q\) for \(a_{1}^{tn-t+1}\in G\) such that \(k(a_{1}^{(t-1)n-t+2})\notin\delta(Q)\). Since \(\delta(Q)=\delta(\cap_{i=1}^{s}Q_{i})=\cap_{i=1}^{s}\delta(Q_{i})=P\), then there exists \(1\leq u\leq s\) such that \(k(a_{1}^{(t-1)n-t+2})\notin\delta(Q_{u})\). Since \(Q_{u}\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(k(a_{1}^{tn-t+1})\in Q_{u}\), then there is a \(k\)-product of \((t-1)n-t+2\) of the \(a_{i}^{\prime}\)s is in \(\delta(Q_{u})=P=\delta(Q)\). Thus \(Q=\bigcap_{i=1}^{s}Q_{i}\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) with \(\delta(Q)=P\). Let \((G_{1},h_{1},k_{1})\) and \((G_{2},h_{2},k_{2})\) be two Krasner \((m,n)\)-hyperrings such that \(1_{G_{1}}\) and \(1_{G_{2}}\) be scalar identities of \(G_{1}\) and \(G_{2}\), respectively. Then \((G_{1}\times G_{2},h=h_{1}\times h_{2},k=k_{1}\times k_{2})\) is a Krasner \((m,n)\)-hyperring where \(h_{1}\times h_{2}((a_{1},b_{1}),\cdots,(a_{m},b_{m}))=\{(a,b)\ |\ a\in h_{1}(a_{1}^{m}),b\in h_{2}(b_{1}^{m})\}\), \(k_{1}\times k_{2}((a_{1},b_{1}),\cdots,(a_{n},b_{n}))=(k_{1}(a_{1}^{n}),k_{2} (b_{1}^{n}))\), for all \(a_{i}\in G_{1}\) and \(b_{i}\in G_{2}\)[6]. **Theorem 2.8**.: _Let \(\delta_{1}\) and \(\delta_{2}\) be two hyperideal expansions of Krasner \((m,n)\)-hyperrings \(G_{1}\) and \(G_{2}\), respectively, such that \(\delta(Q_{1}\times Q_{2})=\delta_{1}(Q_{1})\times\delta_{2}(Q_{2})\) for hyperideals \(Q_{1}\) and \(Q_{2}\) of \(G_{1}\) and \(G_{2}\), respectively. If \(Q=Q_{1}\times Q_{2}\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times G_{2}\), then either \(Q_{1}\) is an \((t+1,n)\)-absorbing \(\delta_{1}\)-semiprimary hyperideal of \(G_{1}\) and \(\delta_{2}(Q_{2})=G_{2}\) or \(Q_{2}\) is an \((t+1,n)\)-absorbing \(\delta_{2}\)-semiprimary hyperideal of \(G_{2}\) and \(\delta_{1}(Q_{1})=G_{1}\) or \(Q_{i}\) is an \((t,n)\)-absorbing \(\delta_{i}\)-semiprimary hyperideal of \(G_{i}\) for each \(i\in\{1,2\}\)._ Proof.: Let \(Q=Q_{1}\times Q_{2}\) be an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times G_{2}\). Assume that \(\delta_{1}(Q_{1})\neq G_{1}\) and \(\delta_{2}(Q_{2})=G_{2}\). Let us suppose that \(k_{1}(a_{1}^{(t+1)n-t})\in Q_{1}\) for some \(a_{1}^{(t+1)n-t}\in G_{1}\) such that all products of \(tn-t+1\) of the \(a_{i}\)'s except \(k_{1}(a_{1}^{tn-t+1})\) are not in \(\delta(Q_{1})\). Note that \(k((a_{1},0),\cdots,(a_{(t+1)n-t},0))\in Q\) and all products of \(tn-t+1\) of the \((a_{i},0)\)'s are not in \(\delta(Q)\). Since \(Q\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), we get \(k((a_{1},0),\cdots,(a_{tn-t+1},0))\in\delta(Q)=\delta_{1}(Q_{1})\times\delta_{ 2}(Q_{2})\) which means \(k_{1}(a_{1}^{tn-t+1})\in\delta(Q_{1})\). Thus \(Q_{1}\) is an \((t+1,n)\)-absorbing \(\delta_{1}\)-semiprimary hyperideal of \(G_{1}\). Similiar for the second assertion. For the third assertion, assume \(\delta_{1}(Q_{1})\neq G_{1}\) and \(\delta_{2}(Q_{2})\neq G_{2}\). 
Moreover, let us suppose that \(Q_{1}\) is not an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\) and \(k_{1}(a_{1}^{tn-t+1})\in Q_{1}\). We define the following elements of \(G\): \(x_{1}=(a_{1},1_{G_{2}}),x_{2}=(a_{2},1_{G_{2}}),\cdots,x_{tn-t+1}=(a_{tn-t+1}, 1_{G_{2}}),x_{(t-1)n-t+2}=(1_{G_{1}},0)\). Therefore we have \(k(x_{1}^{(t-1)n-t+2})=(k_{1}(a_{1}^{tn-t+1}),0)\in Q\), \(k(x_{1}^{tn-t+1})=(k_{1}(a_{1}^{tn-t+1}),1_{G_{2}})\notin\delta(Q)\) and \(k(x_{1},\cdots,\hat{x}_{i},\cdots,x_{(t-1)n-t+2})=(k_{1}(a_{1},\cdots,\hat{a}_ {i},\cdots,a_{(t-1)n-t+2}),0)\notin\delta(Q)\) for some \(1\leq i\leq tn-t+1\), a contradiction. Thus \(Q_{1}\) is an \((t,n)\)-absorbing \(\delta_{1}\)-semiprimary hyperideal of \(G_{1}\). Similarly, we conclude that \(Q_{2}\) is an \((t,n)\)-absorbing \(\delta_{2}\)-semiprimary hyperideal of \(G_{2}\) **Theorem 2.9**.: _Let \(\delta_{1},\cdots,\delta_{tn-t+1}\) be hyperideal expansions of Krasner \((m,n)\)-hyperrings \(G_{1},\cdots,G_{tn-t+1}\) such that \(\delta(Q_{1}\times\cdots\times Q_{tn-t+1})=\delta_{1}(Q_{1})\times\cdots\times \delta_{tn-t+1}(Q_{tn-t+1})\) for hyperideals \(Q_{1},\cdots,Q_{tn-t+1}\) of \(G_{1},\cdots,G_{tn-t+1}\), respectively. If \(Q=Q_{1}\times\cdots\times Q_{tn-t+1}\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\), then either \(Q_{u}\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{u}\) for some \(1\leq u\leq tn-t+1\) and \(\delta_{i}(Q_{i})=G_{i}\) for each \(1\leq i\leq tn-t+1\) and \(i\neq u\) or \(Q_{u}\) and \(Q_{v}\) are \((t,n)\)-absorbing \(\delta_{u,v}\)-semiprimary hyperideals of \(G_{u}\) and \(G_{v}\), respectively, for some \(u,v\in\{1,\cdots,tn-t+1\}\) and \(\delta_{i}(Q_{i})=G_{i}\) for all \(1\leq i\leq tn-t+1\) but \(i\neq u,v\)._ Proof.: It can be seen that the idea is true in a similar manner to the proof of Theorem 2.8. Now, we want to extend the notion of \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals to weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal. Although different from each other in many aspects, they share quite a number of similar properties as well. **Definition 2.10**.: Let \(\delta\) be a hyperideal expansion of \(G\) and \(t\) be a positive integer. A proper hyperideal \(Q\) of \(G\) refers to a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal if \(a_{1}^{tn-t+1}\in G\) and \(0\neq k(a_{1}^{tn-t+1})\in Q\), then there exist \((t-1)n-t+2\) of the \(a_{i}\)'s whose \(k\)-product is in \(\delta(Q)\). **Example 2.11**.: Suppose that \(\mathbb{Z}_{12}\) is the set of all congruence classes of integers modulo \(12\) and \(H=\{1,5,7,11\}\) is multiplicative subgroup of units \(\mathbb{Z}_{12}\). Construct \(G\) as \(\mathbb{Z}_{12}/H\). Then we have \(G=\{\bar{0},\bar{1},\bar{2},\bar{3},\bar{4},\bar{6}\}\) in which \(\bar{0}=\{0\}\), \(\bar{1}=\{1,5,7,11\}\), \(\bar{2}=\bar{10}=\{2,10\}\), \(\bar{3}=\bar{9}=\{3,9\}\), \(\bar{4}=\bar{8}=\{4,8\}\), \(\bar{6}=\{6\}\). Consider Krasner hyperring \((G,\oplus,\star)\) that for all \(\bar{a},\bar{b}\in G\), \(\bar{a}\star\bar{b}=\overline{ab}\) and \(2\)-ary hyperoperation \(\oplus\) is defined as follows: It is easy to see that the hyperideal \(Q=\{\bar{0},\bar{2},\bar{4},\bar{6}\}\) of \(G\) is a \((2,2)\)-absorbing \(\delta_{1}\)-semiprimary. 
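As a concrete companion to Example 2.11, the six classes of \(G=\mathbb{Z}_{12}/H\) and the induced product \(\bar{a}\star\bar{b}=\overline{ab}\) can be reconstructed mechanically. The Python sketch below is only an illustration added here (it is not from the paper): it recovers the classes \(\bar{0},\bar{1},\bar{2},\bar{3},\bar{4},\bar{6}\) as orbits of the unit group \(H=\{1,5,7,11\}\) and checks that \(\star\) is well defined on classes; it does not implement the hyperaddition \(\oplus\) or verify the \((2,2)\)-absorbing \(\delta_{1}\)-semiprimary property of \(Q\).

```python
# Orbits of Z_12 under multiplication by the unit group H = {1, 5, 7, 11}.
H = {1, 5, 7, 11}

def orbit(a):
    """Class of a in G = Z_12 / H, i.e. the set a*H (mod 12)."""
    return frozenset((a * h) % 12 for h in H)

classes = sorted({orbit(a) for a in range(12)}, key=min)
print([sorted(c) for c in classes])
# [[0], [1, 5, 7, 11], [2, 10], [3, 9], [4, 8], [6]]  -- the classes listed in Example 2.11

# The induced product  a-bar * b-bar := (ab)-bar  does not depend on representatives:
for a in range(12):
    for b in range(12):
        target = orbit((a * b) % 12)
        assert all(orbit((x * y) % 12) == target for x in orbit(a) for y in orbit(b))
```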
**Theorem 2.12**.: _If \(Q\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), then \(Q\) is (weakly) \((v,n)\)-absorbing \(\delta\)-semiprimary for all \(v>n\)._ Proof.: By using an argument similar to that in the proof of Theorem 4.4 in [8], one can complete the proof. **Theorem 2.13**.: _Let \(Q\) be a proper hyperideal of \(G\). If \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing hyperideal of \(G\), then \(Q\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\)._ Proof.: Let \((0\neq k(a_{1}^{tn-t+1})\in Q)\;k(a_{1}^{tn-t+1})\in Q\) such that all products of \((t-1)n-t+2\) of the \(a_{i}\)'s, other than \(k(a_{1}^{(t-1)n-t+2})\), are not in \(\delta(Q)\). Since \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing hyperideal of \(G\) and \(Q\subseteq\delta(Q)\), we conclude that \(k(a_{1}^{(t-1)n-t+2})\in\delta(Q)\). This shows that \(Q\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). **Theorem 2.14**.: _Let \(Q\) be a proper hyperideal of \(G\) such that \(\delta(\delta(Q))=\delta(Q)\). Then \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing hyperideal of \(G\) if and only if \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\)._ Proof.: \(\Longrightarrow\) Assume that \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing hyperideal of \(G\). Since \(\delta(\delta(Q))=\delta(Q)\), we are done by Theorem 2.13. \(\Longleftarrow\) Let \(\delta(Q)\) be a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). Suppose that \((0\neq k(a_{1}^{tn-n+1})\in\delta(Q))\;k(a_{1}^{tn-n+1})\in\delta(Q)\). Since \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), then there exist \((t-1)n-t+2\) of the \(a_{i}\)'s whose \(k\)-product is in \(\delta(\delta(Q))\). Since \(\delta(\delta(Q))=\delta(Q)\), then the \(k\)-product of the \((t-1)n-t+2\) of the \(a_{i}\)'s is in \(\delta(Q)\) which means \(\delta(Q)\) is a (weakly) \((t,n)\)-absorbing hyperideal of \(G\). **Theorem 2.15**.: _Let \(Q\) be a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(P\) be a proper hyperideal of \(G\) such that \(P\subseteq Q\). If \(\delta(Q)=\delta(P)\), then \(P\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\)._ Proof.: Assume that \((0\neq k(a_{1}^{tn-t+1})\in P)\;k(a_{1}^{tn-t+1})\in P\) for \(a_{1}^{tn-t+1}\in G\). By the assumption, we get \((0\neq k(a_{1}^{tn-t+1})\in Q)\;k(a_{1}^{tn-t+1})\in Q\) which implies there exist \((t-1)n-t+2\) of the \(a_{i}\)'s whose \(k\)-product is in \(\delta(Q)\) because \(Q\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). From \(\delta(Q)=\delta(P)\), it follows that the \(k\)-product of \((t-1)n-t+2\) of the \(a_{i}\)'s is in \(\delta(P)\) which means \(P\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) **Definition 2.16**.: Let \(Q\) be a proper hyperideal of \(G\). \(Q\) refers to a strongly (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal if \((0\neq k(Q_{1}^{tn-t+1})\subseteq Q)\)\(k(Q_{1}^{tn-t+1})\subseteq Q\) for some hyperideals \(Q_{1}^{tn-t+1}\) of \(G\), then there exist \((t-1)n-t+2\) of \(Q_{i}\)s whose \(k\)-product is a subset of \(\delta(Q)\). **Definition 2.17**.: Assume that \(G\) is a commutative Krasner \((m,2)\)-hyperring and \(Q\) is a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). 
Then \((x,y,z)\) is said to be an \(\delta\)-\((2,2)\)-zero of \(Q\) for some \(x,y,z\in G\) if \(k(x,y,z)=0\), \(k(x,y)\notin\delta(Q)\), \(k(y,z)\notin\delta(Q)\) and \(k(x,z)\notin\delta(Q)\). **Theorem 2.18**.: _Let \(G\) be a commutative Krasner \((m,2)\)-hyperring, \(Q\) a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(k(Q_{1},x,y)\subseteq Q\) for some \(x,y\in G\) and a hyperideal \(Q_{1}\) of \(G\). If \((q,x,y)\) is not a \(\delta\)-\((2,2)\)-zero of \(Q\) for all \(q\in Q_{1}\) and \(k(x,y)\notin\delta(Q)\), then \(k(Q_{1},x)\subseteq\delta(Q)\) or \(k(Q_{1},y)\subseteq\delta(Q)\)._ Proof.: Let \(k(Q_{1},x,y)\subseteq Q\) for some \(x,y\in G\) and a hyperideal \(Q_{1}\) of \(G\) but \(k(x,y)\notin\delta(Q)\), \(k(Q_{1},x)\nsubseteq\delta(Q)\) and \(k(Q_{1},y)\nsubseteq\delta(Q)\). Then we have \(k(q_{1},x)\nsubseteq\delta(Q)\) and \(k(q_{2},y)\nsubseteq\delta(Q)\) for some \(q_{1},q_{2}\in Q_{1}\). Since \((q_{1},x,y)\) is not a \(\delta\)-\((2,2)\)-zero of \(Q\) and \(k(q_{1},x,y)\in Q\), we get \(k(q_{1},y)\in\delta(Q)\). Similarly, we have \(k(q_{2},x)\in\delta(Q)\). Note that \(k(h(q_{1},q_{2},0^{(m-2)}),x,y)=h(k(q_{1},x,y),k(q_{2},x,y),0^{(m-2)})\subseteq Q\). Then we obtain \(k(h(q_{1},q_{2},0^{(m-2)}),x)=h(k(q_{1},x),k(q_{2},x),0^{(m-2)})\subseteq\delta (Q)\) or \(k(h(q_{1},q_{2},0^{(m-2)}),y)=h(k(q_{1},y),k(q_{2},y),0^{(m-2)})\subseteq\delta (Q)\). This follows that \(k(q_{1},x)\in h(-k(q_{2},x),0^{(m-1)})\subseteq\delta(Q)\) or \(k(q_{2},y)\in h(-k(q_{1},y),0^{(m-1)})\subseteq\delta(Q)\) which both of them are a contradiction. Consequently, \(k(Q_{1},x)\subseteq\delta(Q)\) or \(k(Q_{1},y)\subseteq\delta(Q)\). **Theorem 2.19**.: _Let \(G\) be a commutative Krasner \((m,2)\)-hyperring, \(Q\) a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(k(Q_{1},Q_{2},x)\subseteq Q\) for some \(x\in G\) and two hyperideals \(Q_{1},Q_{2}\) of \(G\). If \((q_{1},q_{2},x)\) is not a \(\delta\)-\((2,2)\)-zero of \(Q\) for all \(q_{1}\in Q_{1}\) and \(q_{2}\in Q_{2}\), then \(k(Q_{1},x)\subseteq\delta(Q)\) or \(k(Q_{2},x)\subseteq\delta(Q)\) or \(k(Q_{1},Q_{2})\subseteq\delta(Q)\)._ Proof.: Let \(k(Q_{1},Q_{2},x)\subseteq Q\), \(k(Q_{1},x)\nsubseteq\delta(Q)\), \(k(Q_{2},x)\nsubseteq\delta(Q)\) and \(k(Q_{1},Q_{2})\nsubseteq\delta(Q)\). Then we get \(k(q,x)\notin\delta(Q)\) and \(k(q_{1},Q_{2})\nsubseteq\delta(Q)\) for some \(q,q_{1}\in Q_{1}\). By Theorem 2.18, we conclude that \(k(q,Q_{2})\subseteq\delta(Q)\) because \(k(q,Q_{2},x)\subseteq Q\), \(k(q,x)\notin\delta(Q)\) and \(k(Q_{2},x)\nsubseteq\delta(Q)\). Also, from Theorem 2.18, we obtain \(k(q_{1},x)\in\delta(Q)\). Note that \(k(h(q,q_{1},0^{(m-2)}),Q_{2},x)=h(k(q,Q_{2},x),k(q_{1},Q_{2},x),0^{(m-2)}) \subseteq Q\). Then we have \(k(h(q,q_{1},0^{(m-2)}),Q_{2})=h(k(q,Q_{2}),k(q_{1},Q_{2}),0^{(m-2)})\subseteq\delta (Q)\) which means \(k(q_{1},Q_{2})\subseteq h(-k(q,Q_{2}),0^{(m-1)})\subseteq\delta(Q)\) or \(k(h(q,q_{1},0^{(m-2)}),x)=h(k(q,x),k(q_{1},x),0^{(m-2)})\subseteq\delta(Q)\) which implies \(k(q,x)\in h(-k(q_{1},x),0^{(m-1)})\subseteq\delta(Q)\). This is a contradiction. Hence \(k(Q_{1},x)\subseteq\delta(Q)\) or \(k(Q_{2},x)\subseteq\delta(Q)\) or \(k(Q_{1},Q_{2})\subseteq\delta(Q)\). **Definition 2.20**.: Suppose that \(G\) is a commutative Krasner \((m,2)\)-hyperring and \(Q_{1}^{3},Q\) be some proper hyperideals of \(G\) such that \(Q\) is a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). 
\(Q\) is said to be a free \(\delta\)-\((2,2)\)-zero with respect to \(k(Q_{1}^{3})\) if \((q_{1},q_{2},q_{3})\) is not a \(\delta\)-\((2,2)\)-zero of \(Q\) for every \(q_{1}\in Q_{1}\), \(q_{2}\in Q_{2}\) and \(q_{3}\in Q_{3}\). **Theorem 2.21**.: _Let \(G\) be a commutative Krasner \((m,2)\)-hyperring, \(Q\) a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(0\neq k(Q_{1},Q_{2},Q_{3})\subseteq Q\) for some hyperideals \(Q_{1}^{3}\) of \(G\). If \(Q\) is a free \(\delta\)-\((2,2)\)-zero with respect to \(k(Q_{1}^{3})\), then \(k(Q_{1}^{2})\subseteq\delta(Q)\) or \(k(Q_{2}^{3})\subseteq\delta(Q)\) or \(k(Q_{1},Q_{3})\subseteq\delta(Q)\)._ Proof.: Suppose that \(k(Q_{1}^{3})\subseteq Q\) but \(k(Q_{1}^{2})\nsubseteq\delta(Q)\) or \(k(Q_{2}^{3})\nsubseteq\delta(Q)\) or \(k(Q_{1},Q_{3})\nsubseteq\delta(Q)\). This implies that \(k(q,Q_{2})\nsubseteq\delta(Q)\) and \(k(q_{1},Q_{3})\nsubseteq\delta(Q)\) for some \(q,q_{1}\in Q_{1}\). By Theorem 2.19, we get \(k(q,Q_{3})\subseteq\delta(Q)\) because \(k(q,Q_{2}^{3})\subseteq Q\), \(k(Q_{2}^{3})\nsubseteq\delta(Q)\) and \(k(q,Q_{2})\nsubseteq\delta(Q)\). Also, from Theorem 2.19, we obtain \(k(q_{1},Q_{2})\nsubseteq\delta(Q)\) as \(k(q_{1},Q_{2}^{3})\subseteq Q\), \(k(Q_{2}^{3})\nsubseteq\delta(Q)\) and \(k(q_{1},Q_{2})\nsubseteq\delta(Q)\). Since \(k(h(q,q_{1}),Q_{2}^{3})\subseteq Q\) and \(k(Q_{2}^{3})\nsubseteq\delta(Q)\), we have \(k(h(q,q_{1},0^{(m-2)}),Q_{2})=h(k(q,Q_{2}),k(q_{1},Q_{2}),0^{(m-2)})\subseteq \delta(Q)\) or \(k(h(q,q_{1},0^{(m-2)}),Q_{3})=h(k(q,Q_{3}),k(q_{1},Q_{3}),0^{(m-2)})\subseteq \delta(Q)\). In the first case, we conclude that \(k(q,Q_{2})\in h(-k(q_{1},Q_{2}),0^{(m-1)})\subseteq\delta(Q)\), a contradiction. Moreover, the second case leads to a contradiction because \(k(q_{1},Q_{3})\in h(-k(q,Q_{3}),0^{(m-1)})\subseteq\delta(Q)\). Thus \(k(Q_{1}^{2})\subseteq\delta(Q)\) or \(k(Q_{2}^{3})\subseteq\delta(Q)\) or \(k(Q_{1},Q_{3})\subseteq\delta(Q)\). **Definition 2.22**.: Assume that \(Q\) is a weakly \((k,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). Then \((a_{1},\cdots,a_{tn-t+1})\) is called \(\delta\)-\((t,n)\)-zero of \(Q\) if \(k(a_{1}^{tn-t+1})=0\) and none \(k\)-product of the terms \((t-1)n-t+2\) of \(a_{i}^{\prime}\)s is in \(\delta(Q)\). **Theorem 2.23**.: _Assume that \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q_{1}^{s})\subseteq Q\) for some \(a_{1}^{tn-t+1}\in G\) and some hyperideals \(Q_{1},\cdots Q_{s}\) of \(G\) such that \(1\leq i_{1},\cdots,i_{s}\leq tn-t+1\) and \(1\leq s\leq(t-1)n-t+2\). If \((a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},q_{1}^{s})\) is not a \(\delta\)-\((t,n)\)-zero of \(Q\) for all \(q_{i}\in Q_{i}\), then \(k\)-product of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q_{1}^{s}\) including at least one of the \(Q_{i}\)s is in \(\delta(Q)\)._ Proof.: We prove it with induction on \(s\). Let us consider \(s=1\). In this case we show that \(k\)-product of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,a_{tn-t+1},Q_{1}\) including \(Q_{1}\) is in \(\delta(Q)\). Assume that all products of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,a_{tn-t+1},Q_{1}\) are not in \(\delta(Q)\). We consider \(k(a_{2}^{(t-1)n-t+2},Q_{1})\notin\delta(Q)\). 
Since \((a_{2}^{tn-t+1},q_{1})\) is not a \(\delta\)-\((t,n)\)-zero of \(Q\) for all \(q_{1}\in Q_{1}\), then we conclude that \(k\)-product of the \((t-1)n-t+2\) of \(a_{i}\)s with \(q_{1}\) is in \(\delta(Q)\). By a similar argument given in the proof of Theorem 2.18, we have \(k(a_{3}^{tn-t+1},h(a_{1},q_{1},0^{(m-2)}))=h(k(a_{3}^{tn-t+1},a_{1}),k(a_{3}^ {tn-t+1},q_{1}),0^{(m-2)})\subseteq\delta(Q)\) which implies \(k(a_{3}^{tn-t+1},a_{1})\in h(-k(a_{3}^{tn-t+1},q_{1}),0^{(m-1)})\subseteq \delta(Q)\), a contradiction. This implies that \(k\)-product of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,a_{tn-t+1},Q_{1}\) including \(Q_{1}\) is in \(\delta(Q)\). Now, we suppose that the claim holds for all positive integers which are less than \(s\). Let \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q_{1}^{s})\subseteq Q\) but all products of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q_{1}^{s}\) including at least one of the \(Q_{i}\)s are not in \(\delta(Q)\). We may assume that \(k(a_{s+1}^{tn-t+1},Q_{1}^{s})\notin\delta(Q)\). Note that \((a_{s+1}^{tn-t+1},q_{1}^{s})\) is not a \(\delta\)-\((t,n)\)-zero of \(Q\) for all \(q_{1}^{s}\in Q\). We get \(k(a_{s+1}^{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\cdots,h(a_{s},q_{s},0^{(m-2)})) \subseteq\delta(Q)\) by induction hypothesis and Theorem 2.19. Then we conclude that \[\begin{array}{c}k(a_{s+1}^{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\cdots,h(a_{1}, \hat{q}_{1},0^{(m-2)})_{i_{1}},\cdots,h(a_{2},\hat{q}_{2},0^{(m-2)})_{i_{2}}, \\ \cdots,h(a_{n-1},\hat{q}_{n-1},0^{(m-2)})_{i_{n-1}},\cdots,h(a_{s},q_{s},0^{(m -2)}))\subseteq\delta(Q)\end{array}\] or \[\begin{array}{c}k(a_{s+1},\cdots,\hat{a}_{i_{s+1}},\cdots,\hat{a}_{i_{s+2}}, \cdots,\hat{a}_{i_{s+n-1}},\cdots,a_{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\\ \cdots,h(a_{s},q_{s},0^{(m-2)}))\subseteq\delta(Q)\end{array}\] for some \(i\in\{1,\cdots,s\}\). This implies that \(k(a_{s+1},\cdots,a_{tn-t+1},\cdots,a_{n},\cdots,a_{s})\in\delta(Q)\) or \(k(a_{s+n},\cdots,a_{tn-t+1},\cdots,a_{s}^{s})\in\delta(Q)\), a contradiction. Then we conclude that \(k(a_{s+1},\cdots,a_{tn-t+1},\cdots,a_{n-t+1},Q_{1}^{s})\subseteq\delta(Q)\). that \(k\)-product of \((t-1)n-t+2\) of \(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}},\cdots, a_{tn-t+1},Q_{1}^{s}\) including at least one of the \(Q_{i}\)s is in \(\delta(Q)\). **Definition 2.24**.: Suppose that \(Q_{1}^{n},Q\) be some proper hyperideals of \(G\) such that \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(k(Q_{1}^{tn-t+1})\subseteq Q\). \(Q\) is called a free \(\delta\)-\((t,n)\)-zero with respect to \(k(Q_{1}^{tn-t+1})\) if \((q_{1},\cdots,q_{tn-t+1})\) is not a \(\delta\)-\((t,n)\)-zero of \(Q\) for every \(q_{i}\in Q_{i}\) with \(1\leq i\leq tn-t+1\). **Theorem 2.25**.: _Assume that \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \(0\neq k(Q_{1}^{tn-t+1})\subseteq Q\) for some hyperideals \(Q_{1}^{tn-t+1}\) of \(G\). If \(Q\) is a free \(\delta\)-\((t,n)\)-zero with respect to \(k(Q_{1}^{tn-t+1})\), then \(k\)-product of \((t-1)n-t+2\) of the \(Q_{i}\) is a subset of \(\delta(Q)\)._ Proof.: This can be proved by Theorem 2.23, in a very similar manner to the way in which Theorem 2.21 was proved. 
**Theorem 2.26**.: _Let \(G\) be a commutative Krasner \((m,2)\)-hyperring and \(Q\) be a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). If \((x,y,z)\) is an \(\delta\)-\((2,2)\)-zero of \(Q\) for some \(x,y,z\in G\), then_ * \(k(x,y,Q)=k(y,z,Q)=k(x,z,Q)=0\)__ * \(k(x,Q^{(2)})=k(y,Q^{(2)})=k(z,Q^{(2)})=0\)__ Proof.: (i) Let \(Q\) be a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \((x,y,z)\) be an \(\delta\)-\((2,2)\)-zero of \(Q\). Let us assume that \(k(x,y,Q)\neq 0\). This means that \(k(x,y,q)\neq 0\) for some \(q\in Q\). So we have \(0\neq k(x,h(z,q,0^{(m-2)}),y)=h(k(x,z,y),k(x,q,y),0^{(m-2)})\subseteq Q\). Since \(Q\) is weakly \((2,2)\)-absorbing \(\delta\)-semiprimary and \(k(x,y)\notin\delta(Q)\), we get \(k(x,h(z,q,0^{(m-2)}))=h(k(x,z),k(x,q),0^{(m-2)})\subseteq\delta(Q)\) or \(k(h(z,q,0^{(m-2)}),y)=h(k(z,y),k(q,y),0^{(m-2)})\subseteq\delta(Q)\). In the first case, we have \(k(x,z)\in h(-k(x,q),0^{(m-1)})\subseteq\delta(Q)\) which is a contradiction. The second case leads to a contradiction because \(k(z,y)\in h(-k(q,y),0^{(m-1)})\subseteq\delta(Q)\). Thus \(k(x,y,Q)=0\). Similiar for the other cases. (ii) Let \(k(x,Q^{(2)})\neq 0\). This implies that \(k(x,q_{1}^{2})\neq 0\) for some \(q_{1},q_{2}\in Q\). Therefore \[\begin{array}{rl}0&\neq k(x,h(y,q_{1},0^{(m-2)}),h(z,q_{2},0^{(m-2)}))\\ &=h(k(x,y,z),k(x,y,q_{2}),k(x,q_{1},z),k(x,q_{1}^{2}),0^{(m-4)})\\ &\subseteq Q.\end{array}\] Since \(Q\) is a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), we obtain the following cases: Case 1. \(k(x,h(y,q_{1},0^{(m-2)}))\subseteq\delta(Q)\) which implies \(h(k(x,y),k(x,q_{1}),0^{(m-2)})\subseteq\delta(Q)\). Then we have \(k(x,y)\in h(-k(x,q_{1}),0^{(m-1)})\subseteq\delta(Q)\), a contradiction. Case 2. \(k(x,h(z,q_{2},0^{(m-2)}))\subseteq\delta(Q)\) which means \(h(k(x,z),k(x,q_{2}),0^{(m-2)})\subseteq\delta(Q)\). This follows that \(k(x,z)\in h(-k(x,q_{2}),0^{(m-1)})\subseteq\delta(Q)\), a contradiction. Case 3. \(k(h(y,q_{1},0^{(m-2)}),h(z,q_{2},0^{(m-2)}))\subseteq\delta(Q)\) and so \(h(k(y,z),k(q_{1},z),k(y,q_{2}),k(q_{1}^{2}),0^{(m-4)})\subseteq\delta(Q)\). This implies that \(k(y,z)\in h(-k(q_{1},z),-k(y,q_{2}),-k(q_{1}^{2}),0^{(m-2)})\subseteq\delta(Q)\) which is a contradiction. Therefore \(k(x,Q^{(2)})=0\). Similiar for the other cases. **Theorem 2.27**.: _Let \(G\) be a commutative Krasner \((m,2)\)-hyerring and \(Q\) be a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) but is not \((2,2)\)-absorbing \(\delta\)-semiprimary. Then \(k(Q^{(3)})=0\)._ Proof.: Let \(Q\) be a weakly \((2,2)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) but is not \((2,2)\)-absorbing \(\delta\)-semiprimary. This implies that we have an \(\delta\)-\((2,2)\)-zero of \(Q\) for some \(x,y,z\in G\). Let us assume that \(k(Q^{(3)})\neq 0\). Then \(k(q_{1}^{3})\neq 0\) for some \(q_{1}^{3}\in Q\). Therefore we have \(k(h(x,q_{1},0^{(m-2)}),h(y,q_{2},0^{(m-2)}),h(z,q_{3},0^{(m-2)}))\) \(=h(h(h(k(x,y,z),k(q_{1},y,z),0^{(m-2)}),h(k(x,y,q_{3}),k(q_{1},y,q_{3}),0^{(m-2 )})),\) \(h(h(k(q_{2},x,z),h(q_{1}^{2},z),0^{(m-2)})),h(h(k(q_{1}^{3}),k(x,q_{2}^{3})),0)\). From \(k(q_{1}^{3})\neq 0\), it follows that \(0\neq k(h(x,q_{1},0^{(m-2)}),h(y,q_{2},0^{(m-2)}),h(z,q_{3},0^{(m-2)}))\subseteq Q\) by Theorem 2.26. 
Since \(Q\) is weakly \((2,2)\)-absorbing \(\delta\)-semiprimary, we have \(k(h(x,q_{1},0^{(m-2)}),h(y,q_{2},0^{(m-2)}))\subseteq\delta(Q)\) or \(k(h(x,q_{1},0^{(m-2)}),h(z,q_{3},0^{(m-2)}))\subseteq\delta(Q)\). In the first possibilty, we obtain \(h(k(x,y),k(x,q_{2}),k(q_{1},y),k(q_{1}^{2}),0^{(m-4)})\subseteq\delta(Q)\) which means \(k(x,y)\in h(-k(x,q_{2}),-k(q_{1},y),-k(q_{1}^{2}),0^{(m-3)})\subseteq\delta(Q)\) which is a contradiction. Moreover, the other possibilities lead to a contradiction. Thus \(k(Q^{(3)})=0\). **Definition 2.28**.: Assume that \(Q\) is a weakly \((k,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). Then \((a_{1},\cdots,a_{tn-t+1})\) is called \(\delta\)-\((t,n)\)-zero of \(Q\) if \(k(a_{1}^{tn-t+1})=0\) and none \(k\)-product of the terms \((t-1)n-t+2\) of \(a_{i}\)s is in \(\delta(Q)\). **Theorem 2.29**.: _If \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) and \((a_{1},\cdots,a_{tn-t+1})\) is a \(\delta\)-\((t,n)\)-zero of \(Q\), then for \(1\leq i_{1},\cdots,i_{s}\leq tn-t+1\) and \(1\leq s\leq(t-1)n-t+2\),_ \[k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q^{(s)})=0.\] Proof.: We use the induction on \(s\). Assume that \(s=1\). Let us suppose that \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,a_{tn-t+1},Q)\neq 0\). We may assume that \(k(a_{2}^{tn-t+1},Q)\neq 0\). Therefore \(k(a_{2}^{tn-t+1},q)\neq 0\) for some \(q\in Q\). Hence every \(k\)-product of the \((t-1)n-t+2\) of \(a_{i}\)s including \(q\) is in \(\delta(Q)\). By the same argument given in Theorem 2.26, we have \(k(a_{3}^{tn-t+1},h(a_{1},q,0^{(m-2)}))=h(k(a_{3}^{tn-t+1},a_{1}),k(a_{3}^{tn-t +1},q),0^{(m-2)})\subseteq\delta(Q)\) which implies \(k(a_{3}^{tn-t+1},a_{1})\in h(-k(a_{3}^{tn-t+1},q),0^{(m-1)})\subseteq\delta(Q)\), a contradiction. This means that \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,a_{tn-t+1},Q)=0\). Now, let us suppose that \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q^{(s)})\neq 0\). We may assume that \(k(a_{s+1}^{tn-t+1},Q^{(s)})\neq 0\). Hence \(0\neq k(a_{s+1}^{tn-t+1},q_{1}^{s})\in Q\) for some \(q_{1}^{s}\in Q\). It follows that \(0\neq k(a_{s+1}^{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\cdots,h(a_{s},q_{s},0^{(m-2 )}))\subseteq Q\) by Theorem 2.26 and induction hypothesis. Then we conclude that \(k(a_{s+1}^{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\cdots,h(a_{1},\hat{q}_{1},0^{(m-2 )})_{i_{1}},\cdots,h(a_{2},\hat{q}_{2},0^{(m-2)})_{i_{2}}\), \(\cdots,h(a_{n-1},\hat{q}_{n-1},0^{(m-2)})_{i_{n-1}},\cdots,h(a_{s},q_{s},0^{(m -2)}))\subseteq\delta(Q)\) or \(k(a_{s+1},\cdots,\hat{a}_{i_{s+1}},\cdots,\hat{a}_{i_{s+2}},\cdots,\hat{a}_{i_{ s+n-1}},\cdots,a_{tn-t+1},h(a_{1},q_{1},0^{(m-2)}),\) \(\cdots,h(a_{s},q_{s},0^{(m-2)}))\subseteq\delta(Q)\) for some \(i\in\{1,\cdots,s\}\). This implies that \(k(a_{s+1},\cdots,a_{tn-t+1},\cdots,a_{n},\cdots,a_{s})\in\delta(Q)\) or \(k(a_{s+n},\cdots,a_{tn-t+1},\cdots,a_{1}^{s})\in\delta(Q)\), a contradiction. Thus we conclude that \(k(a_{1},\cdots,\hat{a}_{i_{1}},\cdots,\hat{a}_{i_{2}},\cdots,\hat{a}_{i_{s}}, \cdots,a_{tn-t+1},Q^{(s)})=0\) **Theorem 2.30**.: _Let \(Q\) be a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) but is not \((t,n)\)-absorbing \(\delta\)-semiprimary. Then \(k(Q^{(tn-t+1)})=0\)._ Proof.: Assume that \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) but is not \((t,n)\)-absorbing \(\delta\)-semiprimary. 
Then there exists a \(\delta\)-\((t,n)\)-zero \((a_{1},\cdots,a_{tn-t+1})\) of \(Q\). Now, the claim follows by using Theorem 2.29, in a very similar manner to the way in which Theorem 2.27 was proved. As an instant consequence of the previous theorem, we have the following explicit results. **Corollary 2.31**.: Let \(Q\) be a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) but is not \((t,n)\)-absorbing \(\delta\)-semiprimary. Then \(Q\subseteq rad(0)\). **Corollary 2.32**.: Assume that the commutative Krasner \((m,n)\)-hyperring \(G\) has no non-zero nilpotent elements. If \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), then \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). The next theorem provides us how to determine weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal to be \((t,n)\)-absorbing \(\delta\)-semiprimary. **Theorem 2.33**.: _Let \(Q\) be a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) such that \(\delta(Q)=\delta(0)\). Then \(Q\) is not \((t,n)\)-absorbing \(\delta\)-semiprimary if and only if there exists a \(\delta\)-\((t,n)\)-zero of \(0\)._ Proof.: \(\Longrightarrow\) Assume that \(Q\) is not an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). This implies that \(k(a_{1}^{tn-t+1})=0\) and none \(k\)-product of the terms \((t-1)n-t+2\) of \(a_{i}^{\cdot}\)s is in \(\delta(Q)\) for some \(a_{1}^{tn-t+1}\in G\). From \(\delta(Q)=\delta(0)\), it follows that \((a_{1}^{tn-t+1})\) is a \(\delta\)-\((t,n)\)-zero of \(0\). \(\Longleftarrow\) Straightforward Let \((G_{1},h_{1},k_{1})\) and \((G_{2},h_{2},k_{2})\) be two commutative Krasner \((m,n)\)-hyperrings. Recall from [14] that a mapping \(f:G_{1}\longrightarrow G_{2}\) is called a homomorphism if we have \(f(h_{1}(a_{1}^{m}))=h_{2}(f(a_{1}),\cdots,f(a_{m}))\) and \(f(k_{1}(b_{1}^{n}))=k_{2}(f(b_{1}),...,f(b_{n}))\) for all \(a_{1}^{m}\in G_{1}\) and \(b_{1}^{n}\in G_{1}\). Let \(\delta\) and \(\delta^{\prime}\) be hyperideal expansions of \(G_{1}\) and \(G_{2}\), respectively. Recall from [2] that \(f:G_{1}\longrightarrow G_{2}\) is called a \(\delta\delta^{\prime}\)-homomorphism if \(\delta(f^{-1}(Q_{2}))=f^{-1}(\delta^{\prime}(Q_{2}))\) for hyperideal \(Q_{2}\) of \(G_{2}\). Note that \(\delta^{\prime}(h(Q_{1})=h(\delta(Q_{1})\) for \(\delta\delta^{\prime}\)-epimorphism \(f\) and for hyperideal \(Q_{1}\) of \(G_{1}\) with \(Ker(f)\subseteq Q_{1}\). **Theorem 2.34**.: _Let \((G_{1},h_{1},k_{1})\) and \((G_{2},h_{2},k_{2})\) be two Krasner \((m,n)\)-hyperrings and \(f:G_{1}\longrightarrow G_{2}\) be a \(\delta\delta^{\prime}\)-homomorphism. Then the followings hold:_ * _If_ \(Q_{2}\) _is an_ \((t,n)\)_-absorbing_ \(\delta^{\prime}\)_-semiprimary hyperideal of_ \(G_{2}\)_, then_ \(f^{-1}(Q_{2})\) _is an_ \((t,n)\)_-absorbing_ \(\delta\)_-semiprimary hyperideal of_ \(G_{1}\)_._ * _If_ \(Q_{2}\) _is a weakly_ \((t,n)\)_-absorbing_ \(\delta^{\prime}\)_-semiprimary hyperideal of_ \(G_{2}\) _and_ \(Kerf\) _is a weakly_ \((t,n)\)_-absorbing_ \(\delta\)_-semiprimary hyperideal of_ \(G_{1}\)_, then_ \(f^{-1}(Q_{2})\) _is a weakly_ \((t,n)\)_-absorbing_ \(\delta\)_-semiprimary hyperideal of_ \(G_{1}\)_._ * _Let_ \(f\) _be an epimorphism and_ \(Q_{1}\) _be a proper hyperideal of_ \(G_{1}\) _containing_ \(Kerf\)_. 
If_ \(Q_{1}\) _is a (weakly)_ \((t,n)\)_-absorbing_ \(\delta\)_-semiprimary hyperideal of_ \(G_{1}\)_, then_ \(f(Q_{1})\) _is a (weakly)_ \(\delta^{\prime}\)_-semiprimary hyperideal of_ \(G_{2}\)_._ Proof.: (i) Let \(k_{1}(a_{1}^{tn-t+1})\in f^{-1}(Q_{2})\) for \(a_{1}^{kn-k+1}\in G_{1}\). Then we get \(f(k_{1}(a_{1}^{tn-t+1}))=k_{2}(f(a_{1}),...,f(a_{tn-t+1}))\in Q_{2}\). Since \(Q_{2}\) is an \((t,n)\)-absorbing \(\delta^{\prime}\)-semiprimary hyperideal of \(G_{2}\), then there exist \((t-1)n-t+2\) of \(f(a_{i})\)'s whose \(k_{2}\)-product is an element in \(\delta^{\prime}(Q_{2})\). It follows that the image \(f\) of \((t-1)n-t+2\) of \(a_{i}^{\cdot}\) whose \(k_{2}\)-product is in \(\delta^{\prime}(Q_{2})\) which means there exist \((t-1)n-t+2\) of \(a_{i}^{\cdot}\) whose \(k_{1}\)-product is in \(f^{-1}(\delta^{\prime}(Q_{2}))=\delta(f^{-1}(Q_{2}))\). Thus \(f^{-1}(Q_{2})\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\). \((ii)\) Assume that \(k_{1}(a_{1}^{tn-t+1})\in f^{-1}(Q_{2})\) for \(a_{1}^{kn-k+1}\in G_{1}\). Therefore \(f(k_{1}(a_{1}^{tn-t+1}))=k_{2}(f(a_{1}),...,f(a_{tn-t+1}))\in Q_{2}\). If \(0\neq f(k_{1}(a_{1}^{tn-t+1}))\), then it can be proved by using an argument similar to that in the proof of the part (i). Let us assume that \(f(k_{1}(a_{1}^{tn-t+1}))=0\). Then we obtain \(k_{1}(a_{1}^{tn-t+1})\in Kerf\). Since \(Kerf\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\), then there exist \((t-1)n-t+2\) of \(a_{i}\)s whose \(k_{1}\)-product is an element in \(\delta(Kerf)\). From \(\delta(Kerf)=\delta(f^{-1}(0))\subseteq\delta(f^{-1}(Q_{2}))\), it follows that \(f^{-1}(Q_{2})\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\). \((iii)\) Let \((0\neq k_{2}(b_{1}^{tn-t+1})\in f(Q_{1}))\)\(k_{2}(b_{1}^{tn-t+1})\in f(Q_{1})\) for some \(b_{1}^{tn-t+1}\in G_{2}\). Since \(f\) be an epimorphism, then there exist \(a_{i}\in G_{1}\) for each \(1\leq i\leq tn-t+1\) such that \(f(a_{i})=b_{i}\). Hence \(k_{2}(b_{1}^{tn-t+1})=k_{2}(f(a_{1}),\cdots,f(a_{tn-t+1}))=f(k_{1}(a_{1}^{tn-t +1}))\in f(Q_{1})\). Since \(Q_{1}\) containing \(Kerf\), we conclude that \((0\neq k_{1}(a_{1}^{tn-t+1})\in Q_{1})\)\(k_{1}(a_{1}^{tn-t+1})\in Q_{1}\). As \(Q_{1}\) is a (weakly) \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\), then there exist \((t-1)n-t+2\) of \(a_{i}^{\cdot}\)s whose \(k_{1}\)-product is in \(\delta(Q_{1})\). Now, since \(f\) is a homomorphism and \(f(\delta(Q_{1}))=\delta^{\prime}(f(Q_{1}))\), the proof is completed. Let \(P\) be a hyperideal of \((G,h,k)\). Then the set \(G/P=\{h(g_{1}^{i-1},P,g_{i+1}^{m})\mid g_{1}^{i-1},g_{i+1}^{m}\in G\}\) with \(h\) and \(k\) which are defined by \[h(h(g_{11}^{1(i-1)},P,g_{1(i+1)}^{1m}),...,h(g_{m1}^{m(i-1)},P,g_{m(i+1)}^{mm}))\] \[=h(h(g_{11}^{m1}),...,h(g_{1(i-1)}^{m(i-1)}),P,h(g_{1(i+1)}^{m(i+1)}),...,h(g_{ 1m}^{mm}))\] and \[k(h(g_{11}^{1(i-1)},P,g_{1(i+1)}^{1m}),...,h(g_{n1}^{n(i-1)},P,g_{n(i+1)}^{nm}))\] \[=h(k(g_{11}^{n1}),...,k(g_{1(i-1)}^{n(i-1)}),P,k(g_{1(i+1)}^{n(i+1)}),...,k(g_{ 1m}^{nm}))\] for all \(g_{11}^{1m},...,g_{m1}^{mm}\in G\) and \(g_{11}^{1m},...,g_{n1}^{nm}\in G\), construct a Krasner \((m,n)\)-hyperring [1]. **Theorem 2.35**.: _Let \(P\) and \(Q\) be two proper hyperideals of \(G\) with \(P\subseteq Q\). 
If \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\), then \(Q/P\) is an \((t,n)\)-absorbing \(\delta_{q}\)-semiprimary hyperideal of \(G/P\)._ Proof.: By considering the natural homomorphism \(\pi:G\longrightarrow G/P\), defined by \(\pi(a)=f(a,P,0^{(m-2)})\) and using Theorem 2.34, we are done. **Theorem 2.36**.: _Let \(Q\) be an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). If \(G^{\prime}\) is a subhyperring of \(G\) such that \(G^{\prime}\nsubseteq Q\), then \(Q\cap G^{\prime}\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G^{\prime}\)._ Proof.: It follows by Theorem 2.34. **Theorem 2.37**.: _Let \(\delta_{1}\) and \(\delta_{2}\) be two hyperideal expansions of Krasner \((m,n)\)-hyperrings \(G_{1}\) and \(G_{2}\), respectively, such that \(\delta(Q_{1}\times Q_{2})=\delta_{1}(Q_{1})\times\delta_{2}(Q_{2})\) for hyperideals \(Q_{1}\) and \(Q_{2}\) of \(G_{1}\) and \(G_{2}\), respectively. If \(Q=Q_{1}\times G_{2}\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\times G_{2}\), then it is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\times G_{2}\)._ Proof.: Assume that \(Q_{1}\times G_{2}\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\times G_{2}\). Since \(k(Q^{(tn-t+1)})\neq 0\), we conclude that \(Q=Q_{1}\times G_{2}\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{1}\times G_{2}\) by Theorem 2.30. We say that \(\delta\) has \((\mathfrak{P})\) property if it satisfies the condition: \(\delta(Q)=G\) if and only if \(Q=G\) for all hyperideals \(Q\) of \(G\). **Theorem 2.38**.: _Let \(\delta_{1},\cdots,\delta_{tn-t+1}\) be hyperideal expansions of Krasner \((m,n)\)-hyperrings \(G_{1},\cdots,G_{tn-t+1}\) such that each \(\delta_{i}\) has \((\mathfrak{P})\) property and \(\delta(Q_{1}\times\cdots\times Q_{tn-t+1})=\delta_{1}(Q_{1})\times\cdots\times \delta_{tn-t+1}(Q_{tn-t+1})\) for hyperideals \(Q_{1},\cdots,Q_{tn-t+1}\) of \(G_{1},\cdots,G_{tn-t+1}\), respectively. If \(Q=Q_{1}\times\cdots\times Q_{tn-t+1}\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\), then \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\)._ Proof.: Let \(Q\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\). Let us consider the following elements of \(G\): \(x_{i}=(1_{G_{1}}\cdots,1_{G_{i-1}},a_{i},1_{G_{i+1}},\cdots,1_{G_{tn-t+1}})\) for all \(1\leq i\leq tn-t+1\). Then we have \(0\neq k(x_{1}^{tn-t+1})\in Q\). Since \(Q=Q_{1}\times\cdots\times Q_{tn-t+1}\) is a weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\), then there exists \((t-1)n-t+2\) of the \(x_{i}\)s whose \(k\)-product is in \(\delta(Q)=\delta_{1}(Q_{1})\times\cdots\times\delta_{tn-t+1}(Q_{tn-t+1})\). This implies that there exists some \(1\leq j\leq tn-t+1\) such that \(1_{G_{j}}\in\delta_{j}(Q_{j})\) which means \(\delta_{j}(Q_{j})=G_{j}\). Since \(\delta_{j}\) has \((\mathfrak{P})\) property, then \(Q_{j}=G_{j}\). Hence we conclude that \(k(Q^{(tn-t+1)})\neq 0\) which implies \(Q\) is an \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) by Theorem 2.30. 
**Theorem 2.39**.: _Let \(\delta_{1},\cdots,\delta_{tn-t+1}\) be hyperideal expansions of Krasner \((m,n)\)-hyperrings \(G_{1},\cdots,G_{tn-t+1}\) such that each \(\delta_{i}\) has \((\mathfrak{P})\) property and \(\delta(Q_{1}\times\cdots\times Q_{tn-t+1})=\delta_{1}(Q_{1})\times\cdots\times \delta_{tn-t+1}(Q_{tn-t+1})\) for hyperideals \(Q_{1},\cdots,Q_{tn-t+1}\) of \(G_{1},\cdots,G_{tn-t+1}\), respectively. If \(Q=Q_{1}\times\cdots\times Q_{tn-t+1}\) is a weakly \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\), then either there exists \(1\leq u\leq tn-t+1\) such that \(Q_{u}\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G_{u}\) and \(Q_{i}=G_{i}\) for each \(1\leq i\leq tn-t+1\) and \(i\neq u\) or \(Q_{u}\) and \(Q_{v}\) are \((t,n)\)-absorbing \(\delta_{u,v}\)-semiprimary hyperideals of \(G_{u}\) and \(G_{v}\), respectively, for some \(u,v\in\{1,\cdots,tn-t+1\}\) and \(Q_{i}=G_{i}\) for all \(1\leq i\leq tn-t+1\) but \(i\neq u,v\)._ Proof.: Let \(Q=Q_{1}\times\cdots\times Q_{tn-t+1}\) be a weakly \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G=G_{1}\times...\times G_{tn-t+1}\). Therefore we conclude that \(Q\) is an \((t+1,n)\)-absorbing \(\delta\)-semiprimary hyperideal of \(G\) by Theorem 2.38. Now, by using Theorem 2.9, we are done. ## 3. conclusion In this paper, our purpose was to study the structure of \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals which is more general than \(\delta\)-primary hyperideals. Additionally, we generalized the notion to weakly \((t,n)\)-absorbing \(\delta\)-semiprimary hyperideals. We gave many special results illustrating the structures. Indeed, this paper makes a major contribution to classify hyperideals in Krasner \((m,n)\)-hyperrings. **Conflicts of Interest** The authors declare that they have no conflicts of interest.
2301.10829
TranSOP: Transformer-based Multimodal Classification for Stroke Treatment Outcome Prediction
Acute ischaemic stroke, caused by an interruption in blood flow to brain tissue, is a leading cause of disability and mortality worldwide. The selection of patients for the most optimal ischaemic stroke treatment is a crucial step for a successful outcome, as the effect of treatment highly depends on the time to treatment. We propose a transformer-based multimodal network (TranSOP) for a classification approach that employs clinical metadata and imaging information, acquired on hospital admission, to predict the functional outcome of stroke treatment based on the modified Rankin Scale (mRS). This includes a fusion module to efficiently combine 3D non-contrast computed tomography (NCCT) features and clinical information. In comparative experiments using unimodal and multimodal data on the MRCLEAN dataset, we achieve a state-of-the-art AUC score of 0.85.
Zeynel A. Samak, Philip Clatworthy, Majid Mirmehdi
2023-01-25T21:05:10Z
http://arxiv.org/abs/2301.10829v1
# Transop: Transformer-based Multimodal Classification for Stroke Treatment Outcome Prediction ###### Abstract Acute ischaemic stroke, caused by an interruption in blood flow to brain tissue, is a leading cause of disability and mortality worldwide. The selection of patients for the most optimal ischaemic stroke treatment is a crucial step for a successful outcome, as the effect of treatment highly depends on the time to treatment. We propose a transformer-based multimodal network (TranSOP) for a classification approach that employs clinical metadata and imaging information, acquired on hospital admission, to predict the functional outcome of stroke treatment based on the modified Rankin Scale (mRS). This includes a fusion module to efficiently combine 3D non-contrast computed tomography (NCCT) features and clinical information. In comparative experiments using unimodal and multimodal data on the MRCLEAN dataset, we achieve a state-of-the-art AUC score of 0.85. Zeynel A. Samak\({}^{1}\) Philip Clatworthy\({}^{2,3}\) Majid Mirmehdi\({}^{1}\)\({}^{1}\) Department of Computer Science, University of Bristol, Bristol, UK \({}^{2}\) Translational Health Sciences, University of Bristol, Bristol, UK \({}^{3}\) Stroke Neurology, Southmead Hospital, North Bristol NHS Trust, Bristol, UK Transformer, Multimodal, Stroke, Ischaemic, NCCT, Outcome. ## 1 Introduction Acute ischaemic stroke is the most common type of stroke and a leading cause of disability and mortality worldwide [1]. It is a condition caused by the formation of clips, following interruption of blood flow to the brain. If the blockage is not resolved, the extent of dead tissue increases and the irreversible ischaemic core expands over time. As Saver [2] stated, "Time is brain" for stroke diagnosis and treatment, and it is essential to carry out the appropriate treatment in a timely manner. Although thrombectomy is the most effective treatment for ischaemic stroke cases, there is a risk of brain haemorrhage and death. Therefore, determining if a patient just admitted can benefit from mechanical thrombectomy leading to a good functional outcome, is an important step towards reducing risk and improving the quality of life for stroke patients. Methods for automatic outcome prediction of stroke treatment have been proposed using logistic regression [3, 4], random forests [5, 6], support vector machines [4, 7], and recently, convolutional neural networks (CNNs) [8, 9, 10]. Some use clinical records [3, 5, 4], imaging information [8, 7, 6], or a combination of both [9, 11, 12]. The CNN-based models have been applied to various imaging modalities, _e.g._ magnetic resonance imaging (MRI), NCCT and CT angiography (CTA). While such deep learning models perform well in medical image analysis, 3D CNN models that exploit 3D brain volumes require numerous parameters and computational resources. Furthermore, they cannot learn long-range relationships due to their limited receptive field. In contrast, more recently, transformers have achieved outstanding results in various applications thanks to their big data and model size scalability and better longer-range attention-based modelling capability [13, 14]. However, pure transformer-based methods have not been widely applied in medical image classification due to their limited performance on small datasets [15]. In this paper, we introduce TranSOP, a transformer-based multimodal architecture to predict functional outcomes of ischaemic stroke patients 90 days after treatment (see Fig 1). 
We combine clinical metadata (_e.g._ gender, age, hypertension, glucose level) and 3D NCCT obtained at the point of hospital admission for 500 ischaemic stroke patients. We also explore different strategies for this multimodal fusion and conduct extensive experiments on various architectures, including ViT, ViT with CNN, pre-trained ViT (from DeiT [16]) and Swin transformer (SwinT) [17] in our TranSOP model. ## 2 Related Works There are only a few studies that have employed CNN-based Figure 1: TranSOP predicts functional outcome of ischaemic stroke treatment leveraging only the baseline NCCT scan and clinical records available on hospital admission. multimodal networks to predict the functional outcome of stroke treatments, _e.g._ for thrombolysis [9] and for thrombectomy [11, 10]. Bacchi et al. [9] applied a CNN model to 3D NCCT images and clinical records of patients who underwent thrombolysis treatment. Samak et al. [11] also proposed a multimodal CNN architecture with channel-wise and spatial attentional blocks to predict dichotomised mRS scores from baseline 3D NCCT scans and clinical records of MR CLEAN [18] dataset. Further, in [10], Samak et al. additionally incorporate 1-week follow-up scans during their model training to encode stroke changes over time for better mRS score prediction. Transformers have shown significant success in natural language processing, _e.g._ machine translation [19], and computer vision, _e.g._ medical imaging tasks [20, 21]. They facilitate a mechanism of self-attention that can model the long-range dependency of sequences and focus on important features. Dosovitskiy et al. [13] proposed the first pure vision transformer (ViT), applied directly to sequences of image patches for image classification. ViTs have obtained comparable and even better results in some tasks than CNNs, _e.g._ for object detection [22, 23]. Since its introduction ViT has been deployed in medical image segmentation using different imaging modalities. UNETR [24] adapts the commonly deployed and successful U-Net architecture [25], by replacing its convolutional encoder with a transformer encoder and modifying its convolutional decoder based on the output of the transformer encoder for image segmentation. Similarly, other studies [26, 21, 27] also replace the convolutional encoder with a transformer encoder, while some integrate the transformer encoder into the bottleneck of a U-Net-like model [28, 29, 30, 31] or use hybrid blocks that combine the convolutional and transformer layers [32, 33]. Such works have been applied to NCCT [28], MRI [29, 32, 21, 33] and microscope [30] images. In another recent work, Amador et al. [34] propose a hybrid model that performs segmentation of the final lesion outcome of ischaemic stroke from baseline spatio-temporal CT perfusion (CTP) images using a transformer encoder embedded in the U-Net bottleneck. Although most transformer-based models in medical image analysis are in the _segmentation_ domain, there are some studies that have employed them on medical image _classification_, _e.g._ for COVID-19 [35, 36], retinal disease [37], cell analysis [38, 39], brain tumour [20, 40], Alzheimer's disease [41, 15] classification and age estimation [40, 42]. 
These methods are based on a pure transformer [35, 43, 44, 45, 46] or a hybrid model that uses ResNet [20, 41, 37], DenseNet [36] or a CNN module [38, 47, 39, 42, 15] followed by a transformer encoder on 2D imaging modalities like X-Rays [46, 48], microscope images [38, 39] and 3D MRI volumes [20, 41, 15, 40]. To the best of our knowledge, there are no studies using the transformer in 3D NCCT classification and prediction of functional stroke outcomes from unimodal or multimodal data. ## 3 Proposed Method An overview of the proposed architecture, TranSOP, is shown in Fig. 2, which includes a transformer encoder and a multimodal fusion module to predict mRS scores. Transformers can process 1D input sequences, as originally used in the NLP domain where each word is embedded in a 1D vector as a token. Similarly, we split a 3D NCCT volume, \(X_{i}^{nect}\in\mathbb{R}^{1\times D\times W\times H}\), into 1D vectors via patch embedding where \(D\), \(W\), and \(H\) are depth, width and height, and a volume is divided into non-overlapping patches of size \(P^{3}\), which generate a sequence of 1D patch vectors of length \(L=[\frac{D}{P}]\times[\frac{W}{P}]\times[\frac{H}{P}]\). We use a convolutional layer to project each patch into a \(K\) dimensional embedding space [16, 21]. We add a learnable parameter \([CLS]\in\mathbb{R}^{1\times K}\), to the patch embedding sequence to represent the entire volume for classification. In addition, a learnable positional encoding, (\(PE\in\mathbb{R}^{(L+1)\times K}\)) is added to the sequences, so that the spatial information of Figure 2: Overview of our proposed transformer-based multimodal architecture, TranSOP. PE: positional encoding, CLS: a token/vector that represents the input volume for classification, MHSA: Multi-head self-attention, MLP: multi-layer perceptron, FC: fully connected layer. the patches can be preserved (see Fig. 2). Next, a series of transformer blocks, each including a normalisation layer followed by multi-head self-attention (MHSA), a normalisation layer, and a multi-layer perceptron (MLP) head are utilised in the transformer encoder. Then, an MLP head is applied to the classification token to extract NCCT volume features \(z^{nect}\) for the fusion process. Clinical metadata features \(z^{clinic}\) are computed by a fully connected layer (FC) (orange box in Fig. 2). In the multimodal fusion module, a stack of two FCs with a dropout layer in-between prepare the input scan's \(z^{nect}\) and \(z^{clinic}\) for fusion (see right box in Fig. 2). We use two methods for the fusion of these image volume and clinical features, (i) concatenation where both features are joined to make a larger 1D vector and (ii) addition where both features are added element-wise with each feature vector multiplied by a learnable weight. Finally, another stack of FC, Dropout, and FC layers is applied to the fused features before being passed to a _Softmax_ layer for final predictions. These predictions are dichotomised mRS scores, where mRS \(\leq\) 2 indicates a good outcome and mRS \(>\) 2 expresses a bad outcome. Note that, dropout layers are deactivated during inference. ## 4 Experiments & Results **Dataset -** We used the MR CLEAN Trial dataset1, collected from a multi-centre study, which is one of the most comprehensive datasets of patients who underwent ischaemic stroke treatment. Five hundred patients (233 assigned to mechanical thrombectomy and 267 to usual care) were treated in 16 medical centres in the Netherlands. 
We refer the reader to the MR CLEAN study protocol [49, 18] for more detailed information on the dataset. Footnote 1: [https://www.mrclean-trial.org/home.html](https://www.mrclean-trial.org/home.html) Through pre-processing, some of the apparent variations due to various acquisition protocols at different clinical centres were reduced to allow our model to deal with more similar standard input. First, all scans were re-sampled to the same voxel size of 3x1x1\(mm^{3}\), followed by clipping the intensity range of 0-80HU. The skull structure was then removed in the NCCT scans and the volumes were cropped to \(32\times 192\times 128\) from the centre. Data augmentations, such as horizontal/vertical flips and Gaussian noise, were applied to increase the variation and number of input samples to help improve the robustness of the network. Finally, the voxels of the NCCT scans were normalised to zero mean and one standard deviation. **Implementation Details -** We split the dataset into three subsets, training (70%, 350 patients), validation (15%, 75 patients) and testing (15%, 75 patients). The proposed model was trained for 500 epochs using an Adam optimiser with a weight decay of 0.0001, a learning rate of 0.0003, and a batch size of 24. A cosine learning rate scheduler was used. The experiments were implemented in PyTorch and MONAI [50] on a single NVIDIA P100 16GB GPU. **Details of Experiments -** We evaluated the performance of our proposed approach against two existing methods and various transformer architectures that also operate on 3D NCCT volumes and predict the functional outcome of stroke treatment. The methods of Bacchi et al. [9] and Samak et al. [11] which both use imaging and clinical information, were re-trained on the registered MR CLEAN dataset and our data split from scratch. Although, the FeMA [10] model performs a similar task, it additionally uses 1-week follow-up scans that contain information on stroke changes after treatment during model training. Hence, in the interest of direct comparability, we do not include that work in the present evaluation. We also evaluated our TranSOP approach using different transformer architectures for its encoder part. These are referred to as TranSOP\({}_{ViT}\), TranSOP\({}_{DeiT}\), TranSOP\({}_{ConViT}\) and TranSOP\({}_{SwinT}\). TranSOP\({}_{ViT}\) uses the ViT network and is trained from scratch, TranSOP\({}_{DeiT}\) utilises the ImageNet pre-trained DeiT model to demonstrate the effect of transfer learning, and TranSOP\({}_{ConViT}\) uses the first three layers of convolutional blocks before the input is fed into the ViT model to explore the performance of a hybrid model. These three models have the same ViT network which consist of 12 layers of transformer blocks, 12 heads, a hidden MLP feature size of 768 and 3072. In TranSOP\({}_{SwinT}\), four stages each consisting of two Swin transformer blocks and \(N\) MHSA heads, where \(N=\{3,6,12,24\}\) for each stage respectively, were used. The ClinicDNN model only consumed clinical information to show the expected benefit from imaging information. Note, the multimodal fusion step is the same for all the models. We evaluated the classification performance of the models with three commonly used metrics, Accuracy, _F1-score_ and Area Under ROC Curve (AUC). Table 1 reports the evaluations of the transformer-based and convolution-based networks, along with confidence intervals, for two fusion methods. 
Broadly, the CNN-based state of the art works [9, 11] outperformed the transformer methods when only imaging information was used, for example, [9] and [11] performed best and second best in accuracy at 0.75 and 0.72 respectively. On the other hand, transformer-based methods exceeded Bacchi et al. [9] and Samak et al. [11] when clinical records were included for multimodal analysis, with the best result obtained by TranSOP\({}_{SwinT}\) at 0.85 AUC. These variations in performance by the transformer could be attributed to both the transformer's appetite for larger datasets (see [14]), and its already established superiority in handling 1D natural language data. As TranSOP\({}_{SwinT}\) achieved the best AUC score, and it is more efficient thanks to its hierarchical architecture and shifted windowing, it can be a more preferable approach. The results on the use of fusion methods (concat and addition) are inconclusive and further investigation on more efficient fusion methods is necessary. ## 5 Conclusion In this work, we investigated the performance of various networks in predicting the functional outcome of ischaemic stroke treatment based on 3D NCCT scans and clinical information, such as age, sex, and demographic data from the patient's medical history records. Transformer models outperformed convolutional architectures in multimodal settings. This suggests that transformer models, although not performing as well on only imaging data, can learn better complementary imaging information when combined with clinical metadata. In future work, we plan to investigate and explore a data-efficient transformer model for small image datasets. In addition, we would like to extend the proposed architecture to use follow-up scans, such as used in the FeMA [10] method during model training. ## 6 Acknowledgments The authors would like to thank the Principal Investigators of the MR CLEAN trial: Profs Aad van der Lugt, Diederik W.J. Dippel, Charles B.L.M. Majoie, Yvo B. W.E.M. Roos, Wim H. van Zwam and Robert J. van Oostenbrugge for providing the data. Z.A. Samak is funded by the Ministry of Education (1416/YLSY), the Republic of Turkiye.
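As a companion to the preprocessing description in Section 4, the sketch below reproduces the stated steps (resampling to 3×1×1 mm voxels, clipping to the 0–80 HU window, centre cropping to 32×192×128 and z-score normalisation) with plain NumPy/SciPy. It is an approximation for illustration only: the paper's pipeline is built on PyTorch/MONAI and additionally performs skull stripping and data augmentation, which are omitted here.

```python
# Approximate NCCT preprocessing from Section 4 (illustration only; the paper uses
# PyTorch/MONAI and also performs skull stripping and augmentation, omitted here).
import numpy as np
from scipy.ndimage import zoom


def preprocess_ncct(volume, spacing, target_spacing=(3.0, 1.0, 1.0),
                    hu_window=(0.0, 80.0), out_shape=(32, 192, 128)):
    # 1) Resample to the target voxel size (mm) with trilinear interpolation.
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    vol = zoom(volume.astype(np.float32), factors, order=1)

    # 2) Clip intensities to the 0-80 HU window.
    vol = np.clip(vol, *hu_window)

    # 3) Centre crop (zero-padding any dimension that is too small) to 32x192x128.
    out = np.zeros(out_shape, dtype=np.float32)
    src, dst = [], []
    for size, target in zip(vol.shape, out_shape):
        if size >= target:
            start = (size - target) // 2
            src.append(slice(start, start + target)); dst.append(slice(0, target))
        else:
            start = (target - size) // 2
            src.append(slice(0, size)); dst.append(slice(start, start + size))
    out[tuple(dst)] = vol[tuple(src)]

    # 4) Normalise voxels to zero mean and unit standard deviation.
    return (out - out.mean()) / (out.std() + 1e-8)


# Example on a synthetic scan with 5 mm slices and 0.5 mm in-plane resolution.
fake_scan = np.random.randint(-1000, 1000, size=(24, 512, 512)).astype(np.float32)
x = preprocess_ncct(fake_scan, spacing=(5.0, 0.5, 0.5))
print(x.shape)  # (32, 192, 128)
```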
2307.05614
Impact of Feature Encoding on Malware Classification Explainability
This paper investigates the impact of feature encoding techniques on the explainability of XAI (Explainable Artificial Intelligence) algorithms. Using a malware classification dataset, we trained an XGBoost model and compared the performance of two feature encoding methods: Label Encoding (LE) and One Hot Encoding (OHE). Our findings reveal a marginal performance loss when using OHE instead of LE. However, the more detailed explanations provided by OHE compensated for this loss. We observed that OHE enables deeper exploration of details in both global and local contexts, facilitating more comprehensive answers. Additionally, we observed that using OHE resulted in smaller explanation files and reduced analysis time for human analysts. These findings emphasize the significance of considering feature encoding techniques in XAI research and suggest potential for further exploration by incorporating additional encoding methods and innovative visualization approaches.
Elyes Manai, Mohamed Mejri, Jaouhar Fattahi
2023-07-10T23:58:45Z
http://arxiv.org/abs/2307.05614v1
# Impact of Feature Encoding on Malware Classification Explainability ###### Abstract This paper investigates the impact of feature encoding techniques on the explainability of XAI (Explainable Artificial Intelligence) algorithms. Using a malware classification dataset, we trained an XGBoost model and compared the performance of two feature encoding methods: Label Encoding (LE) and One Hot Encoding (OHE). Our findings reveal a marginal performance loss when using OHE instead of LE. However, the more detailed explanations provided by OHE compensated for this loss. We observed that OHE enables deeper exploration of details in both global and local contexts, facilitating more comprehensive answers. Additionally, we observed that using OHE resulted in smaller explanation files and reduced analysis time for human analysts. These findings emphasize the significance of considering feature encoding techniques in XAI research and suggest potential for further exploration by incorporating additional encoding methods and innovative visualization approaches. Explainability, XAI, feature encoding, malware classification, preprocessing, LE, OHE, XGBoost. ## I Introduction Machine learning has witnessed remarkable advancements in recent years, enabling the development of sophisticated models that achieve impressive performance on various tasks. As these tasks and the data they are trained on become more complex, so does the model complexity. This often causes the decision-making process to lack transparency, making it difficult to understand the reasons behind their predictions. In a society that uses AI for an ever-growing number of use cases, however, that lack of understanding can pose serious risks to the users. Averting these risks and allowing more control over what our AI is doing, thus allowing more responsible AIs, is the goal behind the Explainable Artificial Intelligence (XAI) subfield. This subdomain of AI focuses on making black-box models transparent by providing understandable explanations for their decisions. XAI also allows us to combine the powerful pattern-recognition learning capabilities of AI with human-readable explanations that humans can instinctively understand and explain. the algorithms used in XAI usually work by finding out what parts of the input and of the model weights most affect the model's predictions. The end result will be a summary of each feature's contribution to the model. How helpful are these summaries, however, which we can call the quality of the generated explanations, depends on several parameters such as the chosen algorithm, the model architecture, and the data preprocessing technique. This last parameter, however, is not as popular as the others. While most XAI research focuses on algorithms, use cases, and the quality of explanations generated, there is a lack of research on the impact of preprocessing on generated explanations. We think that the preprocessing technique has a sizable impact on the quality of generated explanations and should be more explored. More specifically, we are interested in the feature encoding step of the preprocessing pipeline. Since XAI methods summarize feature contribution, the way we encode our models will directly affect the understandability of the generated explanations. Since preprocessing directly affects model performance, considerations must be taken to not trade off too much performance for better explanations, as better explanations on an unprecise model are not useful. 
Nonetheless, we think that a minor performance loss for a major boost in explainability is worth it, as it also opens up the door for better model and data understanding, bias discovery, robustness tests, and overall higher quality assurance. This is especially important in critical industries such as Medicine, Finance, and Cyber Security. To showcase the added value of our idea in a real use case, we will apply Machine Learning and Explainability on a common problem in Cyber security: Malware Classification. It is one of the most common tasks that Machine Learning is applied to in modern antiviruses and Intrusion Detection Systems. We will train a model on a publicly available malware dataset, apply the XAI algorithm, switch the preprocessing technique and compare the generated explanations. We will show that new rules and pain points can be detected and further explored by just changing the preprocessing technique. To the best of our knowledge, no prior studies specifically addressed the subject of the direct impact of preprocessing on explanation quality in the field of XAI have been identified in the existing literature. Our comprehensive review of the literature revealed that research in XAI is more geared towards XAI algorithms [1, 2, 3, 4], the generated explanations [5, 6], alternative ways to bake explainability into the input features [7, 8] and other related problems [9, 10]. Our focus in this paper can be summarized as follows: Given that XAI algorithms use the input features as the key components for the generated explanations, it is safe to assume that the type of feature encoding used will directly affect the clarity of the explanations. The more explicit the feature, the more detailed should be the explanation we get. With that in mind, we will study two main questions in this paper: 1. Does feature encoding affect explainability? 2. If yes, what encoding yields better explainability and why? ## II Concepts ### _Feature Encoding_ Feature encoding, also known as feature transformation or feature representation, is a crucial step in data preprocessing where categorical or textual features are converted into numerical representations that can be effectively used by machine learning algorithms. This is a mandatory step as ML algorithms only deal with numerical features. The choice of encoding technique directly impacts the ML performance. Here are some common feature encoding techniques: * **One-Hot Encoding:** Each category within a categorical feature is represented by a binary feature. If a feature has n categories, it is encoded into n binary features, where only one feature is active (1) for a particular category, and the rest are inactive (0). One-hot encoding is useful when there is no inherent order or relationship among the categories. * **Label Encoding:** Label encoding assigns a unique numerical label to each category within a categorical feature. Each category is represented by a distinct integer value. Label encoding is suitable when the categories have an ordinal relationship or when using algorithms that can directly work with integer inputs. * **Ordinal Encoding:** Similar to label encoding, ordinal encoding assigns numerical labels to categories. However, ordinal encoding takes into account the order or rank of the categories and assigns values accordingly. For example, "low," "medium," and "high" could be encoded as 1, 2, and 3, respectively. * **Binary Encoding:** Binary encoding represents categories as binary bit patterns. 
Each category is assigned a unique binary code, and each bit in the code represents the presence or absence of a category. Binary encoding can be efficient for high-cardinality categorical features and reduces the dimensionality compared to one-hot encoding. * **Embedding:** Embedding techniques are commonly used for encoding textual or high-dimensional categorical features. Embeddings are dense, low-dimensional representations that capture semantic relationships between categories. Embeddings are learned using techniques like Word2Vec [11, 12] or categorical embedding layers in deep learning models [13]. ### _Explainability_ Explainability in the context of machine learning [14, 15, 16] refers to the ability to understand and interpret the decisions or predictions made by a machine learning model. It involves gaining insights into how and why a model arrives at a particular output, providing transparency and comprehensibility to the decision-making process. There are various approaches to achieving explainability: * **Model-Agnostic Approaches:** These methods aim to explain any black-box machine learning model without relying on its internal structure. They involve techniques like feature importance analysis, partial dependence plots [17], and surrogate models, which provide insights into the relationship between input features and model predictions. * **Rule-Based Approaches:** These approaches aim to generate human-readable rules that describe the decision-making process of the model. Rule-based models, such as decision trees or rule lists, can provide explicit if-then statements that explain how specific features influence predictions. * **Interpretable Model Architectures:** Some machine learning models, such as linear regression, logistic regression, or decision trees, inherently provide interpretable explanations. Their simplicity and transparency allow users to understand the impact of each feature on the final prediction. * **Local Explanations:** Local explanation methods focus on explaining individual predictions rather than the model as a whole. Techniques like LIME [2] (Local Interpretable Model-Agnostic Explanations) or SHAP [1] (SHapley Additive exPlanations) provide insights into which features contributed the most to a particular prediction. * **Visualizations:** Visualizations play a significant role in explaining complex models and high-dimensional data. Techniques like heatmaps, bar plots, scatter plots, or saliency maps help in visualizing feature importance, decision boundaries, or highlighting influential regions in the data. ### _Malware Detection_ To demonstrate our work, we will take the common task of detecting malware. Malware are malicious pieces of software that are designed to infiltrate and damage information systems without the users' consent [18, 19, 20, 21, 22]. The term malware covers a lot of categories such as viruses, ransomware, worms, trojans, backdoors, spyware, keyloggers, adhere, bots, and rootkits. Malware analysts have to discover exactly what happened to a system and make sure that the machines damaged by malicious software are isolated from the organization's network. The analysis done to single out the suspicious parts of the software can sometimes take a group of analysts and several hours or even days. Since undetected malware can have devastating consequences on any organization, malware detection has been deemed one of the most important tasks in cybersecurity. 
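Returning to the encoding techniques listed in Section II-A, the snippet below contrasts the two compared in this paper, label encoding and one-hot encoding, on a toy column. The feature name is one that appears later in the paper, but the rows and values are invented purely for illustration, and the pandas calls are a minimal stand-in rather than the authors' preprocessing code.

```python
# Label encoding vs. one-hot encoding on a toy column (values are invented).
import pandas as pd

df = pd.DataFrame({"MajorSubsystemVersion": ["4", "5", "6", "5", "4"]})

# Label Encoding: one integer per distinct value, a single column is kept.
df["MajorSubsystemVersion_LE"] = df["MajorSubsystemVersion"].astype("category").cat.codes

# One-Hot Encoding: one binary column per distinct value.
ohe = pd.get_dummies(df["MajorSubsystemVersion"], prefix="MajorSubsystemVersion", dtype=int)

print(pd.concat([df, ohe], axis=1))
```

Label encoding keeps a single integer column, while one-hot encoding expands every distinct value into its own binary column, which is what later allows explanation scores to be attributed to individual values.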
Several types of systems have been built to detect and capture malware such as Intrusion detection systems, antiviruses and firewalls, and these systems keep getting smarter thanks to the combined shared knowledge of the cyber security community and the rapid advancement of technology. Current Malware detection systems use Machine Learning and Deep Learning to detect anomalies in files and network packets to protect the systems they're installed on. Since Machine learning has been known for its fantastic classification capabilities, more and more complex architectures and models are being tested and deployed to the current market. ## III Implementation and experimental results ### _Dataset_ For this project, we found a Malware classification dataset from the 2015 Microsoft Malware Classification Challenge [23]. The public variant we managed to download contains 19611 rows and 78 features. Each row represents a single file. The dataset is imbalanced as there are 14599 malware files and 5012 non-malware files, so 3 times as much malware. The dataset has no missing data and all features are numerical aside from the "Name" one. ### _Preprocessing_ The "Name" feature has been modified by the competition organizers to include "virus" if the file is malware and thus be removed since it does not represent real-life data. We do not apply any other preprocessing on the data aside from feature encoding. In this work, we apply two encoding techniques to all the features: * **Label Encoding:** Each feature value is represented by a unique integer. * **One Hot Encoding:** Each feature value becomes a separate binary column where 1 means the file's value of that feature is the column name, and 0 if not. This allows for more precise knowledge of what went wrong. ### _Machine Learning Modeling_ For training, we choose XGBoost [24, 25] as our base model and train it using its default parameters, namely 100 estimators, a max depth of 5 and a learning rate of 0.1. We use the free Google Colab coding environment which offers a single sever with 12.7GB of RAM and a single NVIDIA T4 GPU with 15GB of GPU RAM. To evaluate our model, We use four popular metrics: Accuracy, Precision, Recall and F1. In a nutshell, accuracy measures the overall correctness of the model's predictions by calculating the proportion of correctly classified instances out of the total number of instances. Precision quantifies the proportion of true positive predictions out of all positive predictions made by the model, indicating the model's ability to correctly identify positive instances and minimize false positives. Recall measures the proportion of true positive predictions out of all actual positive instances in the dataset, representing the model's ability to capture positive instances and minimize false negatives. Finally, the F1 score combines precision and recall into a single metric by taking their harmonic mean, providing a balanced assessment of the model's accuracy and considering both false positives and false negatives. We showcase the performance results of XGBoost on the label encoded dataset in Table I. Although we did not preprocess our data, aside from encoding them differently, we managed to get pretty good results. We can therefore directly go to the explainability part. ### _Explainability_ For starters, we are going to take away non useful features because one hot encoding all 77 features created 85102 features, which kept crashing our environment due to insufficient RAM. 
To do that, we will use XGBoost's built in feature importance function to list each feature's impact on the model's decision making. In Table II, we extract the top 10 influential features and sort them from most to least important. According to Table II, the combined score of the 10 most important features are 0.9381 which means that they represent 93.81% of the model's decision making power. We, therefore, can just keep these 10 features and not use the rest. Doing so, we get the results shown in Table III. Comparing the results shown in Table III to those in Table I show that although we did lose a bit of performance, the drop is marginal (less than 1%). This means that if the One Hot encoding does provide us with more explainability power, it would be recommended to use. For the next par, we will use a dedicated Explainability Algorithm called Shapley Additive Explanations (SHAP) to dig deeper into the model's inner reasoning. #### Iii-D1 The SHAP algorithm SHAP [1, 26] was introduced in 2017 and provides a unified way of explaining the contribution of each input feature to the final prediction of the model, based on calculated values called Shapley values. A Shapley value is a measure of the marginal contribution of a feature to the prediction, averaged over all possible combinations of features in the dataset. To calculate the Shapley values for a particular prediction, SHAP applies a game-theoretic approach based on the concept of cooperative games. It considers each feature value as a "player" in the game and computes the contribution of each player to the final prediction. It then calculates the average contribution of each player across all possible coalitions of players, weighting each coalition by its probability of occurrence. This approach results in a set of Shapley values, which represent the relative importance of each feature to the prediction for a specific instance. These Shapley values can be used to generate an explanation for the prediction, showing which features had the greatest impact and how they affected the final outcome. The mathematical formula used by SHAP to generate the Shapley Values is presented in Figure 1. \[\phi_{i}(x)=\sum_{S\subseteq N\setminus\{i\}}\frac{|S|!(|N|-|S|-1)!}{|N|!}[f(x_ {S}\cup\{x_{i}\})-f(x_{S})] \tag{1}\] Once generated, SHAP uses these values to display plots for both global explanations and local explanations. #### Iii-B2 Global feature importance We use the SHAP algorithm to generate global summary plots that highlight the importance of each feature in the model's decision-making similarly to what we have done in Table II. Figures 1 and 2 display the importance plots for the Label Encoded dataset and the One Hot Encoded dataset, respectively. ## IV Discussion The main difference between these plots is that while we know what feature is more important with Label Encoding, we know what exact value of that feature is more important with One Hot Encoding. This means that we get more specificity as a feature's importance is the sum of the importance of its unique values. A top ranking feature in the Label Encoding model could have therefore reached its rank because of the importance of some of its values, but not the others. Using One Hot Encoding, we can single out what values exactly are the most relevant to further analyze. For example, the "MinorOperatingSystemVersion" feature in 1 has a mean SHAP value of almost 0.6, ranking fifth. 
However, in Figure 2, we can see that it is actually Version 3 of this feature that is really impactful, ranking first with a mean SHAP value of more than 1.2. Yet, version 1 of this feature only has a score of almost 0.2, and the rest of the versions are not in the top 10 features. So, using One Hot Encoding, we can single out files with Version 3 of "MinorOperatingSystemVersion" and further analyze them separately, in hopes of creating an easy rule for them or seeing what more we can learn. One drawback of this plot is that it is not easy to read when we have hundreds or thousands of features. In this example, we have 16087 features. It will be unproductive to use this plot to study feature importance. Instead, we can extract the raw SHAP values of all one hot encoded features, group them by original feature, and plot them side by side in another plot. We propose the plots in Figures 3 and 4, where we plot the importance of the different values of the "MajorSubsystemVersion" feature side by side, horizontally and vertically respectively. We chose this feature instead of the number 1 ranking "MinorOperatingSystemVersion" feature because it has considerably fewer distinct values, making it easier to plot, wasting less space and delivering the same message. These figures allow us to better visually grasp the relative importance of the different values of a feature. This way, we can add or remove values to and from a watchlist and also construct rules for particular values. We can now combine this with the confidence score of the model at inference to start a routine, run a check or apply a rule when the score doesn't hit the certainty threshold. Figure 1: Label Encoding global importance plot. Figure 2: One Hot Encoding global importance plot. At that point, we would start investigating individual instances, thus needing different explanations called local explanations. #### Iv-B1 Local feature importance Local explanations focus on individual instances, displaying to the user the step-by-step contribution of each feature to the model's decision. Using SHAP's local explanation plots, we get Figures 5 and 6, which display the local explanations of instances 2 and 3 respectively, first using Label Encoding and then One Hot Encoding. Again, the added refinement of the exact feature value gives us a lot more insight into what pushed the model towards a certain classification. Although the one hot encoding in this case may seem useless, since we already know what value of each feature the instance holds, it can instead be used as an assertion method to make sure there are no anomalies in the decision shifting. Finally, we can see that being trained on the individual values changes the base value and decision shift intensity of each feature, as the model has been trained on more fine-grained data and has had the chance to learn combinations that go together. These combinations in a tree-based model such as XGBoost can then be extracted and used as normal conditional IF rules, or analyzed to detect vulnerabilities that went under the radar. Even then, the feature encoding will have an impact on the generated rules. #### Iv-B2 IF-Rules IF-Rules are logical statements that express conditional relationships between input variables and output decisions and follow a simple structure: IF a specific condition or set of conditions is satisfied, THEN a particular action or decision should be taken. The conditions and actions are typically expressed using logical operators, such as "AND," "OR," and "NOT."
IF rules provide a transparent and interpretable way to encode domain knowledge and decision-making criteria into a system. Due to their nature, tree-based models can be seen as a collection of IF rules combined together to form a decision-making process. Each node in a decision tree represents an IF statement on a specific feature or attribute, and the tree structure guides the flow of decision-making based on these conditions. The splitting criteria at each node determine the conditions for branching into different paths, leading to subsequent nodes or leaves with specific outcomes or predictions. Since XGBoost is a tree-based model, we can extract the IF-Rules it learned during the training phase and use them to build logical pipelines or to study them. An example of the IF-Rules learned by our XGBoost model can be seen in Figures 7 and 8 for Label Encoding and One Hot Encoding respectively. While there is no apparent difference between the IF-Rules of the two encoding techniques, the difference lies in the metadata. In Table IV, we can see the difference in the rules' total text length in number of characters, as well as the explanation file size in KB. We can see that One Hot Encoding resulted in fewer characters, which means a smaller file size. The indirect consequence of this is less analysis time, less system complexity and less ambiguity, all of which directly benefit analysts and systems. Fig. 4: Vertically stacked bar plot of the "MajorSubsystemVersion" feature's distinct values' importance. Fig. 5: Local explanation for test observation number 2. Fig. 6: Local explanation for test observation number 3. Fig. 7: Example IF-Rules for Label Encoding. Fig. 8: Example IF-Rules for One Hot Encoding. ## V Conclusion In this paper, we studied the impact of feature encoding on the explainability of XAI algorithms. We took a malware classification dataset as an example, on which we trained an XGBoost model. We tried two different types of feature encoding, Label Encoding and One Hot Encoding, and found that there is a marginal performance loss when using OHE instead of LE. That loss was offset by the more detailed explanations that OHE made possible. We found that OHE allows us to go deeper into the details when searching for answers, both globally and locally. We also found that using OHE yields smaller explanation files and results in less time spent analyzing by human analysts. We think this is an interesting aspect to be taken into consideration when working with XAI, and it could be expanded by including more feature encoding techniques and more creative plots.
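To make the pipeline of Sections III–IV easier to reproduce in outline, the sketch below chains one-hot encoding, XGBoost training, SHAP attribution aggregated back to the parent feature, and extraction of the learned IF-rules, on synthetic data. The dataset, feature values and outcome here are fabricated stand-ins, the hyperparameters simply mirror the defaults quoted in Section III-C, and the calls shown are the standard pandas/xgboost/shap APIs rather than the authors' exact code.

```python
# Synthetic end-to-end sketch: one-hot encoding -> XGBoost -> SHAP values grouped
# back to the parent feature -> dumped IF-rules. Data and labels are fabricated.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 2000
raw = pd.DataFrame({
    "MinorOperatingSystemVersion": rng.choice(["0", "1", "3"], size=n),
    "MajorSubsystemVersion": rng.choice(["4", "5", "6"], size=n),
})
# Make the label depend mostly on a single value, mimicking the paper's finding.
y = ((raw["MinorOperatingSystemVersion"] == "3") | (rng.random(n) < 0.1)).astype(int)

X = pd.get_dummies(raw, dtype=int)  # one-hot columns named "<feature>_<value>"
model = xgb.XGBClassifier(n_estimators=100, max_depth=5, learning_rate=0.1)
model.fit(X, y)

# Mean |SHAP| per one-hot column, then aggregated per original feature
# (the prefix split assumes original feature names contain no underscore).
shap_values = shap.TreeExplainer(model).shap_values(X)
per_value = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
per_feature = per_value.groupby(per_value.index.str.split("_").str[0]).sum()
print(per_value.sort_values(ascending=False).head())
print(per_feature.sort_values(ascending=False))

# Each boosted tree is a nested set of IF/ELSE splits; dump the first one as text.
print(model.get_booster().get_dump(dump_format="text")[0])
```

Grouping the per-column |SHAP| scores by the prefix before the underscore recovers a per-feature importance comparable to the label-encoded model, while the per-column scores give the value-level detail discussed in Section IV.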
2302.09828
Hyperbolicity of the base of an admissible family of log canonical stable minimal models
We investigate the stratified hyperbolicity properties of Birkar's moduli stack of log canonical (lc) stable minimal models. The main technical result is a construction of Viehweg-Zuo's system of Higgs sheaves associated with an admissible family of lc stable minimal models, using the theory of degenerations of Hodge structure and non-abelian Hodge theory.
Junchao Shentu, Chen Zhao
2023-02-20T08:27:28Z
http://arxiv.org/abs/2302.09828v3
# Hyperbolicity of the base of an admissible family of log canonical stable minimal models ###### Abstract. We investigate the stratified hyperbolicity properties of Birkar's moduli stack of log canonical (lc) stable minimal models. The main technical result is a construction of Viehweg-Zuo's system of Higgs sheaves associated with an admissible family of lc stable minimal models, using the theory of degenerations of Hodge structure and non-abelian Hodge theory. ## 1. Introduction The construction of compact moduli spaces of varieties is a fundamental problem in algebraic geometry. Since the seminal work of Deligne-Mumford [10] on the moduli of stable curves, a concerted effort has been made to generalize the construction of compact moduli spaces to higher dimensional varieties, with the aim of providing a framework for studying families of algebraic varieties with prescribed properties. These efforts have led to the development of a rich and diverse theory of moduli spaces, which has found applications in a wide range of areas in algebraic geometry and related fields. Known examples include the meaningful compactifications of the moduli spaces of polarized abelian varieties [2], plane curves [15], manifolds of general type [22, 23, 24], polarized Calabi-Yau manifolds [26] and \(K\)-polystable Fano manifolds [64]. Given \(d\in\mathbb{N},c\in\mathbb{Q}^{\geq 0}\), a finite set \(\Gamma\subset\mathbb{Q}^{>0}\) and \(\sigma\in\mathbb{Q}[t]\), Birkar [5] introduced the notion of a \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal model as a triple \((X,B),A\), which consists of a variety \(X\) over \(\operatorname{Spec}(\mathbb{C})\) and \(\mathbb{Q}\)-divisors \(A\geq 0\), \(B\geq 0\) on \(X\) satisfying the following conditions: * \(\dim X=d\), \((X,B)\) is an slc projective pair, \(K_{X}+B\) is semi-ample, * the coefficients of \(A\) and \(B\) are in \(c\mathbb{Z}^{\geq 0}\), * \((X,B+tA)\) is slc, \(K_{X}+B+tA\) is ample for some \(t>0\), * \(\operatorname{vol}(K_{X}+B+tA)=\sigma(t)\) for \(0\leq t\ll 1\), and * \(\operatorname{vol}(A|_{F})\in\Gamma\) where \(F\) is any general fiber of the fibration \(f:X\to Z\) determined by \(K_{X}+B\). The moduli stack \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) of \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal models (see [5] or SS4.1) admits a projective good coarse moduli space ([5, Theorem 1.14]) and provides a meaningful compactification of the moduli of birational equivalence classes of projective manifolds of an arbitrary Kodaira dimension. In the present article, we study the global geometry of the open substack \(\mathscr{M}_{\operatorname{lc},[0,1)}(d,\Phi_{c},\Gamma,\sigma)\) of \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\), which parameterizes the lc stable minimal models \((X,B),A\) where the coefficients of \(B\) lie in \([0,1)\). These objects serve as canonical models of projective klt pairs. In recent years, the hyperbolicity properties of the moduli stack of varieties have attracted many attentions. The research in this area is pioneered by the works of Parsin [37], Arakelov [3] and a series of works of ## 1. Introduction Let \(f:X\to\mathbb{P}^{1}\) be a \(\mathbb{P}^{1}\)-bundle of \(f\). Let \(f:X\to\mathbb{P}^{1}\) be a Lefschetz pencil with \(S\subset\mathbb{P}^{1}\) the set of its critical values, such that the general fibers of \(f\) are canonically polarized \(d\)-folds with \(v\) their volumes. Then \(f:(X,0),0\to\mathbb{P}^{1}\) is a family of \((d,\Phi_{0},\{1\},v)\)-lc stable minimal models. 
The _relative_\(\mathbb{P}^{1}\)-bundle of \(f\) is a family of elliptic curves with \(\mathbb{P}^{1}\setminus S\), where \(\mathbb{P}^{1}\setminus S\) is hyperbolic (\(\#(S)\geq 3\) due to [56]). The _relative_\(\mathbb{P}^{1}\)-bundle of \(f\) is a family of \((d,\Phi_{0},\{1\},v)\)-lc stable minimal models. **Example 1.2** (Log smooth families of projective pairs of log general type).: Let \(f:(X,B)\to S\) be a log smooth family of projective klt pairs of log general type. Assume that \(\dim X_{s}=d\), \(\operatorname{vol}(K_{X_{s}}+B_{s})=v\) for each \(s\in S\), and the coefficients of \(B\) lie in \(c\mathbb{Z}^{\geq 0}\) for some \(c\in\mathbb{Q}^{\geq 0}\). Then the relative lc model \((X^{\operatorname{can}},B^{\operatorname{can}}),0\to S\) (c.f. [6]) is an admissible family of \((d,\Phi_{c},\{1\},v)\)-lc stable minimal models (see [60, Page 721]). Furthermore, \(f\) induces a morphism \(S\to\mathscr{M}_{\operatorname{lc}}(d,\Phi_{c},\{1\},v)\). Theorem 1.1 yields the following corollary. **Corollary 1.3**.: _Fix \(d\in\mathbb{N},c\in\mathbb{Q}^{\geq 0}\) and \(v\in\mathbb{Q}^{>0}\). Let \(f^{o}:(X^{o},B^{o})\to S^{o}\) be a log smooth family of projective pairs of general type over a smooth quasi-projective variety \(S^{o}\) such that \(\dim X_{s}=d\), \(\operatorname{vol}(K_{X_{s}}+B_{s})=v\) for every \(s\in S^{o}\) and the coefficients of \(B\) lie in \(c\mathbb{Z}^{\geq 0}\cap[0,1)\). Let \(\xi^{o}:S^{o}\to\mathscr{M}_{\operatorname{lc}}(d,\Phi_{c},\{1\},v)\) be the map determined by \(f^{o}\). Then the following results hold._ * _Let_ \(S\) _be a projective variety containing_ \(S^{o}\) _as a dense Zariski open subset. If_ \(\xi^{o}\) _is quasi-finite, then_ \((S,S\backslash S^{o})\) _is a Picard pair._ * _If_ \(\xi^{o}\) _is quasi-finite, then_ \(S^{o}\) _is Borel hyperbolic and Brody hyperbolic._ * _If_ \(\xi^{o}\) _is generically finite, then_ \(S^{o}\) _is of log general type and Kobayashi hyperbolic modulo a proper Zariski closed subset._ Among the results above, the Viehweg hyperbolicity, the Brody hyperbolicity and the pseudo Kobayashi hyperbolicity have been proved, mainly due to [39, 40, 59, 60]. **Example 1.4** (Families of stable Calabi-Yau pairs).: A lc stable minimal model \((X,B),A\) is a stable Calabi-Yau pair if \(K_{X}+B\sim_{\mathbb{Q}}0\). The base of an admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable Calabi-Yau pairs satisfies the hyperbolicity properties proposed in Theorem 1.1. **Example 1.5** (Families of stable Fano pairs).: A lc stable minimal model \((X,B),A\) is a stable Fano pair if \((X,A+B),A\) is a stable Calabi-Yau pair.
The base of an admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-stable Fano pairs satisfies the hyperbolicity properties proposed in Theorem 1.1. Many efforts have been made to the hyperbolicity properties of various moduli spaces of polarized projective manifolds. For an incomplete list see [11, 12, 39, 40, 59, 60] and the references therein. The _Viehweg hyperbolicity_ in Theorem 1.1 has been studied by numerous scholars, with a non-exhaustive list of works including but not limited to [8, 19, 20, 21, 27, 28, 29, 30, 35, 38, 39, 51, 56, 60]. The Viehweg hyperbolicity, in the case of families of canonically polarized manifolds (known as the Viehweg hyperbolicity conjecture), was first proved by Campana-Paun [8], based on the so-called Viehweg-Zuo sheaves constructed in [59]. By using Saito's theory of Hodge modules [44], Popa-Schnell [39] extended the construction of Viehweg-Zuo sheaves to base spaces of an arbitrary family of projective varieties with a geometric generic fiber that admits a good minimal model, and proved the relevant hyperbolicity property for these families. Popa-Schnell's result was further generalized by Wei-Wu [60] to log smooth families of pairs of log general type. There are also other admissible conditions to ensure the relevant hyperbolicity results, see [17, 36] for example. The _big Picard theorem_ can be traced back to the classical big Picard theorem, which states that any holomorphic map from the punctured disk \(\Delta^{*}\) into \(\mathbb{P}^{1}\) that omits three points can be extended to a holomorphic map \(\Delta\to\mathbb{P}^{1}\). In a recent work by Deng-Lu-Sun-Zuo [12], it has been confirmed that the big Picard theorem holds for the base of a maximal variational family of good minimal manifolds. The _Borel hyperbolicity_ is a formal consequence of the big Picard theorem (see [12, Corollary C] or Proposition 5.5). The _pseudo Kobayashi hyperbolicity_, in the case of family of curves of genus \(g>1\), was established by Royden [43] and Wolpert [63]. To-Yeung [52] made the first breakthrough on higher dimensional families, demonstrating that the base manifold of any effectively parametrized family of canonically polarized manifolds is Kobayashi hyperbolic. Further relevant works on other families can be found in Berndtsson-Paun-Wang [4], Schumacher [46], Deng [11] and To-Yeung [53, 54]. The _Brody hyperbolicity_, in the case of families of canonically polarized manifolds, was proved by Viehweg-Zuo [59] using their construction of systems of Higgs sheaves. It was generalized by Popa-Taji-Wu [40] to families of general type minimal manifolds, using similar constructions as in [39]. These results were further generalized by Deng [11] to families of good minimal manifolds, answering a question by Viehweg-Zuo [59, Question 0.1]. Wei-Wu [60] studied the Brody hyperbolicity of a log smooth family of pairs of log general type. Due to the works [8, 11, 12, 59], the five hyperbolicity results follow from the construction of a certain system of Higgs sheaves associated with the relevant family. The main contribution of the present article is the construction of the Viehweg-Zuo type system of Higgs sheaves associated with an admissible family of lc stable minimal models. 
**Theorem 1.6** (=Theorem 4.7).: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be an admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a generically finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Let \(S\) be a smooth projective variety containing \(S^{o}\) as a Zariski open subset such that \(D:=S\backslash S^{o}\) is a (reduced) simple normal crossing divisor and \(\xi^{o}\) extends to a morphism \(\xi:S\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\)2. Let \(\mathscr{L}\) be a line bundle on \(S\). Then there exist the following data._ Footnote 2: We do not require \(\xi\) to have a moduli interpretation at the boundary \(D\). 1. _A projective birational morphism_ \(\pi:S^{\prime}\to S\) _such that_ \(S^{\prime}\) _is smooth,_ \(\pi^{-1}(D)\) _is a simple normal crossing divisor and_ \(\pi\) _is a composition of smooth blow-ups._ 2. _A (possibly non-reduced) effective exceptional divisor_ \(E\) _of_ \(\pi\) _such that_ \(E\cup\pi^{-1}(D)\) _has a simple normal crossing support._ 3. \(A\) \(\mathbb{Q}\)_-polarized variation of Hodge structure of weight_ \(w>0\) _on_ \(S^{\prime}\backslash(E\cup\pi^{-1}(D))\) _with_ \((H=\bigoplus_{p+q=w}H^{p,q},\theta,h)\) _its associated Higgs bundle by taking the total graded quotients with respect to the Hodge filtration. Here_ \(h\) _is the Hodge metric._ _These data satisfy the following conditions._ 1. _There is a coherent ideal sheaf_ \(I_{Z}\) _on_ \(S\) _whose co-support_ \(Z\) _is contained in_ \(D\) _and_ \(\operatorname{codim}_{S}(Z)\geq 2\)_, and a natural inclusion_ \(\mathscr{L}\otimes I_{Z}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w,0}\right)\)_._ 2. _Let_ \((\bigoplus_{p=0}^{w}L^{p},\theta)\) _be the log Higgs subsheaf generated by_ \(L^{0}:=\mathscr{L}\otimes I_{Z}\)_, where_ \[L^{p}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w-p,p}\right).\] _Then the Higgs field_ \[\theta:L^{p}|_{S\backslash(D\cup\pi(E))}\to L^{p+1}|_{S\backslash(D\cup\pi(E)) }\otimes\Omega_{S\backslash(D\cup\pi(E))}\] _is holomorphic over_ \(S\backslash D\) _for each_ \(0\leq p<n\)_, that is,_ \[\theta(L^{p})\subset L^{p+1}\otimes\Omega_{S}(\log D).\] Here \({}_{<\pi^{-1}(D)+E}H^{p,q}\) is the prolongation of the Hodge bundle \(H^{p,q}\) in the sense of Simpson [48] and Mochizuki [33] (see SS2.2), consisting of the meromorphic sections with prescribed poles. Theorem 1.6 is proved by extending the original construction of Viehweg-Zuo [59] by using the theory of degenerations of Hodge structure and non-abelian Hodge theory. Our construction is also valid for Kahler morphisms (see SS3). The present article is organized as follows. In SS2 we recall the theory of degenerations and prolongations of a variation of Hodge structure. In SS3 we investigate the analytic prolongations of Viehweg-Zuo Higgs sheaves (Theorem 3.1). SS4 is the review of the basic notions and constructions of Birkar's moduli space of stable minimal models. Theorem 1.6 is proved in SS4.4. SS5 is devoted to the proof of Theorem 1.1 and several consequences. **Notations:** * Let \(F\) be a torsion free coherent sheaf on a smooth variety \(X\). Let \(F^{\vee}:=\mathscr{H}om_{\mathscr{O}_{X}}(F,\mathscr{O}_{X})\) so that \(F^{\vee\vee}\) is the reflexive hull of \(F\). We use \(\det(F)\) to denote the reflexive hull of \(\wedge^{\operatorname{rank}(F)}F\). 
* We define a functorial desingularization of \((X,Z)\), where \(X\) is a complex space and \(Z\subset X\) is a closed analytic subset, in the sense of Wlodarczyk [61, 62]. Specifically, this desingularization is a projective bimeromorphic morphism \(\pi:X^{\prime}\to X\) from a complex manifold \(X^{\prime}\), satisfying the conditions that \(\pi^{-1}(Z)\), \(\pi^{-1}(X_{\operatorname{sing}})\), and \(\pi^{-1}(Z)\cup\pi^{-1}(X_{\operatorname{sing}})\) are simple normal crossing divisors on \(X^{\prime}\), and \(\pi\) is biholomorphic over the loci where \((X,Z)\) is a log smooth pair. * \(\Delta:=\{t\in\mathbb{C}\mid|t|<1\}\) denotes the unit disck in \(\mathbb{C}\) and \(\Delta^{*}:=\Delta\setminus\{0\}\). * Let \(f:X\to S\) be a morphism between algebraic varieties. The fiber \(f^{-1}\{s\}\) over the point \(s\in S\) is denoted by \(X_{s}\). ## 2. Analytic prolongations of variations of Hodge structure ### Norm estimates for the Hodge metric Let \(\mathbb{V}=(\mathcal{V},\nabla,\mathcal{F}^{\bullet},Q)\) be an \(\mathbb{R}\)-polarized variation of Hodge structure over \((\Delta^{*})^{n}\times\Delta^{m}\), together with the Hodge metric \(h_{Q}\), where \((\mathcal{V},\nabla)\) is a flat connection, \(\mathcal{F}^{\bullet}\) is the Hodge filtration and \(Q\) is a real polarization. Let us fix the standard coordinates \(s_{1},\dots,s_{n},w_{1},\dots,w_{m}\) on \((\Delta^{*})^{n}\times\Delta^{m}\). Let \(D_{i}:=\{s_{i}=0\}\subset\Delta^{n+m}\) for every \(i=1,\dots,n\). Let \(N_{i}\) be the unipotent part of \(\operatorname{Res}_{D_{i}}\nabla\) and let \[p:\mathbb{H}^{n}\times\Delta^{m}\to(\Delta^{*})^{n}\times\Delta^{m},\] \[(z_{1},\dots,z_{n},w_{1},\dots,w_{m})\mapsto(e^{2\pi\sqrt{-1}z_{1}},\dots,e^ {2\pi\sqrt{-1}z_{n}},w_{1},\dots,w_{m})\] be the universal covering. Let \(W^{(1)}=W(N_{1}),\dots,W^{(n)}=W(N_{1}+\dots+N_{n})\) be the monodromy weight filtrations (centered at \(0\)) on \(V:=\Gamma(\mathbb{H}^{n}\times\Delta^{m},p^{*}\mathcal{V})^{p^{*}\nabla}\). The following important norm estimates for flat sections are due to Cattani-Kaplan-Schmid [9, Theorem 5.21] and Mochizuki [33, Part 3, Chapter 13]. **Theorem 2.1**.: _For any \(0\neq v\in\operatorname{Gr}_{l_{n}}^{W^{(n)}}\cdots\operatorname{Gr}_{l_{1}}^{ W^{(1)}}V\), one has_ \[|v|_{h_{Q}}^{2}\sim\left(\frac{\log|s_{1}|}{\log|s_{2}|}\right)^{l_{1}}\cdots \left(-\log|s_{n}|\right)^{l_{n}}\] _on any region of the form_ \[\left\{(s_{1},\dots,s_{n},w_{1},\dots,w_{m})\in(\Delta^{*})^{n}\times\Delta^{m }\middle|\frac{\log|s_{1}|}{\log|s_{2}|}>\epsilon,\dots,-\log|s_{n}|>\epsilon,(w_ {1},\dots,w_{m})\in K\right\}\] _for any \(\epsilon>0\) and \(K\subset\Delta^{m}\) compact._ The rest of this part is devoted to the norm estimates of \(h_{Q}\) on \(S(\mathbb{V})\), where \(S(\mathbb{V})\) denotes \(\mathcal{F}^{\max\{p|\mathcal{F}^{p}\neq 0\}}\). Let \(\mathcal{V}_{-1}\) denote Deligne's canonical extension of \((\mathcal{V},\nabla)\) whose real parts of the eigenvalues of the residue maps lie in \((-1,0]\). By the nilpotent orbit theorem [9]\(j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1}\) is a subbundle of \(\mathcal{V}_{-1}\). **Lemma 2.2**.: _Assume that \(n=1\). Then \(W_{-1}(N_{1})\cap\big{(}j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1}\big{)}|_{ \mathbf{0}}=0\)._ Proof.: Assume that \(W_{-1}(N_{1})\cap\big{(}j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1}\big{)}|_{ \mathbf{0}}\neq 0\) and let \(k\) be the weight of \(\mathbb{V}\). Let \(l=\max\{l\ |\ W_{-l}(N_{1})\cap\big{(}j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1} \big{)}|_{\mathbf{0}}\neq 0\}\). 
Then \(l\geq 1\). By [45, 6.16], the filtration \(j_{*}\mathcal{F}^{\bullet}\cap\mathcal{V}_{-1}\) induces a pure Hodge structure of weight \(m+k\) on \(W_{m}(N_{1})/W_{m-1}(N_{1})\). Moreover, \[N^{l}:W_{l}(N_{1})/W_{l-1}(N_{1})\to W_{-l}(N_{1})/W_{-l-1}(N_{1}) \tag{2.1}\] is an isomorphism of type \((-l,-l)\). Let \(p=\max\{i\ |\ \mathcal{F}^{i}\neq 0\}\). By the definition of \(l\), any nonzero element \(\alpha\in W_{-l}(N_{1})\cap\big{(}j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1} \big{)}|_{\mathbf{0}}\) induces a nonzero \([\alpha]\in W_{-l}(N_{1})/W_{-l-1}(N_{1})\) of Hodge type \((p,k-l-p)\). Since (2.1) is an isomorphism, there is \(\beta\in W_{l}(N_{1})/W_{l-1}(N_{1})\) of Hodge type \((p+l,k-p)\) such that \(N^{l}(\beta)=[\alpha]\). However, \(\beta=0\) since \(\mathcal{F}^{p+l}=0\). This contradicts to the fact that \([\alpha]\neq 0\). Consequently, \(W_{-1}(N_{1})\cap\big{(}j_{*}S(\mathbb{V})\cap\mathcal{V}_{-1}\big{)}_{ \mathbf{0}}\) must be zero. Let \(T_{i}\) denote the local monodromy operator of \(\mathbb{V}\) around \(D_{i}\). Since \(T_{1},\ldots,T_{n}\) are pairwise commutative, there is a finite decomposition \[\mathcal{V}_{-1}|_{\mathbf{0}}=\bigoplus_{-1<\alpha_{1},\ldots,\alpha_{n}\leq 0 }\mathbb{V}_{\alpha_{1},\ldots,\alpha_{n}}\] such that \((T_{i}-e^{2\pi\sqrt{-1}\alpha_{i}}\mathrm{Id})\) is unipotent on \(\mathbb{V}_{\alpha_{1},\ldots,\alpha_{n}}\) for each \(i=1,\ldots,n\). Let \[v_{1},\ldots,v_{N}\in(\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V}))|_{\mathbf{0}} \cap\bigcup_{-1<\alpha_{1},\ldots,\alpha_{n}\leq 0}\mathbb{V}_{\alpha_{1}, \ldots,\alpha_{n}}\] be an orthogonal basis of \((\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V}))|_{\mathbf{0}}\simeq\Gamma(\mathbb{H }^{n}\times\Delta^{m},p^{*}S(\mathbb{V}))^{p^{*}\nabla}\). Then \(\widetilde{v_{1}},\ldots,\widetilde{v_{N}}\) that are determined by \[\widetilde{v_{j}}:=\exp\left(\sum_{i=1}^{n}\log s_{i}(\alpha_{i}\mathrm{Id}+N_ {i})\right)v_{j}\ \text{if}\ v_{j}\in\mathbb{V}_{\alpha_{1},\ldots,\alpha_{n}},\quad\forall j=1, \ldots,N \tag{2.2}\] form a frame of \(\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\). We always use the notation \(\alpha_{D_{i}}(\widetilde{v_{j}})\) instead of \(\alpha_{i}\) in (2.2). By (2.2) we see that \[|\widetilde{v_{j}}|_{h_{Q}}^{2} \sim\left|\prod_{i=1}^{n}s_{i}^{\alpha_{D_{i}}(\widetilde{v_{j}})} \mathrm{exp}\left(\sum_{i=1}^{n}N_{i}\log s_{i}\right)v_{j}\right|_{h_{Q}}^{2}\] \[\sim|v_{j}|_{h_{Q}}^{2}\prod_{i=1}^{n}|s_{i}|^{2\alpha_{D_{i}}( \widetilde{v_{j}})},\quad j=1,\ldots,N\] where \(\alpha_{D_{i}}(\widetilde{v_{j}})\in(-1,0]\), \(\forall i=1,\ldots,n\). It follows from Theorem 2.1 and Lemma 2.2 that \[|v_{j}|_{h_{Q}}^{2}\sim\left(\frac{\log|s_{1}|}{\log|s_{2}|}\right)^{l_{1}} \cdots(-\log|s_{n}|)^{l_{n}}\quad\text{with}\quad 0\leq l_{1}\leq l_{2}\leq \cdots\leq l_{n},\] on any region of the form \[\left\{(s_{1},\ldots,s_{n},w_{1},\ldots,w_{m})\in(\Delta^{*})^{n}\times\Delta^{m} \bigg{|}\frac{\log|s_{1}|}{\log|s_{2}|}>\epsilon,\ldots,-\log|s_{n}|>\epsilon,(w _{1},\ldots,w_{m})\in K\right\}\] for any \(\epsilon>0\) and \(K\subset\Delta^{m}\) compact. Therefore, we know that \[1\lesssim|v_{j}|\lesssim|s_{1}\cdots s_{n}|^{-\epsilon},\quad\forall\epsilon>0.\] **Definition 2.3**.: (Zucker [65, page 433]) Let \((E,h)\) be a vector bundle with a possibly singular hermitian metric \(h\) on a hermitian manifold \((X,ds_{0}^{2})\). 
A holomorphic local frame \((v_{1},\ldots,v_{N})\) of \(E\) is called \(L^{2}\)-adapted if, for every set of measurable functions \(\{f_{1},\ldots,f_{N}\}\), \(\sum_{i=1}^{N}f_{i}v_{i}\) is locally square integrable if and only if \(f_{i}v_{i}\) is locally square integrable for all \(i\). **Lemma 2.4**.: _The local frame \((\widetilde{v_{1}},\ldots,\widetilde{v_{N}})\) is \(L^{2}\)-adapted._ Proof.: If \[\sum_{j=1}^{N}f_{j}\widetilde{v_{j}}=\exp\left(\sum_{i=1}^{n}N_{i}\log s_{i} \right)\left(\sum_{j=1}^{N}f_{j}\prod_{i=1}^{n}|s_{i}|^{\alpha_{D_{i}}( \widetilde{v_{j}})}v_{j}\right)\] is locally square integrable, then \[\sum_{j=1}^{N}f_{j}\prod_{i=1}^{n}|s_{i}|^{\alpha_{D_{i}}(\widetilde{v_{j}})}v _{j}\] is locally square integrable because the entries of the matrix \(\exp\left(-\sum_{i=1}^{n}N_{i}\log s_{i}\right)\) are \(L^{\infty}\)-bounded. Since \((v_{1},\ldots,v_{N})\) is an orthogonal basis, \(|f_{j}\widetilde{v_{j}}|_{h_{Q}}\sim\prod_{i=1}^{n}|s_{i}|^{\alpha_{D_{i}}( \widetilde{v_{j}})}|f_{j}v_{j}|_{h_{Q}}\) is locally square integrable for all \(j\). In conclusion, we obtain the following proposition. **Proposition 2.5**.: _Let \((X,ds_{0}^{2})\) be a hermitian manifold and \(D\) a normal crossing divisor on \(X\). Let \(\mathbb{V}\) be an \(\mathbb{R}\)-polarized variation of Hodge structure on \(X^{o}:=X\backslash D\). Then there is an \(L^{2}\)-adapted holomorphic local frame \((\widetilde{v_{1}},\ldots,\widetilde{v_{N}})\) of \(\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\) at every point \(x\in D\). Let \(z_{1},\cdots,z_{n}\) be holomorphic local coordinates on \(X\) so that \(D=\{z_{1}\cdots z_{r}=0\}\). Then there are constants \(\alpha_{D_{i}}(\widetilde{v_{j}})\in(-1,0]\), \(i=1,\ldots,r\), \(j=1,\ldots,N\) and positive real functions \(\lambda_{j}\in C^{\infty}(X\backslash D)\), \(j=1,\ldots,N\) such that_ \[|\widetilde{v_{j}}|^{2}\sim\lambda_{j}\prod_{i=1}^{r}|z_{i}|^{2\alpha_{D_{i}} (\widetilde{v_{j}})},\quad\forall j=1,\ldots,N\] _and_ \[1\lesssim\lambda_{j}\lesssim|z_{1}\cdots z_{r}|^{-\epsilon},\quad\forall \epsilon>0,\quad\forall j=1,\ldots,N\] ### Prolongations of a variation of Hodge structure: log smooth case Let \(X\) be a complex manifold and \(D=\sum_{i=1}^{l}D_{i}\) a reduced simple normal crossing divisor on \(X\). Let \((E,h)\) be a holomorphic vector bundle on \(X\backslash D\) with a smooth hermitian metric \(h\). Let \(D_{1}=\sum_{i=1}^{l}a_{i}D_{i}\), \(D_{2}=\sum_{i=1}^{l}b_{i}D_{i}\) be \(\mathbb{R}\)-divisors. We denote \(D_{1}<(\leq)D_{2}\) if \(a_{i}<(\leq)b_{i}\) for all \(i\). **Definition 2.6** (Analytic prolongation).: ([32], Definition 4.2) Let \(A=\sum_{i=1}^{l}a_{i}D_{i}\) be an \(\mathbb{R}\)-divisor, let \(U\) be an open subset of \(X\), and let \(s\in\Gamma(U\backslash D,E)\) be a holomorphic section. We denote \((s)\leq-A\) if \(|s|_{h}=O(\prod_{k=1}^{r}|z_{k}|^{-a_{i_{k}}-\epsilon})\) for any positive number \(\epsilon\), where \(z_{1},\ldots,z_{n}\) are holomorphic local coordinates such that \(D=\{z_{1}\cdots z_{r}=0\}\) and \(D_{i_{k}}=\{z_{k}=0\}\) for each \(k=1,\ldots,r\). The \(\mathscr{O}_{X}\)-module \({}_{A}E\) is defined as \[\Gamma(U,{}_{A}E):=\{s\in\Gamma(U\backslash D,E)|(s)\leq-A\}\] for any open subset \(U\subset X\). Let \[{}_{<A}E:=\bigcup_{B<A}{}_{B}E\quad\text{and}\quad\text{Gr}_{A}E:={}_{A}E/_{< A}E.\] Let \(\mathbb{V}=(\mathcal{V},\nabla,\mathcal{F}^{\bullet},Q)\) be an \(\mathbb{R}\)-polarized variation of Hodge structure of weight \(w\) on \(X\backslash D\). 
Let \((H:=\text{Gr}_{\mathcal{F}^{\bullet}}\mathcal{V},\theta:=\text{Gr}_{\mathcal{ F}^{\bullet}}\nabla)\) denote the total graded quotient. Then \((H,\theta)\) is the Higgs bundle corresponding to \((\mathcal{V},\nabla)\) via Simpson's correspondence [47]. The Hodge metric \(h_{Q}\) associated with \(Q\) is a harmonic metric on \((H,\theta)\). The triple \((H,\theta,h_{Q})\) is a tame harmonic bundle in the sense of Simpson [48] and Mochizuki [33]. Notice that \((H,\theta)\) is a system of Hodge bundles ([49, SS4]) in the sense that \[H=\bigoplus_{p+q=w}H^{p,q},\quad H^{p,q}\simeq\mathcal{F}^{p}/\mathcal{F}^{p+ 1},\quad\theta(H^{p,q})\subset H^{p-1,q+1}\otimes\Omega_{X\backslash D}.\] According to Simpson [48, Theorem 3] and Mochizuki [34, Proposition 2.53], the set of prolongations forms a parabolic structure. **Theorem 2.7**.: _Let \(X\) be a complex manifold and \(D=\sum_{i=1}^{l}D_{i}\subset X\) a reduced simple normal crossing divisor. Let \((H=\oplus_{p+q=w}H^{p,q},\theta,h_{Q})\) be the system of Hodge bundles associated with an \(\mathbb{R}\)-polarized variation of Hodge structure of weight \(w\) on \(X\backslash D\). For each \(\mathbb{R}\)-divisor \(A\) supported on \(D\), \({}_{A}H\) is a locally free coherent sheaf satisfying the following conditions:_ * \({}_{A+\epsilon D_{i}}H={}_{A}H\) _for any_ \(i=1,\ldots,l\) _and any constant_ \(0<\epsilon\ll 1\)_,_ * \({}_{A+D_{i}}H={}_{A}H\otimes\mathscr{O}(-D_{i})\) _for every_ \(1\leq i\leq l\)_,_ * _the subset of_ \((a_{1},\ldots,a_{l})\in\mathbb{R}^{l}\) _such that_ \(\text{Gr}_{\sum_{i=1}^{l}a_{i}D_{i}}H\neq 0\) _is discrete, and_ * _the Higgs field_ \(\theta\) _has at most logarithmic poles along_ \(D\)_, that is,_ \(\theta\) _extends to_ \[{}_{A}H\to{}_{A}H\otimes\Omega_{X}(\log D).\] The proof of the following lemma is straightforward and thus omitted here. **Lemma 2.8**.: _Let \(f\) be a holomorphic function on \(\Delta^{*}:=\{z\in\mathbb{C}|0<|z|<1\}\) and \(a\in\mathbb{R}\). Then_ \[\int_{|z|\leq\frac{1}{2}}|f|^{2}|z|^{2a}dzd\bar{z}<\infty\] _if and only if \(v(f)+a>-1\). Here_ \[v(f):=\min\{l|f_{l}\neq 0\text{ in the Laurent expansion }f=\sum_{i\in\mathbb{Z}}f_{i}z^{i}\}.\] **Lemma 2.9**.: _Notations as above. Let \(S(\mathbb{V})=\mathcal{F}^{\max\{p|\mathcal{F}^{p}\neq 0\}}\). Then there is a natural isomorphism_ \[\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\simeq{}_{<D}S(\mathbb{V}),\] _where \(j:X\backslash D\to X\) is the immersion and \({}_{<D}S(\mathbb{V})\) is taken with respect to the Hodge metric \(h_{Q}\). Let \(U\subset X\) be an open subset. Then a holomorphic section \(s\in S(\mathbb{V})(U\backslash D)\) extends to a section in \({}_{<D}S(\mathbb{V})(U)\) if and only if the integration_ \[\int|s|^{2}_{h_{Q}}\mathrm{vol}_{ds^{2}}\] _is finite locally at every point of \(U\cap D\), where \(ds^{2}\) is a hermitian metric on \(X\)._ Proof.: It follows from Proposition 2.5 and Lemma 2.8 that \[\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\subset{}_{<D}S(\mathbb{V}).\] For the converse, let \(\widetilde{v_{1}},\ldots,\widetilde{v_{N}}\) be the \(L^{2}\)-adapted local frame of \(\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\) (as in Proposition 2.5) at some point \(x\in D\). Let \(\alpha=\sum_{j=1}^{N}f_{j}\widetilde{v_{j}}\in{}_{<D}S(\mathbb{V})\) where \(f_{1},\ldots,f_{N}\) are functions that are holomorphic outside \(D\). By Lemma 2.8, \(\alpha\) is locally square integrable at \(x\). Hence, all \(f_{j}\widetilde{v_{j}}\) are locally square integrable at \(x\). 
According to 2.5 and Lemma 2.8, it follows that the functions \(f_{1},\ldots,f_{N}\) are holomorphic in some neighborhood of \(x\). This proves \[{}_{<D}S(\mathbb{V})\subset\mathcal{V}_{-1}\cap j_{*}S(\mathbb{V})\] and the last claim of the lemma. ### Prolongations of a variation of Hodge structure: general case The analytic prolongation of a variation of Hodge structure on a general base is defined via desingularization. Let \(X\) be a complex manifold and \(Z\subset X\) a closed analytic subset. Let \(D\subset Z\) be the union of the irreducible components of \(Z\) whose codimension is one. Let \(\pi:\widetilde{X}\to X\) be a functorial desingularization of the pair \((X,Z)\) so that \(\widetilde{X}\) is smooth, \(\pi^{-1}(Z)\) is a simple normal crossing divisor on \(\widetilde{X}\) and \[\pi^{o}:=\pi|_{\widetilde{X}^{o}}:\widetilde{X}^{o}:=\pi^{-1}(X\backslash Z) \to X^{o}:=X\backslash Z\] is biholomorphic. Let \(\mathbb{V}=(\mathcal{V},\nabla,\mathcal{F}^{\bullet},Q)\) be an \(\mathbb{R}\)-polarized variation of Hodge structure of weight \(w\) on \(\widetilde{X}^{o}\) and \((H=\oplus_{p+q=w}H^{p,q},\theta,h_{Q})\) the corresponding Higgs bundle with the Hodge metric \(h_{Q}\). Let \(A\) be an \(\mathbb{R}\)-divisor supported on \(\pi^{-1}(Z)\). Then \(\pi_{*}({}_{A}H)\) is a torsion free coherent sheaf on \(X\) whose restriction on \(X^{o}\) is \((\pi^{o})^{-1*}(H)\). By abuse of notation we still denote \(\theta:=(\pi^{o})^{-1*}(\theta)\). \(\theta\) is a meromorphic Higgs field on \(\pi_{*}({}_{A}H)\) with poles along \(Z\). Let \(\mathrm{Cryt}(\pi)\subset X\) be the degenerate loci of \(\pi\). Since \(\pi\) is functorial, \(D\backslash\mathrm{Cryt}(\pi)\) is a simple normal crossing divisor on \(X\backslash\mathrm{Cryt}(\pi)\) and the exceptional loci \(\pi^{-1}(\mathrm{Cryt}(\pi))\) is a simple normal crossing divisor on \(\widetilde{X}\). \((\pi_{*}({}_{A}H),\theta)|_{X\backslash\mathrm{Cryt}(\pi)}\) is locally free and \(\theta\) admits at most log poles along \(D\backslash\mathrm{Cryt}(\pi)\). The following negativity result for \(\ker(\theta)\) generalizes [66]. The main idea of its proof is due to Brunebarbe [7]. **Proposition 2.10**.: _Notations as above. Assume that \(\mathrm{supp}(A)\) lies in the exceptional divisor \(\pi^{-1}(\mathrm{Cryt}(\pi))\). Let \(K\subset\pi_{*}({}_{A}H)\) be a coherent subsheaf such that \(\theta(K)=0\). Then \(K^{\vee}\) is weakly positive in the sense of Viehweg [55]._ Proof.: We follow the notion of singular hermitian metrics on torsion free coherent sheaves (as in [16, 41]). The Hodge metric \(h_{Q}\) defines a singular hermitian metric on the bundle \({}_{A}H\), with singularities along \(\pi^{-1}(Z)\). Since \(\pi^{o}\) is biholomorphic, we may regard \(h_{Q}\) as a singular hermitian metric on the torsion free coherent sheaf \(\pi_{*}(_{A}H)\). Let \(K^{o}:=K|_{X^{o}}\). By Griffiths' curvature formula \[\Theta_{h_{Q}}(H)+\theta\wedge\overline{\theta}+\overline{\theta}\wedge\theta=0,\] one knows that \[\Theta_{h_{Q}}(K^{o})=-\theta\wedge\overline{\theta}|_{K^{o}}+\overline{B}\wedge B\] is Griffiths semi-negative, where \(B\in A^{1,0}_{X^{o}}(K^{o},K^{o\perp})\) is the second fundamental class. We claim that the hermitian metric \(h_{Q}|_{X^{o}}\) extends to a singular hermitian metric on \(K\) with semi-negative curvatures. It suffices to prove that \(\log|s|_{h_{Q}}\) can be extended to a plurisubharmonic function on \(X\) for an arbitrary section \(s\in K\). 
Since \(\Theta_{h_{Q}}(K^{o})\) is Griffiths semi-negative, \(\log|s|_{h_{Q}}\) is a smooth plurisubharmonic function on \(X^{o}\). By Riemannian extension theorem and Hartogs extension theorem for plurisubharmonic functions [16, Lemma 12.4], it suffices to show that \(\log|s|_{h_{Q}}\) is locally bounded from above in codimension one. Let \(\operatorname{Cryt}(\pi)\subset X\) be the degenerate loci of \(\pi\), which is of codimension \(\geq 2\). Then \(D\backslash\operatorname{Cryt}(\pi)\) is a simple normal crossing divisor on \(X\backslash\operatorname{Cryt}(\pi)\). The assumption on \(A\) yields that \(\pi_{*}(_{A}H)|_{X\backslash\operatorname{Cryt}(\pi)}\simeq{\bf 0}((\pi^{o})^{-1*}H)\), where \({\bf 0}\) is the zero divisor on \(X\backslash\operatorname{Cryt}(\pi)\). Let \(x\) be a general point of a component \(D_{i}\) of \(D\backslash\operatorname{Cryt}(\pi)\). Let \(N_{i}\) be the monodromy operator around \(D_{i}\) associated with the connection \(((\pi^{o})^{-1*}{\mathcal{V}},(\pi^{o})^{-1*}{\mathcal{V}})\) and let \(\{W_{k}\}_{k\in{\mathbb{Z}}}\) be the monodromy weight filtration determined by \(N_{i}\). Since \(\theta(s)=0\), one has \(s\in W_{0}\) according to [45, Corollary 6.7] (see also [7, Lemma 5.4]). Combining it with the fact that \(s\in{\bf 0}((\pi^{o})^{-1*}H)\), it follows from Simpson's norm estimate [48, page 721] that \(|s|_{h_{Q}}\) is locally bounded near \(x\). This implies the claim that \(h_{Q}\) extends (uniquely) to a singular hermitian metric on \(K\) with semi-negative curvature. Hence \(K^{\vee}\) is weakly positive in the sense of Viehweg by [42, Theorem 2.5.2]. ### Prolongations of a variation of Hodge structure of geometric origin Let \(f:Y\to X\) be a proper holomorphic morphism between complex manifolds and let \(n:=\dim X-\dim Y\). Let \(Z\subset X\) be a closed analytic subset such that \(f\) is a Kahler submersion over \(X^{o}:=X\backslash Z\). Let \(Y^{o}:=f^{-1}(X^{o})\) and \(f^{o}:=f|_{Y^{o}}:Y^{o}\to X^{o}\). Then \(R^{n}f^{o}_{*}({\mathbb{R}}_{Y^{o}})\) underlies an \({\mathbb{R}}\)-polarized variation of Hodge structure \({\mathbb{V}}^{n}_{f^{o}}=({\mathcal{V}}^{n},\nabla,{\mathcal{F}}^{\bullet},Q)\) of weight \(n\). Here \({\mathcal{V}}^{n}\simeq R^{n}f^{o}_{*}({\mathbb{R}}_{Y^{o}})\otimes_{{\mathbb{ R}}}\mathscr{O}_{X^{o}}\), \(\nabla\) is the Gauss-Manin connection, \({\mathcal{F}}^{p}\simeq R^{n}f^{o}_{*}(\Omega^{\geq p}_{Y^{o}/X^{o}})\) and \(Q\) is the \({\mathbb{R}}\)-polarization associated with a relative Kahler form. Let \(h_{Q}\) be the Hodge metric associated with \(Q\) and let \((H^{n}_{f^{o}}=\oplus_{p+q=n}H^{p,q}_{f^{o}},\theta)\) be the Higgs bundle associated with \({\mathbb{V}}^{n}_{f^{o}}\) where \(H^{p,q}_{f^{o}}\simeq R^{q}f^{o}_{*}(\Omega^{p}_{Y^{o}/X^{o}})\). Let \(\omega_{Y/X}:=\omega_{Y}\otimes f^{*}(\omega_{X}^{-1})\) be the relative dualizing sheaf. **Lemma 2.11**.: _Notations as above. If \(Z\) is a reduced simple normal crossing divisor, then there is an isomorphism_ \[f_{*}(\omega_{Y/X})\simeq{}_{<Z}H^{n,0}_{f^{o}}.\] Proof.: Let \(j:X^{o}\to X\) be the open immersion. It suffices to show that \({}_{<Z}H^{n,0}_{f^{o}}\otimes\omega_{X}=f_{*}(\omega_{Y})\) as subsheaves of \(j_{*}H^{n,0}_{f^{o}}\otimes\omega_{X}\). Let \(s\) be a local section of \(j_{*}H^{n,0}_{f^{o}}=j_{*}f^{o}_{*}(\omega_{Y^{o}/X^{o}})\). Let \(\phi=dz_{1}\wedge\cdots\wedge dz_{d}\) where \(z_{1},\ldots,z_{d}\) are holomorphic local coordinates on \(X\). 
According to Lemma 2.9, \(s\in{}_{<Z}H^{n,0}_{f^{o}}\) if and only if the integral \[\int_{X^{o}}|s|^{2}_{h_{Q}}\phi\wedge\overline{\phi}=\epsilon_{n}\int_{X^{o}} \left(\int_{f^{-1}\{x\}}s|_{f^{-1}\{x\}}\wedge\overline{s|_{f^{-1}\{x\}}} \right)\phi\wedge\overline{\phi}=\epsilon_{n}\int_{Y^{o}}(s\wedge f^{o*}(\phi) )\wedge\overline{s\wedge f^{o*}(\phi)}\] is finite locally at every point of \(Z\), where \(\epsilon_{n}=(-1)^{\frac{n(n-1)}{2}}(\sqrt{-1})^{n}\). The locally finiteness of the right handside is equivalent to that \(s\wedge f^{o*}(\phi)\) admits a holomorphic extension to \(Y\) (c.f. [18, Proposition 16]). This proves that \({}_{<Z}H^{n,0}_{f^{o}}\otimes\omega_{X}=f_{*}(\omega_{Y})\). Let us return to the general case. Consider the diagram (2.3) such that the following conditions hold. * \(\pi:X^{\prime}\to X\) is a desingularization of the pair \((X,Z)\). In particular, \(X^{\prime}\) is smooth, \(\pi^{-1}(Z)\) is a simple normal crossing divisor and \(\pi^{o}:=\pi|_{\pi^{-1}(X^{o})}:\pi^{-1}(X^{o})\to X^{o}\) is biholomorphic. * \(Y^{\prime}\) is a functorial desingularization of the main component of \(Y\times_{X}X^{\prime}\). In particular \(Y^{\prime}\to Y\times_{X}X^{\prime}\) is biholomorphic over \(f^{-1}(X^{o})\times_{X^{o}}\pi^{-1}(X^{o})\). Let \(\omega_{X^{\prime}}\simeq\pi^{*}\omega_{X}\otimes\mathscr{O}_{X^{\prime}}(E)\) for some exceptional divisor \(E\) of \(\pi\). We obtain the natural morphisms \[\pi^{*}(f_{*}(\omega_{Y/X}))\simeq\pi^{*}(f_{*}(\omega_{Y})\otimes\omega_{X}^{ -1})\to f_{*}^{\prime}(\omega_{Y^{\prime}})\otimes\omega_{X^{\prime}}^{-1} \otimes\mathscr{O}_{X^{\prime}}(E)\simeq f_{*}^{\prime}(\omega_{Y^{\prime}/X^ {\prime}})\otimes\mathscr{O}_{X^{\prime}}(E). \tag{2.4}\] Define \[f^{\prime o}:=f^{\prime}|_{\sigma^{-1}(f^{-1}(X^{o}))}:\sigma^{-1}(f^{-1}(X^{o }))\to\pi^{-1}(X^{o}),\] a proper Kahler submersion since \(\pi^{o}\) is biholomorphic. Let \(H^{n}_{f^{\prime o}}\) be the Higgs bundle associated with \(f^{\prime o}\). Lemma 2.11 yields that \[f_{*}^{\prime}(\omega_{Y^{\prime}/X^{\prime}})\simeq{}_{<\pi^{-1}(Z)_{\rm red }}H^{n,0}_{f^{\prime o}}.\] Combining it with (2.4), we obtain a generically injective morphism \[f_{*}(\omega_{Y/X})\to\pi_{*}({}_{<\pi^{-1}(Z)_{\rm red}}H^{n,0}_{f^{\prime o }}\otimes\mathscr{O}_{X^{\prime}}(E))\simeq\pi_{*}({}_{<\pi^{-1}(Z)_{\rm red }+E}H^{n,0}_{f^{\prime o}}).\] Since \(f_{*}(\omega_{Y/X})\) is torsion free, the above map must be injective. Thus we have concluded the following result. **Proposition 2.12**.: _Notations as above. Then there is an inclusion_ \[f_{*}(\omega_{Y/X})\subset\pi_{*}({}_{<\pi^{-1}(Z)_{\rm red}+E}H^{n,0}_{f^{ \prime o}}).\] ## 3. Analytic prolongations of Viehweg-Zuo Higgs sheaves In this section we generalize Viehweg-Zuo's construction of Higgs sheaves using analytic prolongations (Theorem 3.1). ### Setting Throughout this section let us fix a proper holomorphic morphism \(f:Y\to X\) between complex manifolds with \(n=\dim Y-\dim X\) the relative dimension. Assume that there is a simple normal crossing divisor \(D_{f}\subset X\) such that \(f^{o}:=f|_{Y^{o}}:Y^{o}\to X^{o}\) is a Kahler submersion where \(X^{o}:=X\backslash D_{f}\) and \(Y^{o}:=f^{-1}(X^{o})\). We fix a torsion free coherent sheaf \(L\) on \(X\) that is invertible on \(X^{o}\) (hence \(\operatorname{rank}(L)=1\)), and a nonzero morphism \[s_{L}:L^{\otimes k}\to f_{*}(\omega_{Y/X}^{\otimes k}) \tag{3.1}\] for some \(k\geq 1\). ### Viehweg-Zuo Higgs sheaves Notations as in SS3.1. 
Let \(L^{\vee\vee}\) be the reflexive hull of \(L\), with \(L\to L^{\vee\vee}\) the natural inclusion map. Since \(\operatorname{rank}(L^{\vee\vee})=1\), \(L^{\vee\vee}\) is an invertible sheaf. Since \(L\) is torsion free and is invertible on \(X^{o}\), \(\mathscr{I}_{T}:=L\otimes(L^{\vee\vee})^{-1}\subset\mathscr{O}_{X}\) is a coherent ideal sheaf whose co-support lies in a closed analytic subset \(T\subset D_{f}\) such that \(\operatorname{codim}_{X}(T)\geq 2\). Consider a diagram (3.2) of holomorphic maps between complex manifolds such that the following conditions hold. * \(\pi\) is a functorial desingularization of \((X,T,D_{f})\) in the sense of Wlodarczyk [62]. In particular, \(\widetilde{X}\) is a compact complex manifold, \(\pi\) is a projective morphism that is biholomorphic over \(X\backslash T\). \(\pi^{-1}(D_{f})\), \(E:=\pi^{-1}(T)_{\operatorname{red}}\) and \(\pi^{-1}(D_{f})\cup E\) are simple normal crossing divisors. * \(\widetilde{Y}\) is a functorial desingularization of the main component of \(Y\times_{X}\widetilde{X}\). In particular, \(\widetilde{Y}\to Y\times_{X}\widetilde{X}\) is biholomorphic over \(f^{-1}(X\backslash T)\times_{X\backslash T}\pi^{-1}(X\backslash T)\). Since \(\pi\) is biholomorphic on \(\widetilde{X}\backslash E\), there is a constant \(k_{0}\geq 0\) and a natural map \[\pi^{*}f_{*}(\omega_{Y/X}^{\otimes k})\otimes\mathscr{O}_{\widetilde{X}}(-k_ {0}kE)\to\widetilde{f}_{*}(\omega_{\widetilde{Y}/\widetilde{X}}^{\otimes k}).\] Taking (3.1) into account, we obtain a non-zero morphism \[\pi^{*}(L^{\vee\vee})^{\otimes k}\otimes\pi^{*}(\mathscr{I}_{T})^{\otimes k} \simeq\pi^{*}L^{\otimes k}\to\pi^{*}(f_{*}(\omega_{Y/X}^{\otimes k}))\to \widetilde{f}_{*}(\omega_{\widetilde{Y}/\widetilde{X}}^{\otimes k})\otimes \mathscr{O}_{\widetilde{X}}(k_{0}kE).\] Hence there is an effective divisor \(\widetilde{E}\), supported on \(E\), such that there is a nonzero map \[\pi^{*}(L^{\vee\vee})^{\otimes k}\otimes\mathscr{O}_{\widetilde{X}}(-k \widetilde{E})\to\widetilde{f}_{*}(\omega_{\widetilde{Y}/\widetilde{X}}^{ \otimes k}).\] Let \(\widetilde{L}:=\pi^{*}(L^{\vee\vee})\otimes\mathscr{O}_{\widetilde{X}}(- \widetilde{E})\) and \(L^{o}:=L|_{X^{o}}\). Let \(\pi^{o}:=\pi|_{\pi^{-1}(X^{o})}:\pi^{-1}(X^{o})\to X^{o}\). The arguments above show that there is a non-zero morphism \[s_{\widetilde{L}}:\widetilde{L}^{\otimes k}\to\widetilde{f}_{*}(\omega_{ \widetilde{Y}/\widetilde{X}}^{\otimes k}) \tag{3.3}\] and an isomorphism \[\pi^{o*}(L^{o})\simeq\widetilde{L}|_{\pi^{-1}(X^{o})} \tag{3.4}\] such that the diagram \[\begin{CD}\widetilde{L}^{\otimes k}|_{\pi^{-1}(X^{o})}@>{s_{\widetilde{L}^{|_{\pi^ {-1}(X^{o})}}}}>{}>\widetilde{f}_{*}(\omega_{\widetilde{Y}/\widetilde{X}}^{ \otimes k})|_{\pi^{-1}(X^{o})}\\ @V{\simeq}V{\pi^{o*}(L^{o})^{\otimes k}}>\pi^{o*}(s_{L|X^{o})}>\pi^{o*}f_{*}^{o}( \omega_{Y^{o}/X^{o}}^{\otimes k})\end{CD} \tag{3.5}\] is commutative. Define \[B^{o}=\omega_{Y^{o}/X^{o}}\otimes f^{o*}(L^{o})^{-1},\] a line bundle on \(Y^{o}\) and \[\widetilde{B}=\omega_{\widetilde{Y}/\widetilde{X}}\otimes\tilde{f}^{*}( \widetilde{L}^{-1}),\] a line bundle on \(\widetilde{Y}\). Then the map \(s_{\widetilde{L}}\) determines a non-zero section \(\widetilde{s}\in H^{0}(\widetilde{Y},\widetilde{B}^{\otimes k})\). 
Let \(\varpi:\widetilde{Y}_{k}\to\widetilde{Y}\) be the \(k:1\) cyclic covering map that is branched along \(\{\widetilde{s}=0\}\) and let \(\mu:Z\to\widetilde{Y}_{k}\) be a functorial desingularization that is biholomorphic over the complement of \(\{\varpi^{*}\widetilde{s}=0\}\). The morphisms are gathered in the following diagram where \(g\) denotes \(\widetilde{f}\varpi\mu\). Let \(D_{g}\subset\widetilde{X}\) be a reduced closed analytic subset containing \(\pi^{-1}(D_{f})\), such that \(g\) is a submersion over \(\widetilde{X}^{o}:=\widetilde{X}\backslash D_{g}\). Let \(Z^{o}:=g^{-1}(\widetilde{X}^{o})\) and let \(g^{o}:=g|_{Z^{o}}:Z^{o}\to\widetilde{X}^{o}\). Since \(\mu\) and \(\varpi\) are projective morphisms, \(g^{o}\) is a proper Kahler submersion. Consider the diagram (3.6) where \(\varphi:=\sigma\varpi\mu\sigma^{\prime}\) and \(\psi:=\pi\rho\), such that the following conditions hold. * \(\rho:X^{\prime}\to\widetilde{X}\) is a functorial desingularization of \((\widetilde{X},D_{g},\pi^{-1}(D_{f}))\). In particular, \(X^{\prime}\) is smooth, \(\rho^{-1}(D_{g})\), \(\psi^{-1}(D_{f})\) and \(\rho^{-1}(D_{g})\cup\psi^{-1}(D_{f})\) are simple normal crossing divisors and \(\rho^{o}:=\rho|_{\rho^{-1}(\widetilde{X}^{o})}:\rho^{-1}(\widetilde{X}^{o}) \to\widetilde{X}^{o}\) is biholomorphic. * \(Z^{\prime}\) is a functorial desingularization of the main component of \(Z\times_{\widetilde{X}}X^{\prime}\). In particular \(Z^{\prime}\to Z\times_{\widetilde{X}}X^{\prime}\) is biholomorphic over \(Z^{o}\times_{\widetilde{X}^{o}}\rho^{-1}(\widetilde{X}^{o})\). Let \(X^{\prime o}:=\rho^{-1}(\widetilde{X}^{o})\), \(Z^{\prime o}:=h^{-1}(X^{\prime o})\) and \(h^{o}:=h|_{Z^{\prime o}}:Z^{\prime o}\to X^{\prime o}\). Notice that \(h^{o}\) is a proper Kahler submersion of relative dimension \(n\), which is the pullback of the family \(g^{o}:Z^{o}\to\widetilde{X}^{o}\) via the isomorphism \(\rho^{o}:X^{\prime o}\to\widetilde{X}^{o}\). Then \(R^{n}h^{o}_{*}(\mathbb{R}_{Z^{\prime o}})\) underlies an \(\mathbb{R}\)-polarized variation of Hodge structure of weight \(n\) on \(X^{\prime o}\). Let \((H^{n}_{h^{o}}=\bigoplus_{p=0}^{n}H^{p,n-p}_{h^{o}},\theta,h_{Q})\) be the associated system of Hodge bundles with the Hodge metric \(h_{Q}\). Namely, \(R^{q}h_{*}^{o}\Omega_{Z^{o}/X^{\prime o}}^{p}\) and \(\theta:H_{h^{o}}^{p,q}\to H_{h^{o}}^{p-1,q+1}\otimes\Omega_{X^{\prime o}}\) is defined by taking wedge product with the Kodaira-Spencer class. Let \(\omega_{X^{\prime}}\simeq\rho^{*}\omega_{\widetilde{X}}\otimes\mathscr{O}_{X^{ \prime}}(E^{\prime})\) for some exceptional divisor \(E^{\prime}\) of \(\rho\). By Theorem 2.7 and SS2.3, \[\left(\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}}H_{h^{o}}^ {n}\right)=\bigoplus_{p=0}^{n}\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red }}+E^{\prime}}H_{h^{o}}^{p,n-p}\right),\theta\right)\] is a meromorphic Higgs sheaf on \(X\) such that the Higgs field \(\theta\) is holomorphic over \(X\backslash\pi(D_{g})\) and is regular along \(\pi(D_{g})\). The main result of this subsection is the following theorem, inspired by the constructions in [57]. **Theorem 3.1**.: _Notations and assumptions as in SS3.1 and SS3.2. Then the following hold._ 1. _There is a natural inclusion_ \(\pi_{*}(\widetilde{L})\subset\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red }}+E^{\prime}}H_{h^{o}}^{n,0}\right)\)_._ 2. 
_Let_ \((\bigoplus_{p=0}^{n}L^{p},\theta)\subset\left(\psi_{*}\left({}_{<\psi^{-1}(D_{ f})_{\mathrm{red}}+E^{\prime}}H_{h^{o}}^{n}\right),\theta\right)\) _be the meromorphic Higgs subsheaf generated by_ \(L^{0}:=\pi_{*}(\widetilde{L})\)_, where_ \[L^{p}\subset\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}}H_{ h^{o}}^{n-p,p}\right).\] _Then the Higgs field_ \[\theta:L^{p}|_{X\backslash\pi(D_{g})}\to L^{p+1}|_{X\backslash\pi(D_{g})} \otimes\Omega_{X\backslash\pi(D_{g})}\] _is holomorphic over_ \(X\backslash D_{f}\) _and has at most log poles along_ \(D_{f}\) _for each_ \(0\leq p<n\)_, that is,_ \[\theta(L^{p})\subset L^{p+1}\otimes\Omega_{X}(\log D_{f}).\] The proof will occupy the remainder of this subsection. It will be accomplished by constructing a log Higgs subsheaf \(\bigoplus_{p+q=n}G^{p,q}\) of \(\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}}H_{h^{o}}^{n}\right)\) which contains \(\pi_{*}(\widetilde{L})\) such that the Higgs field is holomorphic on \(X\backslash D_{f}\). We first construct the Higgs subsheaf on \(\psi(X^{\prime o})\) and then extend it to the whole manifold \(X\) by using analytic prolongations. #### 3.2.1. The construction on \(\psi(X^{\prime o})\) Let \(X_{1}:=\psi(X^{\prime o})\subset X^{o}\) and \(Y_{1}:=f^{-1}(X_{1})\subset Y^{o}\). Then \(f_{1}:=f|_{Y_{1}}:Y_{1}\to X_{1}\) is a proper Kahler submersion. Let \(\varphi^{o}:=\varphi|_{Z^{\prime o}}:Z^{\prime o}\to Y_{1}\). Since \(\widetilde{Y}_{k}\) is embedded into the total space of the line bundle \(\widetilde{B}\), the pullback \((\varpi\mu)^{*}\widetilde{B}\) has a tautological section. This gives an injective morphism \[(\varpi\mu\sigma^{\prime})^{*}(\widetilde{B}^{-1})\to\mathscr{O}_{Z^{\prime}}.\] Combining it with (3.4), one gets an injective map \[\varphi^{o*}(B^{o}|_{Y_{1}})^{-1}\simeq(\varpi\mu\sigma^{\prime})^{*}( \widetilde{B}^{-1})|_{Z^{\prime o}}\to\mathscr{O}_{Z^{\prime o}}.\] By composing it with the natural map \(\varphi^{o*}\Omega_{Y_{1}/X_{1}}^{p}\to\Omega_{Z^{\prime o}/X^{\prime o}}^{p}\), we obtain a natural morphism \[\varphi^{o*}((B^{o})^{-1}\otimes\Omega_{Y^{o}/X^{o}}^{p}|_{Y_{1}})\to\Omega_{Z^ {\prime o}/X^{\prime o}}^{p} \tag{3.7}\] for every \(p=0,\dots,n\). Hence (3.7) induces a map \[\iota_{X_{1}}:R^{q}f_{*}^{o}((B^{o})^{-1}\otimes\Omega_{Y^{o}/X^{o}}^{p})|_{X_{ 1}}\to\psi_{*}^{o}R^{q}h_{*}^{o}(\Omega_{Z^{\prime o}/X^{\prime o}}^{p}) \tag{3.8}\] for every \(p,q\geq 0\), where \(\psi^{o}:=\psi|_{X^{\prime o}}:X^{\prime o}\to X_{1}\) is an isomorphism. Consider the diagram By taking the higher direct image \(R^{*}h^{o}_{*}\), we obtain the Higgs field as the coboundary map \[\theta:R^{q}h^{o}_{*}(\Omega^{p}_{Z^{\prime o}/X^{\prime o}})\to R^{q+1}h^{o}_{* }(\Omega^{p-1}_{Z^{\prime o}/X^{\prime o}})\otimes\Omega_{X^{\prime o}}. \tag{3.9}\] Consider the diagram By tensoring it with \((B^{o})^{-1}\) and taking the higher direct image \(R^{*}f^{o}_{*}\), one has the coboundary map \[\vartheta:R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})\to R^{q+ 1}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p-1}_{Y^{o}/X^{o}})\otimes\Omega_{X^{o}}. \tag{3.10}\] It follows from (3.7) that there is a morphism between distinguished triangles in the derived category \(D(Y_{1})\) Then there is a commutative diagram \[R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})|_{X_{1}} \xrightarrow{\vartheta|_{X_{1}}}R^{q+1}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p -1}_{Y^{o}/X^{o}})|_{X_{1}}\otimes\Omega_{X_{1}}. \tag{3.11}\] #### 3.2.2. 
Extend the Higgs sheaves to \(X^{o}\) Notice that \(H^{p,q}_{h^{o}}\simeq R^{q}h^{o}_{*}(\Omega^{p}_{Z^{\prime o}/X^{\prime o}})\). The main result of this part is the following lemma. **Lemma 3.2**.: _The map (3.8) extends to a map_ \[\iota_{X^{o}}:R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}}) \to\psi_{*}\left({}_{<\rho^{-1}(D_{g})_{\mathrm{red}}}H^{p,q}_{h^{o}}\right)|_ {X^{o}}. \tag{3.12}\] Proof.: Consider the diagram \[Z^{\prime\prime}\xrightarrow{\beta}\varphi^{-1}(Y^{o})\xrightarrow{\varphi}Y^{ o}. \tag{3.13}\] where \(h^{\prime}:=h|_{\varphi^{-1}(Y^{o})}\) is a proper Kahler submersion over \(X^{\prime o}=\psi^{-1}(X^{o})\backslash\rho^{-1}(D_{g})\) and \(\beta:Z^{\prime\prime}\to\varphi^{-1}(Y^{o})\) is a functorial desingularization of the pair \((\varphi^{-1}(Y^{o}),h^{\prime-1}(\psi^{-1}(X^{o})\backslash X^{\prime o}))\). Notice that there is a closed analytic subset \(S\subset\psi^{-1}(X^{o})\backslash X^{\prime o}\) so that \(\operatorname{codim}_{\psi^{-1}(X^{o})}(S)\geq 2\) and \(h^{\prime}\beta:Z^{\prime\prime}\to\psi^{-1}(X^{o})\) is semistable (SS4.3) over \(\psi^{-1}(X^{o})\backslash S\). (3.13) induces the natural morphisms \[\psi^{*}\left(R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}}) \right)\to R^{q}h^{\prime}_{*}(\Omega^{p}_{\varphi^{-1}(Y^{o})/\psi^{-1}(X^{o })})\to R^{q}(h^{\prime}\beta)_{*}\left(\Omega^{p}_{Z^{\prime\prime}/\psi^{-1} (X^{o})}\right) \tag{3.14}\] for every \(p,q\geq 0\). Let \(X^{\prime}_{2}:=\psi^{-1}(X^{o})\backslash S\), \(D_{X^{\prime}_{2}}:=\rho^{-1}(D_{g})\cap X^{\prime}_{2}\), \(Z^{\prime\prime}_{2}:=(h^{\prime}\beta)^{-1}(X^{\prime}_{2})\) and \(D_{Z^{\prime\prime}_{2}}:=(h^{\prime}\beta)^{-1}(D_{X_{2}})_{\mathrm{red}}\). Then \(h^{\prime}\beta|_{Z^{\prime\prime}_{2}}:(Z^{\prime\prime}_{2},D_{Z^{\prime \prime}_{2}})\to(X^{\prime}_{2},D_{X^{\prime}_{2}})\) is a proper Kahler semistable morphism (SS4.3). Consider the associated logarithmic Gauss-Manin connection \[\nabla_{\mathrm{GM}}:R^{m}(h^{\prime}\beta|_{Z^{\prime\prime}_{2}})_{*}\left( \Omega^{\bullet}_{Z^{\prime\prime}_{2}/X^{\prime}_{2}}(\log D_{Z^{\prime\prime }_{2}})\right)\to R^{m}(h^{\prime}\beta|_{Z^{\prime\prime}_{2}})_{*}\left( \Omega^{\bullet}_{Z^{\prime\prime}_{2}/X^{\prime}_{2}}(\log D_{Z^{\prime\prime }_{2}})\right)\otimes\Omega_{X^{\prime}_{2}}(\log D_{X^{\prime}_{2}})\] where \(0\leq m\leq 2n\). According to [50, Proposition 2.2], the real parts of the eigenvalues of the residue map of \(\nabla_{\mathrm{GM}}\) along each component of \(D_{X^{\prime}_{2}}\) lie in \([0,1)\). As a consequence, the corresponding logarithmic Higgs bundle lies in the prolongation \({}_{<D_{X^{\prime}_{2}}}H^{m}\), where \[H^{m}:=\bigoplus_{p+q=m}R^{q}(h^{\prime}\beta|_{Z^{\prime\prime}_{2},D_{Z^{ \prime\prime}_{2}}})_{*}\left(\Omega^{p}_{(Z^{\prime\prime}_{2}\backslash D_{ Z^{\prime\prime}_{2}})/X^{\prime o}}\right)\] is the Higgs bundle associated with the proper Kahler submersion \(Z^{\prime\prime}_{2}\backslash D_{Z^{\prime\prime}_{2}}\to X^{\prime o}\). 
Namely there is a natural inclusion \[R^{q}(h^{\prime}\beta|_{Z^{\prime\prime}_{2}})_{*}\left(\Omega^{p}_{Z^{\prime \prime}_{2}/X^{\prime}_{2}}(\log D_{Z^{\prime\prime}_{2}})\right)\to_{<D_{X^{ \prime}_{2}}}R^{q}(h^{\prime}\beta|_{Z^{\prime\prime}_{2}\backslash D_{Z^{ \prime\prime}_{2}}})_{*}\left(\Omega^{p}_{(Z^{\prime\prime}_{2}\backslash D_{ Z^{\prime\prime}_{2}})/X^{\prime o}}\right),\quad\forall p,q\geq 0.\] Since \(\beta\) is an isomorphism over the open submanifold \(h^{\prime-1}(X^{\prime o})\), the family \(Z^{\prime\prime}_{2}\backslash D_{Z^{\prime\prime}_{2}}\to X^{\prime o}\) is isomorphic to the family \(h^{o}:Z^{\prime o}\to X^{\prime o}\). Consequently, one obtains a natural inclusion \[R^{q}(h^{\prime}\beta|_{Z^{\prime\prime}_{2}})_{*}\left(\Omega^{p}_{Z^{\prime \prime}_{2}/X^{\prime}_{2}}(\log D_{Z^{\prime\prime}_{2}})\right)\to_{<\rho^{-1 }(D_{g})_{\mathrm{red}}}H^{p,q}_{h^{o}}|_{X^{\prime}_{2}},\quad\forall p,q\geq 0.\] Taking (3.14) into account, one gets a map \[\psi^{*}\left(R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}}) \right)\big{|}_{X^{\prime}_{2}}\to_{<\rho^{-1}(D_{g})_{\mathrm{red}}}H^{p,q}_ {h^{o}}|_{X^{\prime}_{2}},\quad\forall p,q\geq 0. \tag{3.15}\] Since \({}_{<\rho^{-1}(D_{g})_{\mathrm{red}}}H^{p,q}_{h^{o}}\) is locally free (Theorem 2.7) and \(\mathrm{codim}_{\psi^{-1}(X^{o})}(S)\geq 2\), the morphism (3.15) extends to a morphism \[\psi^{*}\left(R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}}) \right)\to_{<\rho^{-1}(D_{g})_{\mathrm{red}}}H^{p,q}_{h^{o}}|_{\psi^{-1}(X^{o })},\quad p,q\geq 0.\] by Hartogs extension theorem. Taking the adjoint we obtain (3.12). #### 3.2.3. Extend the Higgs sheaves to \(X\) In this part we extend (3.12) to \(X\). Let \[R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})\langle D_{f}\rangle :=\bigcup_{n\in\mathbb{Z}}R^{q}f_{*}((\omega_{Y/X}\otimes f^{*}(L^{\vee\vee})^ {-1})^{-1}\otimes\Omega^{p}_{Y/X})(nD_{f})\] be the sheaf of sections of \(j_{X^{o}*}R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})\) that are meromorphic along \(D_{f}\), where \(j_{X^{o}}:X^{o}\to X\) is the immersion. Let \[R^{q}h^{o}_{*}(\Omega^{p}_{Z^{\prime o}/X^{\prime o}})\langle\rho^{-1}(D_{g}) \rangle:=\bigcup_{n\in\mathbb{Z}}R^{q}h_{*}(\Omega^{p}_{Z^{\prime}/X^{\prime}})( n\rho^{-1}(D_{g}))\] ne the sheaf of sections of \(j_{X^{\prime o}*}R^{q}h^{o}_{*}(\Omega^{p}_{Z^{\prime o}/X^{\prime o}})\) that are meromorphic along \(\rho^{-1}(D_{g})\), where \(j_{X^{\prime o}}:X^{\prime o}\to X^{\prime}\) is the immersion. (3.11) naturally extends to the diagram (3.16) Define \[G^{p,q}:=\operatorname{Im}(\iota)\cap\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{ \operatorname{red}}+E^{\prime}}H^{p,q}_{h^{o}}\right).\] Notice that the sections of \(G^{p,q}\) have bounded degrees of poles along \(X\backslash X_{1}\) since they lie in \(\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\operatorname{red}}+E^{\prime}}H^{p,q}_{ h^{o}}\right)\). Hence \(G^{p,q}\) equals the intersection of \(\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\operatorname{red}}+E^{\prime}}H^{p,q}_{ h^{o}}\right)\) with \[\operatorname{Im}\left(R^{q}f_{*}((\omega_{Y/X}\otimes f^{*}(L^{\veevee})^{- 1})^{-1}\otimes\Omega^{p}_{Y/X})(n_{1}D_{f})\to\psi_{*}\big{(}R^{q}h_{*}( \Omega^{p}_{Z^{\prime}/X^{\prime}})(n_{2}\rho^{-1}(D_{g}))\big{)}\right)\] for some \(n_{1},n_{2}\in\mathbb{Z}\). In particular, \(G^{p,q}\) is a coherent sheaf on \(X\) for every \(p,q\geq 0\). 
**Lemma 3.3**.: (3.17) \[\theta(G^{p,q})\subset G^{p-1,q+1}\otimes\Omega_{X}(\log D_{f}).\] Proof.: _Case I._ Let \(x\in X^{o}=X\backslash D_{f}\) and let \(z_{1},\dots,z_{d}\) be holomorphic local coordinates at \(x\). It suffices to show that \[\theta(\frac{\partial}{\partial z_{i}})(G^{p,q})\subset G^{p-1,q+1},\quad \forall i=1,\dots,d.\] Let \(v\in R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})\) such that \(\iota(v)\in\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\operatorname{red}}+E^{\prime} }H^{p,q}_{h^{o}}\right)\). One has \[\vartheta(\frac{\partial}{\partial z_{i}})(v)\in R^{q+1}f^{o}_{*}((B^{o})^{-1 }\otimes\Omega^{p-1}_{Y^{o}/X^{o}})\] according to (3.10). Thus \[\theta(\frac{\partial}{\partial z_{i}})(\iota(v))=\iota\left(\vartheta(\frac {\partial}{\partial z_{i}})(v)\right)\in\operatorname{Im}(\iota),\quad\forall i =1,\dots,d\] by (3.16). Lemma 3.2 yields that \[\operatorname{Im}(\iota)|_{X^{o}}\subset\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{ \operatorname{red}}+E^{\prime}}H^{p,q}_{h^{o}}\right)|_{X^{o}}.\] This shows (3.17) on \(X\backslash D_{f}\). _Case II._ Let \(x\in D_{f}\). Let \(z_{1},\dots,z_{d}\) be holomorphic local coordinates at \(x\) so that \(D_{f}=\{z_{1}\cdots z_{l}=0\}\). Let \[\xi_{i}:=\left\{\begin{array}{ll}z_{i}\frac{\partial}{\partial z_{i}},&i=1, \dots,l\\ \frac{\partial}{\partial z_{i}},&i=l+1,\dots,d\end{array}.\right.\] It suffices to show that \[\theta(\xi_{i})(G^{p,q})\subset G^{p-1,q+1},\quad\forall i=1,\dots,d.\] Let \(v\in R^{q}f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{p}_{Y^{o}/X^{o}})\langle D_{f}\rangle\) such that \(\iota(v)\in\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}}H^{p,q}_ {h^{o}}\right)\). It follows from (3.16) that \[\theta(\xi_{i})(\iota(v))=\iota\left(\vartheta(\xi_{i})(v)\right)\in\mathrm{ Im}(\iota),\quad\forall i=1,\ldots,d.\] Theorem 2.7 yields that \[\theta(\xi_{i})(\iota(v))\in\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}} +E^{\prime}}H^{p,q}_{h^{o}}\right),\quad\forall i=1,\ldots,d. \tag{3.18}\] This shows (3.17) on \(X\). #### 3.2.4. Final proof It suffices to show the following lemma to finish the proof of Theorem 3.1, in accordance with Lemma 3.3. **Lemma 3.4**.: _There is a natural inclusion \(\pi_{*}(\widetilde{L})\subset G^{n,0}\)._ Proof.: Consider the natural map \[\widetilde{\alpha}:\pi_{*}(\widetilde{L})\to\pi_{*}\widetilde{f}_{*}( \widetilde{f}^{*}\widetilde{L})\simeq\pi_{*}\widetilde{f}_{*}(\widetilde{B}^{ -1}\otimes\omega_{\widetilde{Y}/\widetilde{X}})\subset\pi_{*}g_{*}(\omega_{Z/ \widetilde{X}})\subset\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{ \prime}}H^{n,0}_{h^{o}}\right),\] where the last inclusion is deduced from Proposition 2.12. Now it suffices to show that \[\mathrm{Im}(\widetilde{\alpha})|_{X^{o}}\subset\mathrm{Im}(\iota)|_{X^{o}}= \mathrm{Im}(\iota_{X^{o}}). \tag{3.19}\] Consider the natural map \[L^{o}\to f^{o}_{*}(f^{o*}L^{o})\simeq f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{n} _{Y^{o}/X^{o}}).\] Since \(\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}}H^{n,0}_{h^{o}}\right)\) is torsion free, the composition map \[\alpha:L^{o}\to f^{o}_{*}((B^{o})^{-1}\otimes\Omega^{n}_{Y^{o}/X^{o}}) \stackrel{{\iota_{X^{o}}}}{{\to}}\psi_{*}\left({}_{<\psi^{-1}(D_ {f})_{\mathrm{red}}+E^{\prime}}H^{n,0}_{h^{o}}\right)|_{X^{o}}\] is injective. 
So it induces an injective morphism \[L^{o}\to\mathrm{Im}\left(\iota_{X^{o}}:f_{*}((B^{o})^{-1}\otimes\Omega^{n}_{Y ^{o}/X^{o}})\to\psi_{*}\left({}_{<\psi^{-1}(D_{f})_{\mathrm{red}}+E^{\prime}} H^{n,0}_{h^{o}}\right)|_{X^{o}}\right).\] According to (3.5), one obtains that \(\widetilde{\alpha}|_{X^{o}}=\alpha\). Consequently, \[\mathrm{Im}(\widetilde{\alpha})|_{X^{o}}=\alpha(L^{o})\subset\mathrm{Im}( \iota_{X^{o}}).\] The lemma is proved. ## 4. Admissible families of canonical stable minimal models ### Stable minimal models and their moduli We review the main results in [5] that will be used in the sequel. For the purpose of the present article, everything is defined over \(\mathrm{Spec}(\mathbb{C})\). A _stable minimal model_ is a triple \((X,B),A\) where \(X\) is a reduced connected projective scheme of finite type over \(\mathrm{Spec}(\mathbb{C})\) and \(A,B\geq 0\) are \(\mathbb{Q}\)-divisor satisfying the following conditions: * \((X,B)\) is a projective connected slc pair, * \(K_{X}+B\) is semi-ample, * \(K_{X}+B+tA\) is ample for some \(t>0\), and * \((X,B+tA)\) is slc for some \(t>0\). Let \[d\in\mathbb{N},c\in\mathbb{Q}^{\geq 0},\Gamma\subset\mathbb{Q}^{>0}\text{ a finite set, and }\sigma\in\mathbb{Q}[t].\] A \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal model is a stable minimal model \((X,B),A\) satisfying the following conditions: * \(\dim X=d\), * the coefficients of \(A\) and \(B\) are in \(c\mathbb{Z}^{\geq 0}\), * \(\operatorname{vol}(A|_{F})\in\Gamma\) where \(F\) is any general fiber of the fibration \(f:X\to Z\) determined by \(K_{X}+B\), and * \(\operatorname{vol}(K_{X}+B+tA)=\sigma(t)\) for \(0\leq t\ll 1\). Let \(S\) be a reduced scheme over \(\operatorname{Spec}(\mathbb{C})\). A family of \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal models over \(S\) consists of a projective morphism \(X\to S\) of schemes and \(\mathbb{Q}\)-divisors \(A\) and \(B\) on \(X\) satisfying the following conditions: * \((X,B+tA)\to S\) is a locally stable family (that is, \(K_{X/S}+B+tA\) is \(\mathbb{Q}\)-Cartier) for every sufficiently small rational number \(t\geq 0\), * \(A=cN\), \(B=cD\) where \(N,D\geq 0\) are relative Mumford divisors, and * \((X_{s},B_{s}),A_{s}\) is a \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal model for each point \(s\in S\). Let \(\operatorname{Sch}_{\mathbb{C}}^{\operatorname{red}}\) denote the category of reduced schemes defined over \(\operatorname{Spec}(\mathbb{C})\). Define \[\mathscr{M}_{\operatorname{slc}}^{\operatorname{red}}(d,\Phi_{c},\Gamma, \sigma):S\mapsto\{\text{family of }(d,\Phi_{c},\Gamma,\sigma)-\text{stable minimal models over }S\},\] the functor of groupoids over \(\operatorname{Sch}_{\mathbb{C}}^{\operatorname{red}}\). **Theorem 4.1** (Birkar [5]).: _There is a proper Deligne-Mumford stack \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) over \(\mathbb{C}\) such that the following hold._ * \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)|_{\operatorname{ Sch}_{\mathbb{C}}^{\operatorname{red}}}=\mathscr{M}_{\operatorname{slc}}^{ \operatorname{red}}(d,\Phi_{c},\Gamma,\sigma)\) _as functors of groupoids._ * \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) _admits a projective good coarse moduli space_ \(M_{slc}(d,\Phi_{c},\Gamma,\sigma)\)_._ Proof.: See the proof of [5, Theorem 1.14]. 
Using the notations in [5, SS10.7], we have \[\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)=\left[M_{ \operatorname{slc}}^{e}(d,\Phi_{c},\Gamma,\sigma,a,r,\mathbb{P}^{n})/\mathrm{ PGL}_{n+1}(\mathbb{C})\right],\] where the right hand side is the stacky quotient. A stable minimal model \((X,B),A\) is called a lc stable minimal model if \((X,B)\) is a lc pair. Let \(\mathscr{M}_{\operatorname{lc}}(d,\Phi_{c},\Gamma,\sigma)\subset\mathscr{M}_ {\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) denote the open substack that consists of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models. We use \(M_{\operatorname{lc}}(d,\Phi_{c},\Gamma,\sigma)\) to denote the quasi-projective coarse moduli spaces of \(\mathscr{M}_{\operatorname{lc}}(d,\Phi_{c},\Gamma,\sigma)\). ### Polarization on \(M_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) In this subsection we consider some natural ample \(\mathbb{Q}\)-line bundles on \(M_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\). Their constructions are implicit in the proof of [5, Theorem 1.14], depending on the ampleness criterion by Kollar [25]. Fix a data \(d,\Phi_{c},\Gamma,\sigma\). Since \(\mathscr{M}_{\operatorname{slc}}(d,\Phi_{c},\Gamma,\sigma)\) is of finite type, there are constants \[(a,r,j)\in\mathbb{Q}^{\geq 0}\times(\mathbb{Z}^{>0})^{2},\] depending only on \(d,\Phi_{c},\Gamma,\sigma\) such that every \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal model \((X,B),A\) satisfies the following conditions (c.f. [5, Lemma 10.2]): * \(X+B+aA\) is slc, * \(r(K_{X}+B+aA)\) is a very ample integral Cartier divisor with \[H^{i}(X,kr(K_{X}+B+aA))=0,\quad\forall i>0,\forall k>0,\] * the embedding \(X\hookrightarrow\mathbb{P}(H^{0}(X,r(K_{X}+B+aA)))\) is defined by degree \(\leq j\) equations, and * the multiplication map \[S^{j}(H^{0}(X,r(K_{X}+B+aA)))\to H^{0}(X,jr(K_{X}+B+aA))\] is surjective. **Definition 4.2**.: \((a,r,j)\in\mathbb{Q}^{\geq 0}\times(\mathbb{Z}^{>0})^{2}\) that satisfies the conditions above is called a \((d,\Phi_{c},\Gamma,\sigma)\)-_polarization data_. Let \((a,r,j)\in\mathbb{Q}^{\geq 0}\times(\mathbb{Z}^{>0})^{2}\) be a \((d,\Phi_{c},\Gamma,\sigma)\)-polarization data. Let \((X,B),A\to S\) be a family of \((d,\Phi_{c},\Gamma,\sigma)\)-stable minimal models. Then \(f_{*}(r(K_{X/S}+B+aA))\) is locally free and commutes with an arbitrary base change. Therefore the assignment \[f:(X,B),A\to S\in\mathscr{M}_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)(S) \mapsto f_{*}(r(K_{X/S}+B+aA))\] gives a locally free coherent sheaf on the stack \(\mathscr{M}_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)\), denoted by \(\Lambda_{a,r}\). Let \(\lambda_{a,r}:=\det(\Lambda_{a,r})\). Since \(\mathscr{M}_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)\) is Deligne-Mumford, some power \(\lambda_{a,r}^{\otimes k}\) descends to a line bundle on \(M_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)\). For this reason we regard \(\lambda_{a,r}\) as a \(\mathbb{Q}\)-line bundle on \(M_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)\). **Proposition 4.3**.: _Let \((a,r,j)\in\mathbb{Q}^{\geq 0}\times(\mathbb{Z}^{>0})^{2}\) be a \((d,\Phi_{c},\Gamma,\sigma)\)-polarization data. Then \(\lambda_{a,r}\) is ample on \(M_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)\)._ Proof.: By the same arguments as in [25, SS2.9], it suffices to show that \(f_{*}(r(K_{X/S}+B+aA))\) is nef when \(S\) is a smooth projective curve. This has been accomplished by Fujino [13] and Kovacs-Patakfalvi [31]. 
### A technical lemma for semistable families A morphism \(f:Y\to X\) between smooth varieties is called _semistable_ (resp. _strictly semistable_) if there is a (not necessarily connected) smooth divisor \(D\) on \(X\) such that the following conditions hold. 1. \(f\) is a submersion over \(X\backslash D\) and the schematic preimage \(f^{-1}(D)\) is a (resp. reduced) simple normal crossing divisor on \(Y\). 2. \(f\) sends submersively any stratum of \(f^{-1}(D)_{\mathrm{red}}\) onto an irreducible component of \(D\). A morphism \(f:Y\to X\) between smooth varieties is _strictly semistable in codimension one_ if there is a dense Zariski open subset \(U\subset X\) with \(\operatorname{codim}_{X}(X\backslash U)\geq 2\), such that \(f|_{f^{-1}(U)}:f^{-1}(U)\to U\) is strictly semistable. For a surjective morphism \(Y\to X\) between algebraic varieties, let \(Y_{X}^{[r]}\) denote the main component of the \(r\)-fiber product \(Y\times_{X}Y\times_{X}\cdots\times_{X}Y\) (that is, the union of irreducible components that is mapped onto \(X\)). Let \(f^{[r]}:Y_{X}^{[r]}\to X\) denote the projection map. The following lemma is known to experts. We present the proof for the convenience of readers. **Lemma 4.4**.: _Let \(f:Y\to X\) be a strictly semistable morphism and \(\tau:Y^{(r)}\to Y_{X}^{[r]}\) a desingularization. Denote \(f^{(r)}=f^{[r]}\tau\). Then the following hold._ 1. \(\tau_{*}(\omega_{Y^{(r)}}^{\otimes k})\simeq\omega_{Y_{X}^{[r]}}^{\otimes k}\) _for every_ \(k\geq 1\)_, where_ \(\omega_{Y_{X}^{[r]}}\) _is the dualizing sheaf (invertible since_ \(Y_{X}^{[r]}\) _is Gorenstein)._ 2. \(f_{*}^{(\tau)}(\omega_{Y^{(r)}/X}^{\otimes k})\) _is a reflexive sheaf for every_ \(k\geq 1\)_._ 3. \(f_{*}^{(r)}(\omega_{Y^{(r)}/X}^{\otimes k})\simeq(f_{*}(\omega_{Y/X}^{\otimes k })^{\otimes r})^{\vee\vee}\) _for every_ \(k\geq 1\) Proof.: A strictly semistable morphism is weakly semistable in the sense of Abramovich-Karu [1]. Hence \(Y_{X}^{[r]}\) has only normal, rational and Gorenstein singularities by [1, Proposition 6.4]. Consequently, it can be inferred that \(Y_{X}^{[r]}\) has canonical singularities. The first claim is proved. For the second claim, it suffices to show that any section of \(f_{*}^{[r]}(\omega_{Y_{X}^{[r]}/X}^{\otimes k})\simeq f_{*}^{(r)}(\omega_{Y^{( r)}/X}^{\otimes k})\) extends across an arbitrary locus of codimension \(\geq 2\). Let \(U\subset X\) be an open subset and \(Z\subset U\) a Zariski closed subset of codimension \(\geq 2\). Let \[s\in\Gamma(U\backslash Z,f_{*}^{[r]}(\omega_{Y_{X}^{[r]}/X}^{\otimes k}))= \Gamma((f^{[r]})^{-1}(U\backslash Z),\omega_{Y_{X}^{[r]}/X}^{\otimes k}).\] Since \(f\) is flat, so is \(f^{[r]}\). Hence \((f^{[r]})^{-1}(Z)\) is of codimension \(\geq 2\) in \((f^{[r]})^{-1}(U)\). Since \(Y_{X}^{[r]}\) is normal and \(\omega_{Y_{X}^{[r]}/X}^{\otimes k}\) is invertible, there is \[\widetilde{s}\in\Gamma(U,f_{*}^{[r]}(\omega_{Y_{X}^{[r]}/X}^{\otimes k}))= \Gamma((f^{[r]})^{-1}(U),\omega_{Y_{X}^{[r]}/X}^{\otimes k})\] that extends \(s\). This proves Claim (2). Finally we show the last claim. Since \(f^{[r]}\) and \(f\) are Gorenstein, one obtains that \[\omega_{Y_{X}^{[r]}/X}^{\otimes k}\simeq\otimes_{i=1}^{r}p_{i}^{*}\omega_{Y/X} ^{\otimes k}\] where \(p_{i}:Y_{X}^{[r]}\to Y\) denotes the projection to the \(i\)th component. Let \(U\subset X\) be the largest open subset on which \(f_{*}^{[r]}(\omega_{Y_{X}^{[r]}/X}^{\otimes k})\) and \(f_{*}(\omega_{Y/X}^{\otimes k})\) are locally free. 
Since the relevant sheaves are torsion free, \(X\backslash U\) is of codimension \(\geq 2\). By the flat base change we obtain that \[f_{*}^{(r)}(\omega_{Y^{(r)}/X}^{\otimes k})|_{U}\simeq f_{*}(\omega_{Y/X}^{ \otimes k})^{\otimes r}|_{U}.\] Since \(f_{*}^{(r)}(\omega_{Y^{(r)}/X}^{\otimes k})\) and \((f_{*}(\omega_{Y/X}^{\otimes k})^{\otimes r})^{\vee\vee}\) are reflexive, we have proven Claim (3). ### Higgs sheaves associated to a family of lc stable minimal models The aim of this section is to prove Theorem 1.6 (=Theorem 4.7). Recall that a family \(f:(X,\Delta)\to S\) is _log smooth_ if \(f\) is a smooth projective morphism between varieties and \(\Delta\) is a simple normal crossing \(\mathbb{Q}\)-divisor on \(X\) such that every stratum of \(\operatorname{supp}(\Delta)\) is smooth over \(S\). **Definition 4.5**.: Let \(f:(X,B),A\to S\) be a family of stable minimal models over a variety \(S\). A _log smooth birational model_ of \(f\) is a log smooth family \((X^{\prime},\Delta^{\prime})\to S\), together with a birational map \(g:X^{\prime}\dashrightarrow X\) over \(S\) such that the following conditions hold. 1. \(g\) is defined on a dense Zariski open subset of \(\operatorname{supp}(\Delta^{\prime})\) and \(A+B\) is the birational transform of \(\Delta^{\prime}\). 2. For every \(s\in S(\mathbb{C})\), \(g\) is defined on Zariski open subsets of \(X^{\prime}_{s}\) and \(\operatorname{supp}(\Delta^{\prime}_{s})\), and \(g|_{X^{\prime}_{s}}:X^{\prime}_{s}\dashrightarrow X_{s}\) is a birational map. If \(g\) only satisfies Condition (1), we say that \((X^{\prime},\Delta^{\prime})\to S\) is a _log smooth weakly birational model_ of \(f\). \(f\) is called _admissible_ (resp. _weakly admissible_) if it admits a log smooth (resp. weakly) birational model and the coefficients of \(B\) lie in \([0,1)\). **Lemma 4.6** (Relative Kawamata's covering).: _Let \(f:X\to S\) be a morphism between smooth projective varieties and \(D\) a simple normal crossing \(\mathbb{Q}\)-divisor on \(X\) whose coefficients lie in \([0,1)\). Let \(S^{o}\subset S\) be a Zariski open subset such that \((f^{-1}(S^{o}),D|_{f^{-1}(S^{o})})\to S^{o}\) _is a log smooth family. Then there is a finite surjective morphism \(h:Y\to X\) satisfying the following conditions:_ 1. \(Y\) _is a smooth projective variety,_ 2. \(f\circ h\) _is a smooth morphism over_ \(S^{o}\)_, and_ 3. _there is a_ \(\mathbb{Q}\)_-divisor_ \(F\geq 0\) _on_ \(Y\) _such that_ (4.1) \[h^{*}(K_{X}+D)=K_{Y}-F.\] Proof.: The proof is the same as [18, Theorem 17] except two modifications. The first is that the general hyperplanes \(H_{1},\dots,H_{d}\) in loc. cit. should satisfy that \[H_{1}+\dots+H_{d}+D|_{f^{-1}(S^{o})}\] is a relative simple normal crossing divisor over \(S^{o}\). The second is that one should let \(m_{i}\) in loc. cit. be sufficiently large in order to ensure the validity of (4.1). **Theorem 4.7**.: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be a weakly admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a generically finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Let \(S\) be a smooth projective variety containing \(S^{o}\) as a Zariski open subset such that \(D:=S\backslash S^{o}\) is a (reduced) simple normal crossing divisor and \(\xi^{o}\) extends to a morphism \(\xi:S\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Let \(\mathscr{L}\) be a line bundle on \(S\). Then there exist the following data._ 1. 
_A projective birational morphism_ \(\pi:S^{\prime}\to S\) _such that_ \(S^{\prime}\) _is smooth,_ \(\pi^{-1}(D)\) _is a simple normal crossing divisor and_ \(\pi\) _is a composition of smooth blow-ups._ 2. _A (possibly non-reduced) effective exceptional divisor_ \(E\) _of_ \(\pi\) _such that_ \(E\cup\pi^{-1}(D)\) _has a simple normal crossing support._ 3. \(A\) \(\mathbb{Q}\)_-polarized variation of Hodge structure of weight_ \(w>0\) _on_ \(S^{\prime}\backslash(E\cup\pi^{-1}(D))\) _with_ \((H=\bigoplus_{p+q=w}H^{p,q},\theta,h)\) _its associated Higgs bundle by taking the total graded quotients with respect to the Hodge filtration. Here_ \(h\) _is the Hodge metric._ _These data satisfy the following conditions._ 1. _There is a coherent ideal sheaf_ \(I_{Z}\) _on_ \(S\) _whose co-support_ \(Z\) _is contained in_ \(D\) _and_ \({\rm codim}_{S}(Z)\geq 2\)_, and a natural inclusion_ \(\mathscr{L}\otimes I_{Z}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w,0}\right)\)_._ 2. _Let_ \((\bigoplus_{p=0}^{w}L^{p},\theta)\) _be the log Higgs subsheaf generated by_ \(L^{0}:=\mathscr{L}\otimes I_{Z}\)_, where_ \[L^{p}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w-p,p}\right).\] _Then the Higgs field_ \[\theta:L^{p}|_{S\backslash(D\cup\pi(E))}\to L^{p+1}|_{S\backslash(D\cup\pi(E) )}\otimes\Omega_{S\backslash(D\cup\pi(E))}\] _is holomorphic over_ \(S\backslash D\) _for each_ \(0\leq p<n\)_, that is,_ \[\theta(L^{p})\subset L^{p+1}\otimes\Omega_{S}(\log D).\] Proof.: _Step 1: Compactify the family._ By the properness of \(\mathscr{M}_{\rm slc}(d,\Phi_{c},\Gamma,\sigma)\), we have the following constructions. * A generically finite proper surjective morphism \(\sigma:\widetilde{S}\to S\) from a smooth projective variety \(\widetilde{S}\) such that \(\sigma^{-1}(D)\) is a simple normal crossing divisor. \(\sigma\) is a combination of smooth blow-ups and a finite flat morphism. * Let \(\widetilde{S}^{o}:=\sigma^{-1}(S^{o})\) and \(\widetilde{X}^{o}:=\widetilde{S}^{o}\times_{S^{o}}X^{o}\). Let \(\widetilde{A}^{o}\) and \(\widetilde{B}^{o}\) be the divisorial pullbacks of \(A^{o}\) and \(B^{o}\) on \(\widetilde{X}^{o}\) respectively. There is a completion \(\widetilde{f}:(\widetilde{X},\widetilde{B}),\widetilde{A}\to\widetilde{S}\) of the base change family \((\widetilde{X}^{o},\widetilde{B}^{o}),\widetilde{A}^{o}\to\widetilde{S}^{o}\) such that \(\widetilde{f}\in\mathscr{M}_{\mathrm{slc}}(d,\Phi_{c},\Gamma,\sigma)( \widetilde{S})\). _Step 2: Take the log smooth weakly birational models._ Consider the following diagram where the arrows are explained as follows. * \(f^{\prime o}:X^{\prime o}\to S^{o}\) is a projective smooth morphism and \(\rho^{o}:X^{\prime o}\dashrightarrow X^{o}\) is a birational map over \(S^{o}\) whose image contains a dense Zariski open subset of \(\mathrm{supp}(A^{o}+B^{o})\). Let the \(\mathbb{Q}\)-divisors \(A^{\prime o}\) and \(B^{\prime o}\) on \(X^{\prime o}\) be the birational transforms of \(A^{o}\) and \(B^{o}\) respectively. \(f^{\prime o}:(X^{\prime o},A^{\prime o}+B^{\prime o})\to S^{o}\) is a log smooth weakly birational model of \(f^{o}:(X^{\prime o},B^{o}),A^{o}\to S^{o}\). * \(\widetilde{f}^{\prime o}\) and \(\widetilde{\rho}^{o}\) are the base changes of \(\widetilde{f}^{\prime o}\) and \(\rho^{o}\) respectively. Let \(\widetilde{A}^{\prime o}\) and \(\widetilde{B}^{\prime o}\) be the birational transforms of \(\widetilde{A}^{o}\) and \(\widetilde{B}^{o}\) on \(\widetilde{X}^{\prime o}\) respectively. 
Then \(\widetilde{f}^{\prime o}:(\widetilde{X}^{\prime o},\widetilde{A}^{\prime o}+ \widetilde{B}^{\prime o})\to\widetilde{S}^{o}\) is a log smooth weakly birational model of \((\widetilde{X}^{o},\widetilde{B}^{o}),\widetilde{A}^{o}\to\widetilde{S}^{o}\). * \(\widetilde{f}^{\prime}:\widetilde{X}^{\prime}\to\widetilde{S}\) is a completion of \(\widetilde{f}^{\prime o}\). Since the supports of \(\widetilde{A}\) and \(\widetilde{B}\) do not contain any irreducible component of any fiber of \(\widetilde{f}\), \(\widetilde{\rho}^{o}\) extends naturally to a birational map \(\widetilde{\rho}:\widetilde{X}^{\prime}\dashrightarrow\widetilde{X}\) whose image contains a dense Zariski open subset of \(\mathrm{supp}(\widetilde{A}+\widetilde{B})\). Hence we can define the birational transforms of \(\widetilde{A}\) and \(\widetilde{B}\) on \(\widetilde{X}^{\prime}\), denoted by \(\widetilde{A}^{\prime}\) and \(\widetilde{B}^{\prime}\) respectively. By replacing \(\widetilde{X}^{\prime}\) with a suitable blow-up on \(\widetilde{X}^{\prime}\) whose center lies in \(\widetilde{f}^{\prime-1}\sigma^{-1}(D)\), it may be assumed that the following are valid: * \(\widetilde{X}^{\prime}\) is a smooth projective variety, * \(\widetilde{f}^{\prime-1}\sigma^{-1}(D)+\widetilde{A}^{\prime}+\widetilde{B}^{\prime}\) is a simple normal crossing \(\mathbb{Q}\)-divisor, and * \(\widetilde{f}^{\prime}:\widetilde{X}^{\prime}\to\widetilde{S}\) is smooth over \(\widetilde{S}^{o}\). The constructions yield that \[\widetilde{f}_{*}(r(K_{\widetilde{X}/\widetilde{S}}+\widetilde{B}+a\widetilde{ A}))\simeq\widetilde{f}_{*}^{\prime}(r(K_{\widetilde{X}^{\prime}/\widetilde{S}}+ \widetilde{B}^{\prime}+a\widetilde{A}^{\prime})) \tag{4.2}\] whenever \(r(K_{\widetilde{X}/\widetilde{S}}+\widetilde{B}+a\widetilde{A})\) is integral. _Step 3: Kawamata's trick._ Let \((a,r,j)\in\mathbb{Q}^{\geq 0}\times(\mathbb{Z}^{>0})^{2}\) be a \((d,\Phi_{c},\Gamma,\sigma)\)-polarization data with \(0<a\ll 1\) so that \(\widetilde{B}^{\prime}+a\widetilde{A}^{\prime}\) is a simple normal crossing divisor with coefficients in \([0,1)\). Lemma 4.6 yields that there is a finite surjective morphism \(\varrho:\widetilde{Y}^{\prime}\to\widetilde{X}^{\prime}\) satisfying the following conditions: 1. \(\widetilde{Y}^{\prime}\) is a smooth projective variety, 2. \(\widetilde{f}^{\prime}\varrho\) is smooth over \(S^{o}\), and 3. there is a \(\mathbb{Q}\)-divisor \(F\geq 0\) such that (4.3) \[\varrho^{*}(K_{\widetilde{X}^{\prime}}+\widetilde{B}^{\prime}+a\widetilde{A}^{ \prime})=K_{\widetilde{X}^{\prime}}-F.\] The same construction is valid on \(X^{\prime o}\). One can choose a suitable Kawamata covering map \(Y^{o}\to X^{\prime o}\) such that \(\widetilde{f}^{\prime}\varrho:\widetilde{Y}^{\prime}\to\widetilde{S}\) is a completion of the base change morphism \(Y^{o}\times_{S^{o}}\widetilde{S}^{o}\to\widetilde{S}^{o}\). Consider the following diagram (4.4) where \(Y\to S\) is a completion of \(Y^{o}\to S^{o}\) with \(Y\) smooth and projective, \(\widetilde{Y}\to\widetilde{Y}^{\prime}\) is some modification, biholomorphic over \(Y^{o}\times_{S^{o}}\widetilde{S}^{o}\), so that there is a morphism \(\widetilde{Y}\to Y\) making the diagram commutative. We may assume that \(\sigma\) is sufficiently ramified so that \(\widetilde{Y}\to\widetilde{S}\) is strictly semistable in codimension one. Let \(g:Y\to S\) and \(\widetilde{g}:\widetilde{Y}\to\widetilde{S}\) be the maps that we have just constructed. 
According to (4.2) and (4.3), it follows that there is an inclusion \[\widetilde{f}_{*}(rK_{\widetilde{X}/\widetilde{S}}+r\widetilde{B}+ra \widetilde{A})\subset\widetilde{g}_{*}(rK_{\widetilde{Y}/\widetilde{S}}). \tag{4.5}\] _Last step._ We finish the proof by applying Theorem 3.1. Let \[W:=\widetilde{f}_{*}(rK_{\widetilde{X}/\widetilde{S}}+r\widetilde{B}+ra \widetilde{A})\quad\text{and}\quad l:=\text{rank}(W).\] By the construction of \(\lambda_{a,r}\) one has \[(\xi\circ\sigma)^{*}\lambda_{a,r}\simeq\det(W).\] Since \(\lambda_{a,r}\) is ample (Proposition 4.3) and \(\xi\circ\sigma\) is generically finite, \(\det(W)\) is big. Take an ample line bundle \(M\) on \(S\) so that there is an inclusion \[\sigma_{*}\mathscr{O}_{\widetilde{S}}\otimes M^{-1}\subset\mathscr{O}_{S}^{ \otimes N}\] for some \(N>0\). Since \(\det(W)\) is big, there is an inclusion \[\sigma^{*}(\mathscr{L}^{\otimes r}\otimes M)\subset\det(W)^{\otimes kr}\] for some \(k>0\). Let \(\widetilde{Y}^{(klr)}\) denote a functorial desingularization of the main component of the \(klr\)-fiber product \(\widetilde{Y}\times_{\widetilde{S}}\times\cdots\times_{\widetilde{S}} \widetilde{Y}\) and let \(\widetilde{g}^{(klr)}:\widetilde{Y}^{(klr)}\to\widetilde{S}\) denote the projection map. Define \(g^{(klr)}:Y^{(klr)}\to S\) similarly. We may assume that there is a morphism \(\widetilde{Y}^{(klr)}\to Y^{(klr)}\) such that the diagram is commutative. According to (4.5), there is an inclusion \(W\subset\widetilde{g}_{*}(rK_{\widetilde{Y}/\widetilde{S}})\). Since \(\widetilde{Y}\to\widetilde{S}\) is smooth over \(\widetilde{S}^{o}\) and is strictly semistable in codimension one, it follows from Lemma 4.4 that there is a natural inclusion \[\det(W)^{\otimes kr}\otimes I_{\widetilde{Z}}\to\left(\widetilde{g}_{*}(rK_{ \widetilde{Y}/\widetilde{S}})^{\otimes klr}\right)^{\vee\vee}\otimes I_{ \widetilde{Z}}\subset\widetilde{g}_{*}^{(klr)}(rK_{\widetilde{Y}^{(klr)}/ \widetilde{S}}),\] where \(I_{\widetilde{Z}}\) is an ideal sheaf whose co-support \(\widetilde{Z}\) lies in \(\sigma^{-1}(D)\) and \(\operatorname{codim}_{\widetilde{S}}(\widetilde{Z})\geq 2\). Notice that there is an inclusion \[\widetilde{g}_{*}^{(klr)}(rK_{\widetilde{Y}^{(klr)}/\widetilde{S}})\subset \sigma^{*}g_{*}^{(klr)}(rK_{Y^{(klr)}/S})\] by [55, Lemma 3.2] (see also [14, Lemma 3.1.20]). Let \(I\) be an ideal sheaf on \(S\) such that the map \(\sigma^{*}(I)\to\mathscr{O}_{\widetilde{S}}\) factors through \(I_{\widetilde{Z}}\). Taking the composition of the maps above, we get a morphism \[\sigma^{*}(\mathscr{L}^{\otimes r}\otimes M\otimes I^{\otimes r})\to\sigma^{ *}g_{*}^{(klr)}(rK_{Y^{(klr)}/S}).\] This induces a non-zero map \[\mathscr{L}^{\otimes r}\otimes I^{\otimes r}\to g_{*}^{(klr)}(rK_{Y^{(klr)}/ S})\otimes\sigma_{*}\mathscr{O}_{\widetilde{S}}\otimes M^{-1}\subset g_{*}^{( klr)}(rK_{Y^{(klr)}/S})^{\oplus N}.\] Hence we obtain a non-zero map \[\mathscr{L}^{\otimes r}\otimes I^{\otimes r}\to g_{*}^{(klr)}(rK_{Y^{(klr)}/ S}). \tag{4.6}\] Applying Theorem 3.1 to the morphism \(Y^{(klr)}\to S\) (which is smooth over \(S^{o}\)) and the torsion free sheaf \(\mathscr{L}\otimes I\), we obtain the theorem. ## 5. Hyperbolicity properties for admissible families of lc stable minimal models Based on the constructions in the previous sections (especially Theorem 4.7), we investigate various hyperbolicity properties of the base of a (weakly) admissible family of lc stable minimal models. 
### Viehweg hyperbolicity **Theorem 5.1**.: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be a weakly admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a generically finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Let \(S\) be a smooth projective variety containing \(S^{o}\) as a Zariski open subset such that \(D:=S\backslash S^{o}\) is a divisor. Then \(\omega_{S}(D)\) is big._ Proof.: Since the claim of the theorem is independent of the choice of the compactification \(S^{o}\subset S\), we may assume that \(D\) is a simple normal crossing divisor and the morphism \(\xi^{o}\) extends to a morphism \(\xi:S\to M_{\rm slc}(d,\Phi_{c},\Gamma,\sigma)\). Take a line bundle \(\mathscr{L}\) on \(S\) so that \(\mathscr{L}\otimes\mathscr{O}_{S}(-D)\) is big. Let \[\bigoplus_{p=0}^{w}L^{p}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H_{h^{o}}^{w} \right)=\bigoplus_{p=0}^{w}\pi_{*}\left({}_{<\pi^{-1}(D)+E}H_{h^{o}}^{w-p,p}\right)\] be the Higgs subsheaf generated by \(L^{0}=\mathscr{L}\otimes I_{Z}\) as in Theorem 4.7, where \(I_{Z}\) is an ideal sheaf on \(S\) whose co-support \(Z\) lies in \(D\) and \(\operatorname{codim}_{S}(Z)\geq 2\). Then \[\mathscr{L}\otimes\mathscr{O}_{S}(-D)\otimes I_{Z}\subset\pi_{*}\left({}_{<E} H_{h^{o}}^{w,0}\right)\] and \(\bigoplus_{p=0}^{w}L^{p}\otimes\mathscr{O}_{S}(-D)\) is a log Higgs subsheaf of \(\pi_{*}\left({}_{<E}H_{h^{o}}^{w}\right)\) such that \[\theta(L^{p}\otimes\mathscr{O}_{S}(-D))\subset L^{p+1}\otimes\mathscr{O}_{S}( -D)\otimes\Omega_{S}(\log D),\quad\forall p=0,\dots,w-1.\] Consider the diagram \[L^{0}\otimes\mathscr{O}_{S}(-D)\stackrel{{\theta}}{{\to}}L^{1} \otimes\mathscr{O}_{S}(-D)\otimes\Omega_{S}(\log D)\stackrel{{\theta \otimes\operatorname{ld}}}{{\to}}L^{2}\otimes\mathscr{O}_{S}(-D)\otimes\Omega _{S}^{\otimes 2}(\log D)\to\cdots.\] Notice that there is a minimal \(n_{0}\leq w\) such that \(L^{0}\otimes\mathscr{O}_{S}(-D)\) is sent into \[\ker\left(L^{n_{0}}\otimes\mathscr{O}_{S}(-D)\otimes\Omega_{S}^{\otimes n_{0}}( \log D)\to L^{n_{0}+1}\otimes\mathscr{O}_{S}(-D)\otimes\Omega_{S}^{\otimes n_{0 }+1}(\log D)\right)\subset K\otimes\Omega_{S}^{\otimes n_{0}}(\log D)\] where \[K=\ker\left(\theta:\pi_{*}\left({}_{<E}H_{h^{o}}^{w}\right)\to\pi_{*}\left({}_ {<E}H_{h^{o}}^{w}\otimes\Omega_{\vec{S}}(\log\pi^{-1}(D)\cup E)\right)\right).\] Since \(n_{0}\) is minimal and \(K\) is torsion free, we obtain an inclusion \[\mathscr{L}\otimes\mathscr{O}_{S}(-D)\otimes I_{Z}\subset K\otimes\Omega_{S}^ {\otimes n_{0}}(\log D). \tag{5.1}\] This induces a nonzero morphism \[\beta:\mathscr{L}\otimes\mathscr{O}_{S}(-D)\otimes I_{Z}\otimes K^{\vee}\to \Omega_{S}^{\otimes n_{0}}(\log D). \tag{5.2}\] Since \(K\subset\pi_{*}\left({}_{<E}H_{h^{o}}^{w}\right)\), \(K^{\vee}\) is weakly positive by Proposition 2.10. Since \(\mathscr{L}\) and \(\mathscr{L}\otimes I_{Z}\) are isomorphic in codimension one, \(\mathscr{L}\otimes\mathscr{O}_{S}(-D)\otimes I_{Z}\) is big by assumption. Consequently, \(\Omega_{S}^{\otimes n_{0}}(\log D)\) contains the big sheaf \(\operatorname{Im}(\beta)\), which implies that \(n_{0}>0\). Thus \(\omega_{S}(D)\) is big by [8, Theorem 7.11]. ### Big Picard theorem #### 5.2.1. DLSZ criterion In this section we review the criterion by Deng-Lu-Sun-Zuo [12] on the big Picard type result via Finsler pseudometrics. **Definition 5.2** (Finsler metric).: Let \(E\) be a holomorphic vector bundle on a complex manifold \(X\). 
A Finsler pseudometric on \(E\) is a continuous function \(h:E\to[0,\infty)\) such that \[h(av)=|a|h(v),\quad\forall a\in\mathbb{C},\forall v\in E.\] We call \(h\) a Finsler metric if it is non-degenerate, that is, if \(h(v)=0\) only when \(v=0\) holds. **Theorem 5.3**.: _[_12_, Theorem A]_ _Let \((X,\omega)\) be a compact Kahler manifold and \(D\) a simple normal crossing divisor on \(X\). Let \(\gamma:\Delta^{*}:=\{z\in\mathbb{C}\mid 0<|z|<1\}\to X\backslash D\) be a holomorphic map. Assume that there is a Finsler pseudometric \(h\) on \(T_{X}(-\log D)\) such that \(|\frac{\partial}{\partial z}|_{\gamma^{*}h}^{2}\) is not identically zero and that the following inequality holds in the sense of currents_ \[\partial\bar{\partial}\log\left|\frac{\partial}{\partial z}\right|_{\gamma^{* }h}^{2}\geq\gamma^{*}(\omega).\] _Then \(\gamma\) extends to a holomorphic map \(\overline{\gamma}:\Delta\to X\)._ #### 5.2.2. Picard pair Let \(X\) be a compact complex space and \(Z\subset X\) a closed analytic subset. \((X,Z)\) is called a _Picard pair_ if either \(0\in\overline{\gamma^{-1}(S\backslash S^{o})}\) or \(\gamma\) can be extended to a holomorphic map \(\overline{\gamma}:\Delta\to S\) for any given holomorphic map \(\gamma:\Delta^{*}\to S\) from the punctured unit disc \(\Delta^{*}\). In particular, any holomorphic map \(\gamma:\Delta^{*}\to S^{o}\) extends to a holomorphic map \(\overline{\gamma}:\Delta\to S\). The classical big Picard theorem is equivalent to that \((\mathbb{P}^{1},\{0,1,\infty\})\) is a Picard pair. **Lemma 5.4**.: _Let \(X\) be a compact complex space and \(Z\subset X\) a closed analytic subset. Let \(\pi:X^{\prime}\to X\) be a proper bimeromorphic morphism which is biholomorphic over \(X\backslash Z\). Then \((X,Z)\) is a Picard pair if and only if \((X^{\prime},\pi^{-1}(Z))\) is a Picard pair._ Proof.: Assume that \((X,Z)\) is a Picard pair and \(\gamma:\Delta^{*}\to X^{\prime}\) is a holomorphic map such that \(0\notin\overline{\gamma^{-1}(\pi^{-1}(Z))}\). Then \(\pi\circ\gamma:\Delta^{*}\to X\) extends to a holomorphic morphism \(\overline{\pi\circ\gamma}:\Delta\to X\). Since \(\pi\) is biholomorphic over \(X\backslash Z\) and there is a neighborhood \(U\subset\Delta\) of \(0\) such that \(\pi\circ\gamma(U\backslash\{0\})\subset X\backslash Z\), the map \(\overline{\pi\circ\gamma}:\Delta\to X\) can be lifted to a holomorphic morphism \(\Delta\to X^{\prime}\). This proves that \((X^{\prime},\pi^{-1}(Z))\) is a Picard pair. Conversely, we assume that \((X^{\prime},\pi^{-1}(Z))\) is a Picard pair. Let \(\gamma:\Delta^{*}\to X\) be a holomorphic map such that \(0\notin\overline{\gamma^{-1}(Z)}\). Take a neighborhood \(U\subset\Delta\) of \(0\) such that \(\gamma(U\backslash\{0\})\subset X\backslash Z\). Since \(\pi\) is biholomorphic over \(X\backslash Z\), \(\gamma|_{U\backslash\{0\}}\) can be lifted to \(\gamma^{\prime}:U\backslash\{0\}\to X^{\prime}\) such that \(0\notin\overline{\gamma^{\prime-1}(\pi^{-1}(Z))}\). Then it admits a holomorphic extension \(\overline{\gamma^{\prime}}:U\to X^{\prime}\). Now \(\pi\circ\overline{\gamma^{\prime}}:U\to X\) is a holomorphic extension of \(\gamma|_{U\backslash\{0\}}\). This proves that \((X,Z)\) is a Picard pair. **Proposition 5.5**.: _Let \((X,Z)\) be a Picard pair where \(X\) is a compact Kahler space and \(Z\subset X\) is a closed analytic subset containing \(X_{\mathrm{sing}}\). 
Then \(X\backslash Z\) is Borel hyperbolic._ Proof.: By Lemma 5.4, one can take a desingularization of the pair \((X,Z)\) and assume that \(X\) is a compact Kahler manifold and \(Z\) is a simple normal crossing divisor. For the remainder of the proof, see [12, Corollary C]. #### 5.2.3. Finsler metrics associated with the analytic prolongations of Viehweg-Zuo Higgs sheaves We follow the ideas in [12] and make a little refinement by using non-canonical prolongations. Let \(f:(X^{o},B^{o}),A^{o}\to S^{o}\) be a weakly admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a quasi-projective smooth variety \(S^{o}\) which determines a generically finite morphism \(\xi^{o}:S^{o}\to M_{\mathrm{lc}}(d,\Phi_{c},\Gamma,\sigma)\). Let \(S\) be a smooth projective compactification of \(S^{o}\) so that \(D:=S\backslash S^{o}\) is a simple normal crossing divisor and \(\xi^{o}\) can be extended to a morphism \(\xi:S\to M_{\mathrm{lc}}(d,\Phi_{c},\Gamma,\sigma)\). Let \(\mathscr{L}\) be an ample line bundle on \(S\). Let \[\bigoplus_{p=0}^{w}L^{p}\subset\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w}_{h^{o}} \right):=\bigoplus_{p=0}^{w}\pi_{*}\left({}_{<\pi^{-1}(D)+E}H^{w-p,p}_{h^{o}}\right)\] be the Higgs subsheaf generated by \(L^{0}=\mathscr{L}\otimes\mathscr{O}_{S}(D)\otimes I_{Z}\) as in Theorem 4.7, where \(I_{Z}\) is some coherent ideal sheaf on \(S\) whose co-support \(Z\) lies in \(D\) and \(\mathrm{codim}_{S}(Z)\geq 2\). Then \[\mathscr{L}\otimes I_{Z}\subset\pi_{*}\left({}_{<E}H^{w,0}_{h^{o}}\right)\] and \(\bigoplus_{p=0}^{w}L^{p}\otimes\mathscr{O}_{S}(-D)\) is a log Higgs subsheaf of \(\pi_{*}\left({}_{<E}H^{w}_{h^{o}}\right)\) such that \[\theta(L^{p}\otimes\mathscr{O}_{S}(-D))\subset L^{p+1}\otimes\mathscr{O}_{S} (-D)\otimes\Omega_{S}(\log D),\quad\forall p=0,\dots,w-1.\] Consider the diagram \[L^{0}\otimes\mathscr{O}_{S}(-D)\stackrel{{\theta}}{{\to}}L^{1} \otimes\mathscr{O}_{S}(-D)\otimes\Omega_{S}(\log D)\stackrel{{ \theta\otimes\mathrm{ld}}}{{\to}}L^{2}\otimes\mathscr{O}_{S}(-D)\otimes \Omega_{S}^{\otimes 2}(\log D)\to\cdots.\] This induces a map \[\tau_{p}:\mathscr{L}\otimes I_{Z}\subset\pi_{*}\left({}_{<E}H^{w,0}_{h^{o}} \right)\to\pi_{*}\left({}_{<E}H^{w-p,p}_{h^{o}}\right)\otimes\Omega_{S}^{ \otimes p}(\log D)\] for every \(p=0,\dots,\dim S\). This induces a map \[\rho_{p}:T^{\otimes p}_{S}(-\log D)\to\mathscr{L}^{-1}\otimes\pi_{*}\left({}_{ <E}H^{w-p,p}_{h^{o}}\right). \tag{5.3}\] The following lemma is essentially due to Deng [11], while we make a mild modification in order to adapt the result to the context of analytic prolongations. **Lemma 5.6**.: \(\rho_{1}:T_{S}(-\log D)\to\mathscr{L}^{-1}\otimes\pi_{*}\left({}_{<E}H_{h^{o}}^{w- 1,1}\right)\) _is generically injective._ Proof.: See [11]. Notice that the metric \(h_{Q}\) on \(\pi_{*}\left({}_{<E}H_{h^{o}}^{w-1,1}\right)\) is bounded near the boundary \(D\). Let \(U_{0}\subset S\backslash D\) be a dense Zariski open subset such that \(\rho_{1}\) is injective on \(U_{0}\). Let \(\gamma:\Delta^{*}\to S\backslash D\) be a holomorphic map such that \(\operatorname{Im}(\gamma)\cap U_{0}\neq\emptyset\). By Lemma (5.6), the natural map \[\tau_{\gamma,1}:\gamma^{*}(\mathscr{L}\otimes I_{Z})\stackrel{{ \gamma^{*}\tau_{1}}}{{\to}}\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-1,1} \right)\otimes\gamma^{*}\Omega_{S}(\log D)\stackrel{{\operatorname {Id}\otimes d\gamma}}{{\to}}\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-1,1} \right)\otimes\Omega_{\Delta^{*}}\] is nonzero. 
Hence there is a minimal integer \(1\leq n_{\gamma}\leq\dim S\) such that \[\tau_{\gamma,n_{\gamma}}:\gamma^{*}(\mathscr{L}\otimes I_{Z})\stackrel{{ \gamma^{*}\tau_{n_{\gamma}}}}{{\to}}\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{ w-n_{\gamma},n_{\gamma}}\right)\otimes\gamma^{*}\Omega_{S}^{\otimes n_{ \gamma}}(\log D)\stackrel{{\operatorname{Id}\otimes d\gamma}}{{ \to}}\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{\gamma}}\right) \otimes\Omega_{\Delta^{*}}^{\otimes n_{\gamma}}\] is non-zero, whereas the composition \[\gamma^{*}(\mathscr{L}\otimes I_{Z})\stackrel{{\tau_{\gamma,n_{ \gamma}}}}{{\to}}\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{ \gamma}}\right)\otimes\Omega_{\Delta^{*}}^{\otimes n_{\gamma}}\to\gamma^{*} \pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma}-1,n_{\gamma}+1}\right)\otimes \Omega_{\Delta^{*}}^{\otimes(n_{\gamma}+1)}\] is zero. Then one has a non-zero map \[\mathscr{L}\otimes I_{Z}\to\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{ \gamma}}\right)\otimes S^{n_{\gamma}}\Omega_{S}(\log D).\] This induces a map \[\iota_{n_{\gamma}}:T_{S}^{\otimes n_{\gamma}}(-\log D)\to\mathscr{L}^{-1} \otimes\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{\gamma}}\right). \tag{5.4}\] Let \(h_{\mathscr{L}}\) be a hermitian metric on \(\mathscr{L}\) with positive curvature form and let \(h_{Q}\) be the Hodge metric which is regarded as a singular hermitian metric on \(\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{\gamma}}\right)\). Then the pullback \(h_{\gamma}:=\iota_{n_{\gamma}}^{*}(h_{\mathscr{L}}^{-1}h_{Q})\) induces a Finsler pseudometric on \(T_{S}(-\log D)\) by \[|v|_{h_{\gamma}}:=|\iota_{n_{\gamma}}(v^{\otimes n_{\gamma}})|_{h_{\mathscr{L }}^{-1}h_{Q}}^{\frac{1}{n_{\gamma}}},\quad v\in T_{S}(-\log D).\] The following proposition is essentially due to Deng-Lu-Sun-Zuo [12]. **Proposition 5.7**.: _Notations as above. Then \(|\frac{\partial}{\partial z}|_{\gamma^{*}h_{\gamma}}^{2}\) is not identically zero and the following inequality holds in the sense of currents_ \[\partial\bar{\partial}\log\left|\frac{\partial}{\partial z}\right|_{\gamma^{* }h_{\gamma}}^{2}\geq\frac{1}{n}\gamma^{*}(\Theta_{h_{\mathscr{L}}}(\mathscr{L} )).\] Proof.: Notice that \(\gamma^{*}h_{\gamma}\) is a (possibly degenerate) smooth hermitian metric on \(T_{\Delta^{*}}\). The first claim follows from the fact that \(\tau_{\gamma,n_{\gamma}}\) is non-zero. By the Poincare-Lelong equation, one has \[\partial\bar{\partial}\log\left|\frac{\partial}{\partial z}\right|_{\gamma^{* }h_{\gamma}}^{2}=-\Theta_{\gamma^{*}h_{\gamma}}(T_{\Delta^{*}})+R\] where \(R\) is the ramification divisor of \(\gamma\). Let \(N\) denote the saturation of the image of \(d\gamma:T_{\Delta^{*}}\to\gamma^{*}T_{S}(-\log D)\). 
Then one knows that \[\Theta_{\gamma^{*}h_{\gamma}}(T_{\Delta^{*}}) \leq\Theta_{\gamma^{*}h_{\gamma}}(N)=\frac{1}{n_{\gamma}}\Theta_{ \gamma^{*}h_{\gamma}^{n_{\gamma}}}(N^{\otimes n_{\gamma}})\] \[\leq\frac{1}{n_{\gamma}}\gamma^{*}\Theta_{h_{\mathscr{L}}^{n_{ \gamma}}}\left(T_{S}^{\otimes n_{\gamma}}(-\log D)\right)|_{N^{\otimes n_{ \gamma}}}\] \[\leq\frac{1}{n_{\gamma}}\gamma^{*}\Theta_{h_{\mathscr{L}}^{-1}h_ {Q}}\left(\mathscr{L}^{-1}\otimes\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma}, n_{\gamma}}\right)\right)|_{\gamma^{*}(t_{n_{\gamma}}(N^{\otimes n_{\gamma}}))}\] \[=-\frac{1}{n_{\gamma}}\gamma^{*}\Theta_{h_{\mathscr{L}}}( \mathscr{L})+\frac{1}{n_{\gamma}}\gamma^{*}\Theta_{h_{Q}}\left(\pi_{*}\left({ }_{<E}H_{h^{o}}^{w-n_{\gamma},n_{\gamma}}\right)\right)|_{\gamma^{*}(t_{n_{ \gamma}}(N^{\otimes n_{\gamma}}))}\] as \((1,1)\)-forms on \(\Delta^{*}\). By the definition of \(n_{\gamma}\), one sees that \(\gamma^{*}(\iota_{n_{\gamma}}(N^{\otimes n_{\gamma}}))\) lies in the kernel of the Higgs field \[\theta_{\gamma}:\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{ \gamma}}\right)\to\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma}-1,n_{ \gamma}+1}\right)\otimes\Omega_{\Delta^{*}}\] of the Higgs sheaf \(\gamma^{*}\pi_{*}\left({}_{<E}H_{h^{o}}^{w}\right)\). By Griffiths' curvature formula \[\gamma^{*}\Theta_{h_{Q}}(H_{h^{o}}^{w})+\theta_{\gamma}\wedge\overline{\theta_ {\gamma}}+\overline{\theta_{\gamma}}\wedge\theta_{\gamma}=0,\] one gets that \[\gamma^{*}\Theta_{h_{Q}}\left(\pi_{*}\left({}_{<E}H_{h^{o}}^{w-n_{\gamma},n_{ \gamma}}\right)\right)|_{\gamma^{*}(t_{n_{\gamma}}(N^{\otimes n_{\gamma}}))}= -\theta_{\gamma}\wedge\overline{\theta_{\gamma}}|_{\gamma^{*}(t_{n_{\gamma}}( N^{\otimes n_{\gamma}}))}\leq 0.\] Combining the formulas above, the proposition is proved. #### 5.2.4. Admissible families and big Picard theorem **Theorem 5.8**.: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be an admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a quasi-finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Let \(S\) be a projective variety containing \(S^{o}\) as a Zariski open subset. Then \((S,S\backslash S^{o})\) is a Picard pair. Consequently, \(S^{o}\) is Borel hyperbolic._ Proof.: Let \(\gamma:\Delta^{*}\to S\) be a holomorphic map such that \(0\notin\overline{\gamma^{-1}(S\backslash S^{o})}\). We are going to show that \(\gamma\) extends holomorphically to \(0\in\Delta\). By shrinking \(\Delta\) we may assume that \({\rm Im}(\gamma)\subset S^{o}\). Take \(B\subset S\) to be the Zariski closure of \({\rm Im}(\gamma)\). Let \(\pi:B^{\prime}\to B\) be a desingularization so that \(\pi\) is biholomorphic over \(B_{\rm reg}\cap S^{o}\) and \(\pi^{-1}(B\backslash(B_{\rm reg}\cap S^{o}))\) is a simple normal crossing divisor on \(B^{\prime}\). Since \(\pi\) is a proper map, \(\gamma\) can be lifted to \(\gamma^{\prime}:\Delta^{*}\to\pi^{-1}(B\cap S^{o})\). It suffices to show that \(\gamma^{\prime}\) extends to a holomorphic map \(\overline{\gamma^{\prime}}:\Delta\to B^{\prime}\). Taking the base change of \(f^{o}\) via \(\pi^{-1}(B\cap S^{o})\to S^{o}\), we may assume the following without loss of generality. \(S\) is smooth, \(D:=S\backslash S^{o}\) is a simple normal crossing divisor on \(S\) and \(\gamma:\Delta^{*}\to S\backslash D\) is a holomorphic map such that \({\rm Im}(\gamma)\) is Zariski dense in \(S\). 
We are going to show that \(\gamma\) extends to a holomorphic map \(\overline{\gamma}:\Delta\to S\). By SS5.2.3 there is a Finsler pseudometric \(h_{\gamma}\) on \(T_{S}(-\log D)\) such that \(|\frac{\partial}{\partial z}|_{\gamma^{*}h_{\gamma}}^{2}\) is not identically zero and that the following inequality holds in the sense of currents \[\partial\bar{\partial}\log\left|\frac{\partial}{\partial z}\right|_{\gamma^{*}h_ {\gamma}}^{2}\geq\gamma^{*}(\omega_{S}),\] where \(\omega_{S}\) is a hermitian metric on \(S\). Thus \(\gamma\) extends to a holomorphic map \(\overline{\gamma}:\Delta\to S\) by Theorem 5.3. This proves that \((S,S\backslash S^{o})\) is a Picard pair. The claim that \(S^{o}\) is Borel hyperbolic follows from Proposition 5.5. ### Pseudo Kobayashi hyperbolicity Theorem 4.7, combined with the method in [11], implies the pseudo Kobayashi hyperbolicity of the base of a weakly admissible family of lc stable minimal models. For the basic notions of pseudo Kobayashi hyperbolicity, the reader is referred to [11]. **Theorem 5.9**.: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be a weakly admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a generically finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Then \(S^{o}\) is pseudo Kobayashi hyperbolic._ Proof.: The arguments in [11] is applicable to our case. According to [11, SS2.3] and (5.3), one obtains a Finsler pseudometric \[F:=\left(\sum_{k=1}^{w}k\alpha_{k}\rho_{k}^{*}(h_{\mathscr{L}}^{-1}h_{Q})^{ \frac{2}{k}}\right)^{\frac{1}{2}},\] which is positive definite on some Zariski open subset of \(S^{o}\) and is bounded from above by a negative constant for some \(\alpha_{1},\dots,\alpha_{w}>0\). Consequently, \(S^{o}\) is pseudo Kobayashi hyperbolic by [11, Lemma 2.4]. ### Brody hyperbolicity **Theorem 5.10**.: _Let \(f^{o}:(X^{o},B^{o}),A^{o}\to S^{o}\) be an admissible family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a smooth quasi-projective variety \(S^{o}\) which defines a quasi-finite morphism \(\xi^{o}:S^{o}\to M_{\rm lc}(d,\Phi_{c},\Gamma,\sigma)\). Then \(S^{o}\) is Brody hyperbolic._ Proof.: Assume that there is a holomorphic map \(h:\mathbb{C}\to S^{o}\). By Theorem 5.8, \(h\) can be extended to an algebraic morphism \(\overline{h}:\mathbb{P}^{1}\to S\). Taking the base change of the family \(f^{o}\) via the map \(h\), we obtain an admissible (algebraic) family \(f^{o}|_{\mathbb{C}}\) of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over \(\mathbb{C}\). Since \(\mathbb{C}\) is not of log general type, Theorem 5.1 implies that the classifying map \(\xi^{o}\circ h\) of \(f^{o}|_{\mathbb{C}}\) must be constant. Consequently, \(h\) is constant due to the quasi-finiteness of \(\xi^{o}\). This proves the theorem. ### Stratified hyperbolicity Based on the results in the previous sections, we introduce the following notion. **Definition 5.11**.: Let \(f:(X,B),A\to S\) be a family of lc stable minimal models over a quasi-projective variety \(S\) such that the coefficients of \(B\) lie in \([0,1)\). 
An _admissible stratification_ of \(S\) with respect to \(f\) is a filtration of Zariski closed subsets \[\emptyset=S_{-1}\subset S_{0}\subset\dots\subset S_{d}=S\] satisfying the following conditions: * \(d=\dim S\), * \(S_{i}\backslash S_{i-1}\) is a (possibly disconnected) smooth variety of dimension \(i\) for each \(i=0,\ldots,d\), * each \(\overline{S_{i}}\backslash S_{i}\) is a disjoint union of strata of \(\{S_{i}\}\), and * the pullback family of \(f\) over \(S_{i}\backslash S_{i-1}\) is admissible for each \(i=0,\ldots,d\). The following lemma shows that an admissible stratification always exists. **Lemma 5.12**.: _Let \(f:(X,B),A\to S\) be a family of lc stable minimal models over a quasi-projective variety \(S\) such that the coefficients of \(B\) lie in \([0,1)\). Then there is a dense Zariski open subset \(U\subset S_{\mathrm{reg}}\) such that \(f|_{U}\) is admissible._ Proof.: Take a log resolution \(\pi:X^{\prime}\to X\) of the pair \((X,\mathrm{supp}(A+B))\). By generic smoothness, there is a dense Zariski open subset \(U\subset S_{\mathrm{reg}}\) such that \((X^{\prime},\pi^{*}(A+B))\) is a log smooth family over \(U\) and \(\pi|_{X^{\prime}_{s}}:X^{\prime}_{s}\to X_{s}\) is a birational morphism for every \(s\in U(\mathbb{C})\). This proves the lemma. The following theorem is a direct consequence of Theorem 1.1. **Theorem 5.13**.: _Let \(f:(X,B),A\to S\) be a family of \((d,\Phi_{c},\Gamma,\sigma)\)-lc stable minimal models over a quasi-projective variety \(S\) such that the coefficients of \(B\) lie in \([0,1)\). Assume that \(f\) induces a quasi-finite morphism \(S\to M_{\mathrm{lc}}(d,\Phi_{c},\Gamma,\sigma)\). Let_ \[\emptyset=S_{-1}\subset S_{0}\subset\cdots\subset S_{\dim S}=S\] _be an admissible stratification of \(S\) with respect to \(f\). Then the following hold for each \(i=0,\ldots,\dim S\)._ * (big Picard theorem) _\((S_{i},S_{i-1})\) is a Picard pair._ * (Borel hyperbolicity) _Any holomorphic map from an algebraic variety to_ \(S_{i}\backslash S_{i-1}\) _is algebraic._ * (Viehweg hyperbolicity) _\(S_{i}\backslash S_{i-1}\) is of log general type._ * (pseudo Kobayashi hyperbolicity) _\(S_{i}\backslash S_{i-1}\) is Kobayashi hyperbolic away from a proper Zariski closed subset._ * (Brody hyperbolicity) _\(S_{i}\backslash S_{i-1}\) is Brody hyperbolic._
2308.15282
Diffusion-based kernel density estimation improves the assessment of carbon isotope modelling
Comparing differently sized data sets is one main task in model assessment and calibration. This is due to field data being generally sparse compared to simulated model results. We tackled this task by the application of a new diffusion-based kernel density estimator (diffKDE) that approximates probability density functions of a data set nearly independent of the amount of available data. We compared the resulting density estimates of measured and simulated marine particulate organic carbon-13 isotopes qualitatively and quantitatively by the Wasserstein distance. For reference we also show the corresponding comparison based on equally sized data set with reduced simulation and field data. The comparison based on all available data reveals a better fit of the simulation to the field data and shows misleading model properties in the masked analysis. A comparison between the diffKDE and a traditional Gaussian KDE shows a better resolution of data features under the diffKDE. We are able to show a promising advantage in the application of KDEs in calibration of models, especially in the application of the diffKDE.
Maria-Theresia Pelz, Christopher Somes
2023-08-29T13:12:46Z
http://arxiv.org/abs/2308.15282v1
# Diffusion-based kernel density estimation improves the assessment of carbon isotope modelling

###### Abstract

Comparing differently sized data sets is one main task in model assessment and calibration. This is due to field data being generally sparse compared to simulated model results. We tackled this task by the application of a new diffusion-based kernel density estimator (diffKDE) that approximates probability density functions of a data set nearly independent of the amount of available data. We compared the resulting density estimates of measured and simulated marine particulate organic carbon-13 isotopes qualitatively and quantitatively by the Wasserstein distance. For reference we also show the corresponding comparison based on equally sized data set with reduced simulation and field data. The comparison based on all available data reveals a better fit of the simulation to the field data and shows misleading model properties in the masked analysis. A comparison between the diffKDE and a traditional Gaussian KDE shows a better resolution of data features under the diffKDE. We are able to show a promising advantage in the application of KDEs in calibration of models, especially in the application of the diffKDE.

Keywords: data comparison, differently sized data, Earth system models, model assessment, model calibration, probability density functions

## 1 Introduction

Ocean data are highly diverse and thus require good performance of evaluation tools on different data features. They can describe individual biological, chemical or geological tracers, resolve physical properties of the ocean, and be linked to a specific location or time or even to each other. The different sources of marine data add further to this diversity [12, 18]. They can be collected by field measurements from a research vessel, time series stations, autonomous devices or fixed traps in the water column. Furthermore, they can be obtained as results from simulations of marine processes as, for example, included in Earth system models. Being influenced by various processes, marine data are often multimodal [2, 9], sometimes boundary close [11] and generally noisy [4, 27, 6]. Hence, a tool for the evaluation of marine data must account for these specific characteristics and resolve the true data structure under the noise.

Comparing marine data is a fundamental task in ocean research. It can be used to assess changes in measurement data, to evaluate projections and test cases from simulations, and to assess the quality of a model [20]. In many cases it is important to be able to compare differently sized data sets. In model assessment and calibration, the sparsity of field data induces the need to compare differently sized data sets: field data are generally only available at measured times and locations, whereas simulation data exist in every grid cell and for every time step. To compare such differently sized data, a mask can be applied to reduce the data to a comparable amount. In model assessment this mask is generally chosen to mark the grid cells where both data kinds are available, and only data from these grid cells are incorporated [22].

Approximated probability density functions (PDFs) make it possible to investigate data nearly independently of their size and thereby build a basis to compare differently sized data [21]. PDFs give an intuitive visual insight into the distribution of data and provide a continuous function for subsequent analyses. There are two main approaches towards the estimation of PDFs: parametric and non-parametric [24].
The parametric approach assumes the knowledge of an underlying specific density and aims at estimating its parameters. This can be an efficient way of approximation, but it requires the assumption of an underlying specific density to be true and hence is generally insufficient for data as diverse as those from the marine environment. The non-parametric approximation does not require any knowledge about the input data and attempts to estimate the density by weighting all input data equally. This offers the opportunity to explore data with multiple modes of unknown count and location, as in many marine data. The most prominent non-parametric PDF estimator is a kernel density estimator (KDE) [21]. There exists a variety of choices for KDEs. A common choice is a Gaussian KDE built on the density function of the Gaussian distribution. Unfortunately, this tends to oversmooth multimodal structure and is also inconsistent at the boundaries of restricted domains [1]. An improved approach to these specific tasks is a diffusion-based KDE [1] built on the solution of the diffusion heat equation.

We used a new implementation of a diffusion-based KDE (diffKDE) to show a new comparison possibility for simulated and measured marine particulate organic carbon-13 isotopes, expressed in the delta notation which is based on its ratio relative to carbon-12 (\(\delta^{13}\)C\({}_{\mathrm{POC}}\)). The diffKDE includes optimal smoothing properties for geoscientific data and resolves typical structures of marine data well [16]. Simulation results are obtained from [22] and corresponding field data from [26]. We created two test scenarios comparing (1) a traditional masked data approach with the comparison of equally sized data and (2) a full data approach with the comparison of all available data.

The paper is structured as follows: In the second section, we describe the method of kernel density estimation and introduce the two estimators applied in this study. Furthermore, we describe the example data of marine carbon-13 isotopes used. The third section shows the results of the two estimators on the test data in the two scenarios and four test cases each. The paper ends with a short discussion of the observed results and a concluding section.

## 2 Methods

We applied the non-parametric approach of a kernel density estimator (KDE) to approximate probability density functions (PDFs) of carbon isotope data. This allows for the comparison of differently sized data by switching from a direct data comparison to a comparison of their densities. The resulting estimates from simulated and measured field data can be compared qualitatively by eye and quantitatively by a divergence function [23]. In the following we briefly describe the applied KDEs and the incorporated data.

### 2.1 Kernel density estimation by diffusion and Gaussian kernels

A very common choice for a KDE is a weighted sum of Gaussian kernels [21]. This sum definition of a KDE can generally be given as [14]

\[\hat{f}:\mathbb{R}\times\mathbb{R}_{>0}\rightarrow\mathbb{R}_{\geq 0},\quad(x;t)\mapsto\frac{1}{n\sqrt{t}}\sum_{j=1}^{n}K\left(\frac{x-X_{j}}{\sqrt{t}}\right). \tag{1}\]

For the Gaussian KDE the kernel function \(K:\mathbb{R}\rightarrow\mathbb{R}_{>0}\) is set to be the density of the Gaussian distribution,

\[\Phi:\mathbb{R}\rightarrow\mathbb{R}_{\geq 0},\quad w\mapsto\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}w^{2}}. \tag{2}\]

The here applied Gaussian KDE is part of the stats package from the SciPy Python library [8].
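For illustration, the Gaussian estimator of Eqs. (1) and (2) can be reproduced in a few lines with this SciPy routine. The sample below is a synthetic stand-in for \(\delta^{13}\)C\({}_{\mathrm{POC}}\) values, and the default bandwidth choice (Scott's rule for \(\sqrt{t}\)) is an assumption of this sketch rather than the exact setting used in our analyses.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for d13C_POC values (per mil); purely illustrative.
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-28.0, 1.0, 40), rng.normal(-22.0, 1.5, 60)])

# Gaussian KDE as in Eq. (1) with the kernel of Eq. (2); the bandwidth sqrt(t)
# is chosen by Scott's rule unless bw_method is set explicitly.
kde = gaussian_kde(samples)

# Evaluate the estimated density on a regular grid covering the data range.
x = np.linspace(samples.min() - 2.0, samples.max() + 2.0, 400)
density = kde(x)

# The estimate is a proper density and integrates to approximately one.
print("approximate integral:", density.sum() * (x[1] - x[0]))
```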
Despite being widely applied, the Gaussian KDE has several disadvantages in the application on marine data. These include inconsistency at the boundaries of restricted data domains and oversmoothing of multimodal distributions [1]. To better account for these shortcomings, we expanded the Gaussian KDE approach to the diffusion-based KDE (diffKDE). The here applied diffKDE was proposed by [3], expanded by [1] and implemented by [16] in [17]. The motivation originates from the Gaussian distribution being closely related to the diffusion process. An increase in the variance parameter \(t\in\mathbb{R}_{>0}\) in Eq. 1 with the Gaussian kernel from Eq. 2 can be interpreted as an increase in time while solving the diffusion equation. This thought experiment is visualized in Fig. 1. Mathematically, the connection between the Gaussian KDE and the diffusion equation is given by the Gaussian kernel being a fundamental solution of the diffusion equation [3].

Figure 1: A Gauss kernel centered around 0 with increasing variance from \(\sigma=0.5\) to \(\sigma=1.5\).

The diffKDE is defined as the solution \(u\in C^{2,1}\left(\Omega\times\mathbb{R}_{>0},\mathbb{R}_{\geq 0}\right)\), which solves the diffusion partial differential equation

\[\frac{\partial}{\partial t}u\left(x;t\right)=\frac{1}{2}\frac{d^{2}}{dx^{2}}\left(\frac{u\left(x;t\right)}{p\left(x\right)}\right),\qquad x\in\Omega,\ t\in\mathbb{R}_{>0}, \tag{3}\]
\[\frac{\partial}{\partial x}\left(\frac{u\left(x;t\right)}{p\left(x\right)}\right)=0,\qquad x\in\partial\Omega,\ t\in\mathbb{R}_{>0}, \tag{4}\]
\[u\left(x;0\right)=\frac{1}{N}\sum_{j=1}^{N}\delta\left(x-X_{j}\right),\qquad x\in\Omega, \tag{5}\]

up to a final iteration time \(T\in\mathbb{R}_{>0}\). We chose this approach since the diffKDE promises better results on typical marine data properties. Due to the Neumann boundary conditions it is consistent at the boundaries. The incorporated parameter function \(p\in C^{2}\left(\Omega,\mathbb{R}_{>0}\right)\) induces adaptive smoothing, which leads to a better resolution of multiple and boundary close modes. The theoretical properties of the diffKDE are discussed in detail in [1]. The here employed implementation, its underlying algorithm for the solution of the partial differential equation in Eq. 3 and the specific choices for \(p\) and \(T\) are explained in detail in [16]. The applied software is available at [17].

To provide insight into the different performance on typical data structures, we used artificial random samples from known distributions and applied both KDEs. The resulting KDEs are shown in comparison to the true distribution in Fig. 2 in an adapted graphic from [16]. In comparison with the Gaussian KDE this algorithm shows superior resolution of multimodal and boundary close data. For the multimodal data the Gaussian KDE hardly detects the third mode in both observed cases. Furthermore, the pronunciation of the main mode is considerably underestimated by the Gaussian KDE in comparison to the diffKDE. The latter detects two modes for the case of 50 random samples and all three modes for 100 random samples. The main mode is well met for the larger random sample. The minima between the modes are always better resolved by the diffKDE than by the Gaussian KDE. For the boundary close data the diffKDE detects the height of the mode far better in both test cases than the Gaussian KDE. Furthermore, the steep decline left of the mode is far better met by the diffKDE than by the Gaussian KDE, which does not approach zero within the observed domain. This also results in an integral not equal to one for the Gaussian KDE, whereas the diffKDE integrates to one in all observed cases.
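To make the construction in Eqs. (3)-(5) more concrete, the following sketch evolves the empirical distribution under the plain heat equation, i.e. with the parameter function \(p\equiv 1\) and a fixed, hand-picked final time \(T\), using an explicit finite-difference scheme with reflecting (Neumann) boundaries. It is only a simplified stand-in for the actual diffKDE implementation of [16, 17], which uses an adaptive \(p\), a data-driven choice of \(T\) and a more robust solver; the function name and defaults below are ours.

```python
import numpy as np

def diffusion_kde_sketch(data, xmin, xmax, n_grid=256, T=0.05):
    """Simplified diffKDE: solve u_t = 0.5 * u_xx (p = 1 in Eq. 3) with Neumann
    boundaries (Eq. 4), starting from the empirical distribution (Eq. 5)."""
    edges = np.linspace(xmin, xmax, n_grid + 1)
    x = 0.5 * (edges[:-1] + edges[1:])      # bin centres serve as the spatial grid
    dx = x[1] - x[0]

    # Initial condition: data binned onto the grid, normalised to integrate to one
    # (assumes all data points fall inside [xmin, xmax]).
    u = np.histogram(data, bins=edges)[0].astype(float)
    u /= u.sum() * dx

    # Explicit Euler time stepping; dt <= dx^2 keeps this scheme stable.
    dt = 0.5 * dx**2
    for _ in range(int(np.ceil(T / dt))):
        u_pad = np.pad(u, 1, mode="edge")   # reflecting (zero-flux) boundary values
        u += 0.5 * dt / dx**2 * (u_pad[2:] - 2.0 * u + u_pad[:-2])
    return x, u
```

A larger \(T\) smooths the estimate more strongly, mirroring the bandwidth role of \(t\) in Eq. (1); the adaptive \(p\) and the automatic choice of \(T\) in the full diffKDE are what give the improved behaviour for multimodal and boundary close data described above.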
### 2.2 Carbon-13 isotope data for comparison

The simulation results are obtained from [22]. This model is built on the UVic Earth System Climate Model version 2.9 [5]. The general circulation has a \(1.8^{\circ}\times 3.6^{\circ}\) resolution and 19 vertical layers increasing with depth. The biogeochemical model is the Model of Ocean Biogeochemistry and Isotopes (MOBI) version 2.0. It simulates the latest findings of carbon cycling [10] and carbon isotopes [19].

The field data are the globally available carbon-13 isotope (\(\delta^{13}\)C\({}_{\rm POC}\)) data from [26]. The data cover the 1960s to the 2010s and all major ocean basins. A detailed description of the data set version is available at [25]. The data were interpolated onto the grid of the simulation model to make them comparable [25]. For scenario (1), the comparison of equally sized data sets, we applied a mask to both simulation and field data. This mask marks the grid cells where both data kinds are available, and only data from these grid cells are incorporated in the analyses for scenario (1). Scenario (2), on the other hand, incorporates all available data of the chosen time and domain.

Figure 2: Gaussian and diffKDE performance on known data. This figure is an adapted version from [16]. The known distribution is shown as the grey area in the background. The upper row of panels uses an artificial trimodal distribution. The lower row shows a lognormal distribution. All KDEs are calculated over the domain \([-1,12]\). The upper and lower left panels show the diffKDE in blue and the Gaussian KDE in orange on a random sample of 50 data points of the distribution in the background. The upper and lower right panels show the diffKDE and Gaussian KDE in the same way on a random sample of 100 data points of the distribution in the background.

## 3 Results

We show a model assessment approach based on the comparison of estimated densities. The simulation and field data are averaged \(\delta^{13}C_{\mathrm{POC}}\) data over the 1990s. We compare two scenarios: (1) a masked data approach, only incorporating data from grid cells where both data kinds, simulation and field, are available, and (2) a full data approach, incorporating all available data of the chosen time frame and areas. For each of the scenarios, we show four test cases: a comparison of all available data over all depths and ocean basins, a comparison of the data restricted to the euphotic zone, a comparison of the data restricted to the euphotic zone excluding the Southern Ocean, and a comparison of the data restricted to the euphotic zone and the Southern Ocean. For all four cases we show the comparison of simulation and field data by the diffKDE as well as the traditional Gaussian KDE. We also add to each comparison an error between the graphs for simulation and field data, measured by the Wasserstein distance [13]. The test cases discussed here are refined versions of experiments already previewed in [15].

We show the results in Fig. 3 and Fig. 4. In both figures, all four panels show the comparison of simulation and field data by their estimated densities. The continuous lines present the densities estimated from the simulation results and the dashed lines those estimated from the field data. The blue graphs are estimates by the diffKDE, the orange by the Gaussian KDE.
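The annotated errors and the two scenarios can be summarised in the following sketch. It assumes (our naming) that model output and the gridded observations are arrays on the same model grid with NaN marking cells without data, and it uses the SciPy Gaussian KDE merely as a placeholder for whichever estimator (diffKDE or Gaussian KDE) is being evaluated; the one-dimensional Wasserstein distance is computed between the two discretised densities on a common grid.

```python
import numpy as np
from scipy.stats import gaussian_kde, wasserstein_distance

def density_error(sim_values, obs_values, n_grid=400):
    """Wasserstein distance between the estimated densities of two samples."""
    lo = min(sim_values.min(), obs_values.min())
    hi = max(sim_values.max(), obs_values.max())
    x = np.linspace(lo, hi, n_grid)
    f_sim = gaussian_kde(sim_values)(x)   # placeholder for the chosen KDE
    f_obs = gaussian_kde(obs_values)(x)
    # The grid supplies the support; the density values act as (unnormalised) weights.
    return wasserstein_distance(x, x, u_weights=f_sim, v_weights=f_obs)

# sim_grid, obs_grid: hypothetical arrays on the model grid, NaN where no data exist.
def compare_scenarios(sim_grid, obs_grid):
    both = ~np.isnan(sim_grid) & ~np.isnan(obs_grid)
    err_masked = density_error(sim_grid[both], obs_grid[both])        # scenario (1)
    err_full = density_error(sim_grid[~np.isnan(sim_grid)],
                             obs_grid[~np.isnan(obs_grid)])           # scenario (2)
    return err_masked, err_full
```

Because the estimated densities, rather than the raw samples, enter the distance, the two data sets may have arbitrarily different sizes, which is what makes the unmasked scenario (2) possible.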
The number of incorporated data points for all is depicted as well as the error calculated by the Wasserstein distance for each KDE approach. First, we show scenario (1), the assessment based on equally sized data sets. This approach is often employed to compare only data, which has a directly corresponding equivalent in the respective other data set. A typical approach is to incorporate only grid cells into the analyses, where both data kinds are available [22]. By this, one will obtain a specific insight into how well a model simulates tracers at the exactly corresponding location. We realised this approach by using field data interpolated into the same grid as the simulation results [26] and applying a mask to the data before analyses that only allows the grid cells with both data kinds available. The overall comparison incorporates 232 data points and shows general too low values in the simulation. Field and simulation data show two main modes each. The lower in the field data is located at around \(\delta^{13}C_{\mathrm{POC}}=-28\) % and simulated at around \(\delta^{13}C_{\mathrm{POC}}=-29.5\) %c. The larger mode is located in the field data at around \(\delta^{13}C_{\mathrm{POC}}=-22\) %c and in the simulation results at around \(\delta^{13}C_{\mathrm{POC}}=-25\) %c. Here, the simulation results reveal an additional less pronounced mode at around \(\delta^{13}C_{\mathrm{POC}}=-23\) %c especially under the diffKDE analysis. Also the pronounciation of the modes is stronger under the simulation data. The lower modes of the field data reaches up to a density of around 0.16 under the diffKDE and around 0.12 under the Gaussian KDE. In comparison the corresponding modes reach up to a density of around 0.24 under the diffKDE and around 0.17 under the Gaussian. For the higher mode the densities of field data reach up to around 0.12 under the diffKDE and 0.95 under the Gaussian, those of simulation data up to 0.14 and 0.12 for diffKDE and Gaussian KDE, respectively. The errors between simulation and field data are with 0.0114 for the diffKDE and 0.0108 for the Gaussian KDE quite comparable and only slightly smaller under the Gaussian KDE analysis. The comparison of the euphotic zone data incorporates 157 data points and reveals a similar pattern as the whole ocean comparison. But the second mode is better fit this time. The location of the higher mode is still simulated too low in comparison with the field data. But with around \(\delta^{13}C_{\mathrm{POC}}=-23\) %c the density estimated from the simulation data is a far better fit to the field data in this case. Furthermore, the pronunciation of the higher mode is in good agreement between simulation and field data, now. Also in this case there is an expected third mode under the diffKDE in the simulation data at around \(\delta^{13}C_{\mathrm{POC}}=-25\) %c and in the field data at around \(\delta^{13}C_{\mathrm{POC}}=-26\) %c. The Gaussian KDE does not reveal these patterns. The errors have also reduced and are still comparable, with yet smaller values originating from the Gaussian KDE. The euphotic zone excluding the Southern Ocean data are 73 data points for each data kind and reveals the simulation of a far too low and more pronounced second lower mode. The main mode is in the field Figure 3: Scenario (1) of simulation and field data comparison by comparing KDEs of equally sized data: All four panels show in blue the diffKDE and in orange the Gaussian KDE. The continuous lines are the KDEs calculated from the simulation data. 
The dashed lines are the KDEs calculated from the field data. All data is averaged over the 1990s and only taken from grid cells, where both data kinds are available. The upper left panel shows the KDEs calculated from all simulation and field data. The upper right panel shows the KDEs calculated from the euphotic zone, i.e. the uppermost 130 m of the oceans. The lower left panel shows the KDEs calculated from the euphotic zones excluding the Southern Ocean, i.e. only data north from 45\({}^{\circ}\) S. The lower right panel shows the KDEs calculated from the euphotic zone of the Southern Ocean, i.e. only data south from 45\({}^{\circ}\) S. The annotated error in each panel is calculated as the Wasserstein distance between simulation and field data for each KDE type. This figure is an adapted version from [15]. data at around \(\delta^{13}C_{\rm{POC}}=-22\) %\({}_{\circ}\) and in the simulation data at around \(\delta^{13}C_{\rm{POC}}=-23\). A second lower mode is detectable in the field data only under the diffKDE at around \(\delta^{13}C_{\rm{POC}}=-24\). The simulation data exhibits a second lower mode far more clearly under both KDEs at around \(\delta^{13}C_{\rm{POC}}=-25\). A third mode at around \(\delta^{13}C_{\rm{POC}}=-17.5\) %\({}_{\circ}\) is visible in the field data under the diffKDE and met by no similar feature in the simulation data. The pronounciations of the modes are again stronger under the diffKDE. The errors are equal up to one thousand. The euphotic zone Southern Ocean data are 84 data points for each data kind and their analysis shows again too small values and a far too pronounced smallest mode in the simulation data compared to the field data. The main mode in the field data is located at around \(\delta^{13}C_{\rm{POC}}=-28\) %\({}_{\circ}\) and a corresponding main mode in the field data at around \(\delta^{13}C_{\rm{POC}}=-29.5\). A second smaller mode is located in the field data at around \(\delta^{13}C_{\rm{POC}}=-26\) %\({}_{\circ}\) and in the simulation data at around \(\delta^{13}C_{\rm{POC}}=-27\). In this case and generally for the only time in these examples, the mode in the field data is more pronounced than the mode in the simulation data. A final third mode is visible in the field data only under the diffKDE at around \(\delta^{13}C_{\rm{POC}}=-23\) %\({}_{\circ}\) and in the simulation data at around \(\delta^{13}C_{\rm{POC}}=-24.5\). The pronunciation of the modes is again stronger under the diffKDE. The errors show the biggest difference in this case with a smaller value for the Gaussian KDE. Second, we use the property of the diffKDE to evaluate unequally sized data in scenario (2) and compare all available data. We use the same four test cases as in the first masked analyses. The data are also the same simulation and field data from [22] and [26], respectively, but now all averaged 1990s data are incorporated without the prior restriction to a mask. This leads to the comparison of very differently sized data sets, only restricted to the main location characteristics such as euphotic zone or Southern Ocean describing the four test cases. We show the results in Fig. 4. The overall data comparison incorporates 59000 simulation and 261 field data points and reveals a better fit of the location of the higher mode between the two data kinds. The lower mode is located similar to the masked example in field an simulation data. The pronounciation of the modes is in better fit now. The higher mode is located in the field data as in the masked analysis. 
In the simulation data the mode is far higher located now at around \(\delta^{13}C_{\rm{POC}}=-21\) %\({}_{\circ}\) and by this far better fitting the field data. A third mode in the simulation data is located at around \(\delta^{13}C_{\rm{POC}}=-27\) %\({}_{\circ}\) in the simulation data only and even a fourth at around \(\delta^{13}C_{\rm{POC}}=-23\), mainly under the diffKDE. Some additional smaller structures are visible under the diffDKE in the simulation data between the two outermost modes. The error has decreased for both estimators and is now even smallest under the diffKDE with 0.0074 in comparison to 0.0114 in the masked analysis. The euphotic zone comparison incorporates 11885 simulation data points and 172 data points and reveals similar pattern changes in comparison to the masked comparison as the overall data comparison. The pronounciation of the lower mode is in better fit between the field and simulation data, but with nearly the same offset as in the masked example. The higher mode is located far higher for the simulation data this time at around \(\delta^{13}C_{\rm{POC}}=-21\), which leads to a similar offset as in the masked analysis, but now in the opposite direction. The pronounciation of this mode is now too strong in the simulation data in comparison to the field data. There are two additional modes visible in the simulation data at around \(\delta^{13}C_{\rm{POC}}=-27\) %\({}_{\rm{e}}\) and \(\delta^{13}C_{\rm{POC}}=-23\), especially under the diffKDE. A third mode in the field data is expectable at around \(\delta^{13}C_{\rm{POC}}=-26\) %\({}_{\rm{e}}\) under the diffKDE. The error has slightly increased in this example and is now also smaller for the diffKDE. The unmasked euphotic zone excluding the Southern Ocean data are 8946 simulation data points and 88 field data points and show a far better fit between both data kinds. The main mode is located at around \(\delta^{13}C_{\rm{POC}}=-22\) %\({}_{\rm{e}}\) in the field data and at around \(\delta^{13}C_{\rm{POC}}=-21\) %\({}_{\rm{e}}\) in the simulation data. The pronunciation fits well under the diffKDE and is stronger in the simulation data under the Gaussian KDE. A second and third mode are visible in the field data only under the diffKDE at around \(\delta^{13}C_{\rm{POC}}=-25\) %\({}_{\rm{e}}\) and \(\delta^{13}C_{\rm{POC}}=-27.5\), respectively. The lowest is fit well by a mode in the simulation data, that is more pronounced, but similarly located. A final third mode in the simulation data is again visible only under the diffKDE at around \(\delta^{13}C_{\rm{POC}}=-23\). The error has Figure 4: Scenario (2) of simulation and field data comparison by comparing KDEs of all available data: All four panels show in blue the diffKDE and in orange the Gaussian KDE. The continuous lines are the KDEs calculated from the simulation data. The dashed lines are the KDEs calculated from the field data. All data is averaged over the 1990s. The upper left panel shows the KDEs calculated from all simulation and field data. The upper right panel shows the KDEs calculated from the euphotic zone, i.e. the uppermost 130 m of the oceans. The lower left panel shows the KDEs calculated from the euphotic zones excluding the Southern Ocean, i.e. only data north from 45\({}^{\circ}\) S. The lower right panel shows the KDEs calculated from the euphotic zone of the Southern Ocean, i.e. only data south from 45\({}^{\circ}\) S. 
The annotated error in each panel is calculated as the Wasserstein distance between simulation and field data for each KDE type. This figure is an adapted version from [15]. again decreased from the masked analyses to this unmasked analyses for both KDEs and is smaller this time for the Gaussian KDE. The euphotic zone data restricted to the Southern Ocean incorporates 2939 simulation data points and 84 field data points and shows similar, but better fitting fits between the KDEs as in the masked analysis. The main mode in simulation and field data are similarly located as in the masked example, but this time with a slightly better fitting pronunciation. The location of the second mode is well fitting between simulation and field data in this unmasked analysis, while the pronunciation is similarly different as in the masked analysis. All of this leads to a decreased error, especially for the diffKDE. ## 4 Discussion Even highly increased availability of field data leaves these sparse in comparison to simulation results. The here incorporated \(\delta^{13}\)C\({}_{\rm{POC}}\) are a highly increased data set version of the at that time latest global version [7] with nearly 10 times as many globally available data points. Nevertheless, in comparison to simulation results these data are still sparse. This leads to the need to either reduce the data to comparable amounts or to use a comparison measure independent of the available data amount. Involving all available data improves insights into the model's performance. A decrease of data is inevitably correlated to a loss of information. This is why we chose to employ a comparison measure that allows to take into account all available data. Kernel density estimators approximate PDFs from nearly any amount of handed in data. We showed the performance of a classical Gaussian KDE in comparison to the new diffKDE. The diffKDE resolved always more structure of the underlying data and revealed modes that were not detectable under the Gaussian We conducted four test analyses in different global ocean subsets and see on all four a decrease of the error or at least better qualitative fit for the incorporation of all available data. This mainly comes from a better fit of the location of (one of) the main mode(s). This is generally too low in the density estimated from the masked simulation data. Furthermore, the pronunciation of the modes is often better fitting between unmasked simulation and field data than between the masked data. The best improvement of fit is observable in the euphotic zone excluding the Southern Ocean. Location and pronunciation of the main mode are well fitting in the unmasked data comparison, while its pronunciation is overestimated in the masked simulation data and location underestimated in the masked simulation data. The same is true for the second smaller mode, which is even far lower located in the unmasked analysis than in the masked analysis. Comparison between Gaussian KDE and diffKDE always shows more details revealed under the diffKDE than the Gaussian KDE, leading to a smaller error calculated from the Gaussian KDE. Again, this is most prominent in the Southern Ocean example, where a mode in the field data is detectable under the diffKDE at around \(\delta^{13}C_{\rm{POC}}=-27.5\) %c matching a similar one in the simulation data, that is not at all visible under the Gaussian KDE. 
Furthermore, two additional modes are visible under the diffKDE, in the field data at around \(\delta^{13}C_{\rm{POC}}=-25\) ‰ and in the simulation data at around \(\delta^{13}C_{\rm{POC}}=-23\) ‰, that are not visible under the Gaussian KDE.

## 5 Conclusion

The KDE-based comparison offers a possibility to compare differently sized data sets, as commonly required in model assessment. Such an assessment regularly compares simulation results with corresponding field data to validate the model results. This generally requires comparing differently sized data, since field data are sparse in comparison to globally available simulation data. A common approach is to reduce both data sets to a comparable amount by only incorporating data points from grid cells where both data kinds are available. This naturally leads to a loss of information, which can be avoided by the KDE-based approach. The KDE estimates the data's PDF nearly independently of the amount of available data points. The PDFs are then comparable by a proper divergence function such as the Wasserstein distance.

The model resolves general patterns of the observational data well. This is visible under all approaches to model assessment: the number of modes and the general shape of the KDEs are mostly comparable. The traditional approach with the masked data shows too low values from the simulation results in comparison with the field data. This improves considerably when all available data are taken into account. In this latter approach the error between simulation and field data generally decreases, while the fit of the location and pronunciation of the modes also improves. The overestimated values in the all-data comparison mostly occur in the whole ocean comparison: in the all-depths approach as well as in the euphotic zone comparison, the higher mode is simulated too high in the model and its pronunciation is too strong in the simulation results. Mostly underestimated in the all-data comparison are the values in the Southern Ocean; the main mode in the Southern Ocean data is too low and too strongly pronounced in the simulation data in comparison with the field data. In the whole ocean and euphotic zone comparisons the location of the lower mode is too low in the simulation data, but the pronunciations are comparable.

It is not yet clear what causes these model-data discrepancies. In several cases, simulation and field data seem to fit, but not exactly, as for example for the lower mode in the comparison excluding the Southern Ocean, where the mode is visible in both data kinds but far less expressed in the field data. Future investigations with a more detailed resolution of the ocean areas could clarify whether these discrepancies originate from sparse field data in these regions or from actual mismatches in the model. Overall, the comparison involving all available data provided a more accurate insight into the fit between simulation and field data. This is especially visible for the euphotic zone comparison excluding the Southern Ocean, where the number and location of modes fit significantly better. For future model assessments and calibration this approach can be used to obtain such accurate insight into a model's performance for a variety of applications and very differently sized field data sets.
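To make the comparison workflow summarized above concrete, the following minimal sketch estimates PDFs from a large "simulation" sample and a much smaller "field" sample and scores their agreement with the Wasserstein distance on a common \(\delta^{13}C_{\rm{POC}}\) grid. It is only an illustration of the idea: a standard Gaussian KDE stands in for the diffKDE (which is not reimplemented here), and all sample sizes and values are made up for the example.

```python
# Minimal sketch of the KDE-based model-data comparison: estimate PDFs from
# differently sized samples and score them with the Wasserstein distance.
# A Gaussian KDE stands in for the diffKDE; values are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde, wasserstein_distance

rng = np.random.default_rng(0)
sim = rng.normal(-21.0, 2.5, size=20000)    # dense "simulation" sample
field = rng.normal(-22.0, 3.0, size=200)    # sparse "field" sample

grid = np.linspace(-35.0, -10.0, 1000)      # common delta13C_POC grid (permil)
pdf_sim = gaussian_kde(sim)(grid)
pdf_field = gaussian_kde(field)(grid)

# Wasserstein distance between the two discretized PDFs; scipy normalizes
# the weights internally, so the raw KDE values can be passed directly.
err = wasserstein_distance(grid, grid, u_weights=pdf_sim, v_weights=pdf_field)
print(f"Wasserstein distance between the two KDEs: {err:.4f}")
```

The same score, computed once for masked and once for unmasked data, reproduces the kind of error comparison reported above.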
2301.06093
Accretion-induced Collapse of Dark Matter-admixed Rotating White Dwarfs: Dynamics and Gravitational-wave Signals
We present two-dimensional hydrodynamic simulations of the accretion-induced collapse (AIC) of rotating white dwarfs admixed with an extended component of dark matter (DM) composed of sub-GeV degenerate fermionic DM particles. We find that the DM component would follow the collapse of the normal matter (NM) component to become a bound DM core. Thus, we demonstrate how a DM-admixed neutron star could form through DM-admixed AIC (DMAIC) for the first time, with the dynamics of DM taken into account. The gravitational-wave (GW) signature from the DMAIC shows distinctive features. In the diffusive DM limit, the DM admixture indirectly suppresses the post-bounce spectral peak of the NM GWs. In the compact DM limit, the collapse dynamics of the DM in a Milky Way event generate GWs that are strong enough to be detectable by Advanced LIGO as continuous low-frequency ($< 1000$ Hz) signals after the NM core bounce. Our study not only is the first-ever computation of GW from a collapsing DM object but also provides the key features to identify DM in AIC events through future GW detections.
Ho-Sang Chan, Ming-chung Chu, Shing-Chi Leung
2023-01-15T13:33:23Z
http://arxiv.org/abs/2301.06093v1
Accretion-induced Collapse of Dark Matter-admixed Rotating White Dwarfs: Dynamics and Gravitational-wave Signals ###### Abstract We present two-dimensional hydrodynamic simulations of the accretion-induced collapse (AIC) of rotating white dwarfs admixed with an extended component of dark matter (DM) comprising of sub-GeV degenerate fermionic DM particles. We find that the DM component would follow the collapse of the normal matter (NM) component to become a bound DM core. Thus, we demonstrate how a DM-admixed neutron star could form through DM-admixed AIC (DMAIC) for the first time, with the dynamics of DM taken into account. The gravitational-wave (GW) signature from the DMAIC shows distinctive features. In the diffusive DM limit, the DM admixture indirectly suppresses the post-bounce spectral peak of the NM GWs. In the compact DM limit, the collapse dynamics of the DM in a Milky Way event generate GWs that are strong enough to be detectable by Advanced LIGO as continuous low-frequency (\(<1000\) Hz) signals after the NM core bounce. Our study not only is the first-ever computation of GW from a collapsing DM object but also provides the key features to identify DM in AIC events through future GW detections. Astronomical simulations(1857), Hydrodynamical simulations(767), Dark matter(353), White dwarf stars(1799), Stellar rotation(1629), Gravitational waves(678), Neutron stars(1108) + Footnote †: journal: ApJ 0000-0002-8070-788X]Ho-Sang Chan 0000-0002-3882-788X]Ming-chung Chu 0000-0002-3883-0886]Shing-Chi Leung ## 1 Introduction ### Dark Matter-admixed Astrophysical Objects It is widely believed that dark matter (DM) constitutes the major mass-energy component of galaxy clusters (Clowe et al., 2006) and large-scale structures of the Universe (Davis et al., 1985). Besides terrestrial experiments, physicists are tackling the DM problem through astrophysical observations. It is shown that in a region with a high concentration of DM particles, DM could be captured by normal matter (NM, Sulistiyowati et al., 2014; Arun et al., 2019). Therefore, it is natural to expect stellar objects composed of NM and DM. There have been extensive theoretical studies on the possible effects of the DM admixture on the stellar evolution (Lopes & Lopes, 2019; Clea et al., 2020; Raen et al., 2021). Unusual stellar objects consistent with these models might hint at the existence of DM-admixed stars. Furthermore, there are studies utilizing DM-admixed star models to understand the properties of DM. For instance, Leung et al. (2022) proposed a method for inferring the DM particle mass by measuring the tidal deformability of neutron stars. Using the DM-admixed neutron star model, Bramante et al. (2013) and Bell et al. (2013) gave constraints on the bosonic DM particle mass and annihilation cross section. These examples show that DM-admixed stellar objects could be a promising channel to probe astrophysical DM. ### Rotating White Dwarfs The majority of studies on white dwarfs (WD) assume they are not rotating, but observational evidence shows the opposite (Spruit, 1998; Kawaler, 2004). It was suggested that WDs gain angular momentum through accretion from a companion star (Langer et al., 2003; Yoon & Langer, 2004) or mergers between two or more WDs (Gvaramadze et al., 2019; Pshirkov et al., 2020). Therefore, rotation is an important ingredient of the full picture of WD structure and evolution (Yoon & Langer, 2004; Yoon, S.-C. & Langer, N., 2005). 
In addition, rotating WDs have been proposed to be progenitors of super-luminous thermonuclear supernovae because ro tating WDs could support more mass than their traditional Chandrasekhar limit (Pfannes, J. M. M. et al., 2010; Wang et al., 2014; Fink, M. et al., 2018). Recently, the effects of the strong magnetic field on the equilibrium structures of WDs have been studied (Franzon & Schramm, 2015; Bera & Bhattacharya, 2016; Chatterjee et al., 2017), for which the WD rotation takes a critical role. ### Accretion-induced Collapse It was widely believed that a WD would undergo a thermonuclear explosion when its mass is approaching the canonical Chandrasekhar limit. However, if the WD contains an oxygen-neon (O-Ne) core, Accretion-Induced Collapse (AIC) is possible as its mass increases towards the Chandrasekhar limit through stable accretion from a companion object (Nomoto & Kondo, 1991; Wang, 2018; Ruiter et al., 2019), though a binary WD merger seems to be another possible scenario (Liu & Wang, 2020). The collapse is triggered by electron capture in the degenerate matter, weakening the electron degenerate pressure (Brooks et al., 2017). On the other hand, pycnonuclear burning is also possible in such an extremely dense core. Hence the ultimate fate of an O-Ne WD would depend on the competition between nuclear runaway and electron capture (Wang & Liu, 2020). However, it was later found that the central temperature of O-Ne WDs is insufficient for explosive O-Ne burning (Wu & Wang, 2018). Even if deflagration occurs, it fails to unbind the WD, which directly leads to a collapse for a wide range of parameters (Leung & Nomoto, 2019; Zha et al., 2019; Leung et al., 2020). Besides the iron-core collapse of massive stars, the AIC of WDs has been proposed as another channel for forming neutron stars. However, AIC is much less luminous than typical core-collapse supernovae. The small amount of nickel synthesized indicates that AICs are usually faint transients (Darbha et al., 2010). On the other hand, AIC emits radio signatures (Moriya, 2016) and has been hypothesized as a source candidate for Fast Radio Bursts (Margalit et al., 2019) and Millisecond Pulsars (Wang et al., 2022). Electromagnetic-wave detection of AIC would be a challenging but possible task. One possible way to search for AIC is by neutrino detection because a neutrino burst should accompany AIC after the WD dynamical collapse (Dar et al., 1992). The burst luminosity could be as large as \(10^{55}\) erg s\({}^{-1}\)(Zha, 2019; Dessart et al., 2006). On the other hand, the collapse dynamics of the compact iron core are expected to produce strong GW signals (Ott et al., 2005; Ott, 2009). There have been some efforts to predict the GW signature from an AIC. Dessart et al. (2006) simulated 2D AIC with neutrino transport and estimated the GW emission from AIC via the Newtonian quadrupole formula. They concluded that LIGO-class detectors could detect Milky Way AIC events. Abdikamalov et al. (2010) found that the GW signals from an AIC show a generic "Type III" shape, though detailed neutrino physics have been omitted. ### Motivations Although DM-admixed neutron stars have been studied and applied to explain anomalous compact objects, there is still no in-depth research on their formation channel. Even though Leung et al. (2019); Zha et al. (2019) numerically investigated DM-admixed AIC (DMAIC), they assumed the DM component to be spherically symmetric and non-moving. As pointed out by Leung et al. 
(2019), the stationary DM approximation may break down if the dynamical time scales for DM and NM become comparable, and the dynamical modelling of the DM becomes important. They also pointed out that there is a moment during the collapse in which the NM has a mass density comparable with that of the DM. Also, Chan et al. (2021) showed that fermionic DM with a sub-GeV particle mass would produce a massive and extended component comparable in size to that of the NM. In such a scenario, modelling the DM dynamics would be necessary. In this study, we extend the multi-dimensional simulations by Zha et al. (2019) to include also the dynamical evolution of the DM component. Our study aims to investigate if DMAIC could make a DM-admixed neutron star, with the DM motion taken into account, and to predict the corresponding GW signature to facilitate the search for DM through observing AIC in the future. The structure of the paper is as follows: we present the method of obtaining the progenitor and the tools for simulation in Section 2. We then present the results of collapse dynamics and gravitational-wave signals in Section 3. Finally, we conclude our study in Section 4. ## 2 Methodology ### Equations of Hydrostatic Equilibrium We compute DM-admixed rotating WDs (DMRWDs) as DMAIC progenitors by solving the Newtonian hydrostatic equations, including the centripetal force: \[\vec{\nabla}P_{1}=-\rho_{1}\vec{\nabla}\Phi,\] \[\vec{\nabla}P_{2}=-\rho_{2}\vec{\nabla}\Phi+[\rho_{2}\omega_{2}( s)^{2}s]\hat{s}, \tag{1}\] \[\nabla^{2}\Phi=4\pi G(\rho_{1}+\rho_{2}).\] Here, the subscript \(i=1(2)\) denotes the DM (NM) quantities, and \(\rho\), \(P\), \(\omega\), and \(\Phi\) are the density, pressure, angular speed, and gravitational potential of the fluid element. \(s\) is the perpendicular distance from the rotation axis, and \(\hat{s}\) is the unit vector orthogonal to and pointing away from that axis. The angular velocity is assumed to be a function of \(s\) only. We consider the Newtonian framework because the rotation speed and compactness of WDs are low. We follow Eriguchi and Mueller (1985), Hachisu (1986) and Aksenov and Blinnikov (1994) to integrate the equations of equilibrium: \[\begin{split} H_{i}+\Phi+\delta_{i2}h_{i}^{2}\psi_{i}& =C_{i},\\ \int\frac{dP_{i}}{\rho_{i}}&=H_{i},\\ \int\omega(s)_{i}^{2}sds&=-h_{i}^{2}\psi_{i},\end{split} \tag{2}\] where \(C_{i}\) is an integration constant, \(H\) is the enthalpy, \(\psi\) is the rotational potential, and \(h^{2}\) is a constant to be determined (Hachisu, 1986). We solve the equilibrium equations for the DM and NM using a two-fluid, self-consistent field method (Chan et al., 2022). ### Rotation Rules We have considered rotation profiles for the NM from Hachisu (1986) and Yoshida (2019) including (1) the rigid rotation: \[\omega(s)_{2}^{2}=\Omega_{2}^{2}, \tag{3}\] and (2) the "Kepler" profile: \[\omega(s)_{2}^{2}\propto 1/(d^{3/2}+s^{3/2})^{2}, \tag{4}\] which resembles a rapidly rotating core surrounded by an envelope rotating at its Keplerian limit. Here, \(d\) is the rotating core radius. We integrate the angular velocity to obtain the effective potential of the rigid rotation: \[\psi_{2}=s^{2}/2. 
\tag{5}\] The effective potential for the Kepler rule is: \[\begin{split}\psi_{2}=-\frac{1}{9}\left[-\frac{6\sqrt{s}}{s^{ \frac{3}{2}}+d^{\frac{3}{2}}}+\frac{1}{d}\mathrm{ln}\left(\frac{(\sqrt{d}+ \sqrt{s})^{2}}{d+s-\sqrt{sd}}\right)\right.\\ \left.-\frac{2\sqrt{3}}{d}\mathrm{tan}^{-1}\left(\frac{1-2\sqrt{ s/d}}{\sqrt{3}}\right)\right].\end{split} \tag{6}\] ### Hydrodynamic Evolution To simulate DMAIC, we solve the two-dimensional Euler equations assuming axial symmetry: \[\begin{split}\partial_{t}\rho_{i}+\nabla\cdot(\alpha\rho_{i} \vec{v_{i}})=0,\\ \partial_{t}(\rho_{i}\vec{v_{i}})+\nabla\cdot[\alpha\rho_{i}( \vec{v_{i}}\otimes\vec{v_{i}})]+\nabla(\alpha P_{i})=\\ -\alpha(\rho_{i}-P_{i})\nabla\Phi.\end{split} \tag{7}\] Here, \(\alpha=\exp(-\phi/c^{2})\) is the lapse function with \(c\) being the speed of light. It is used to mimic general relativistic time-dilation effects and has been applied to study the first-order quantum chromodynamics phase transition in core-collapse supernovae (Zha et al., 2020). We also solve the advection equation for the NM total internal energy density \(\tau_{2}=\rho_{2}\epsilon_{2}+\rho_{2}v_{2}^{2}/2\) and electron fraction \(Y_{e}\): \[\begin{split}\partial_{t}\tau_{2}+\nabla\cdot[\alpha(\tau_{2}+P_ {2})\vec{v_{2}}]=-\alpha\rho_{2}\vec{v_{2}}\cdot\nabla\Phi,\\ \partial_{t}(\rho_{2}Y_{e})+\nabla\cdot(\alpha\rho_{2}Y_{e}\vec{v _{2}})=0.\end{split} \tag{8}\] The gravitational potential \(\Phi\) is solved by a multipole solver, for which we adopt the one by Couch et al. (2013) that can reduce error by computing the potential at the cell center while using the mass density at that point. To mimic general relativistic strong-field effects, we use the modified case A potential (Muller et al., 2008) as an additional correction to the Newtonian potential: \[\begin{split}\Phi\rightarrow\Phi-\langle\Phi\rangle+\Phi_{\mathrm{ TOV},1}+\Phi_{\mathrm{TOV},2},\\ \langle\Phi\rangle=-4\pi\int_{0}^{\infty}dr^{\prime}r^{\prime 2} \frac{\langle\rho_{1}+\rho_{2}\rangle}{|r-r^{\prime}|}.\end{split} \tag{9}\] Here, \(r\) is the radial distance and \(\langle\rho_{1}+\rho_{2}\rangle\) represents the angular average of the total density. \(\Phi_{\mathrm{TOV},i}\) for \(i=\)1, 2 are the relativistic corrections: \[\begin{split}\Phi_{\mathrm{TOV},i}=-4\pi\int_{0}^{\infty}\frac{ dr^{\prime}}{r^{\prime 2}}\frac{1}{\Gamma_{i}^{2}}\left(\frac{m_{\mathrm{TOV},i}}{4\pi}+r^{ \prime 3}P_{i}\right)\\ \left(1+\epsilon_{i}+\frac{P_{i}}{\rho_{i}}\right),\\ m_{\mathrm{TOV},i}=4\pi\int_{0}^{r}dr^{\prime}r^{\prime 2} \Gamma_{i}\rho_{i}(1+\epsilon_{i}),\\ \Gamma_{i}=\sqrt{1+v_{r,i}^{2}-\frac{2m_{\mathrm{TOV},i}}{r}}, \end{split} \tag{10}\] where \(v_{r,i}\) is the radial velocity. We adopt a finite-volume approach to solve the hydrodynamic equation in spherical coordinates (Mignone, 2014). We use the piecewise parabolic method (Colella and Woodward, 1984) to reconstruct primitive variables at the cell interface and the HLLE Riemann solver (Toro, 2009) to compute fluxes across cell boundaries. The reconstruction and flux evaluation are done on a dimension-by-dimension basis. We discretize the temporal evolution using the method of lines where the strong stability-preserving 5-step, 4th-order Runge-Kutta method is implemented (Gottlieb et al., 2011). 
In addition to the (modified) Euler equation, we also append the internal energy equation for the NM: \[\begin{split}\partial_{t}(\rho_{2}\epsilon_{2})+\nabla\cdot\{\alpha[(\rho_{2}\epsilon_{2})+P_{2}]\vec{v_{2}}\}=\\ \vec{v_{2}}\cdot\nabla(\alpha P_{2})-\alpha P_{2}(\vec{v_{2}}\cdot\nabla\Phi).\end{split} \tag{11}\] It not only allows one to interpolate the internal energy density \(\epsilon\) to the cell interface, so that the computational cost can be reduced, but also reduces the error of \(\epsilon\) due to advection (Zingale et al., 2020). We adopt a computational grid similar to that in Skinner et al. (2016), in which an analytic function describes the positions of the radial cell interfaces as: \[r_{i}=A_{t}\mathrm{sinh}(x_{t}i/A_{t}). \tag{12}\] Here, \(i\) is the cell index. We set \(x_{t}=0.5\) (in code units)\({}^{1}\) and \(A_{t}=150\) so that a central resolution of around 0.74 km is provided, while a total of 500 radial cells is used to contain the progenitor. We use 20 cells to resolve the polar direction, which we find sufficient for ensuring convergence in the GW signals for both the NM and DM.

Footnote 1: One code unit in length equals 1.4766839 km.

### Micro-physics

After mapping the density profiles of the NM and DM components computed from Section 2.1, we assign an initial temperature profile to the NM (Dessart et al., 2006): \[T(\rho)=T_{c}(\rho/\rho_{c})^{0.35}. \tag{13}\] Here, \(T_{c}=10^{10}\) K and \(\rho_{c}=5\times 10^{10}\) gcm\({}^{-3}\) are the central temperature and density, respectively. The core electron capture process initiates the AIC. We implement the parameterized electron capture scheme described in Liebendorfer (2005) to simulate such a process. In their work, \(Y_{e}\) depends on \(\rho_{2}\) as: \[x(\rho_{2})=\max\left[-1,\min\left(1,\frac{2\,\mathrm{log}\rho_{2}-\mathrm{log}\rho_{\alpha}-\mathrm{log}\rho_{\beta}}{\mathrm{log}\rho_{\alpha}-\mathrm{log}\rho_{\beta}}\right)\right], \tag{14}\] \[Y_{e}(x)=\frac{1}{2}(Y_{b}+Y_{a})+\frac{x}{2}(Y_{b}-Y_{a})+\] \[Y_{c}[1-|x|+4|x|(|x|-1/2)(|x|-1)].\] Here, log\(\rho_{\alpha}\), log\(\rho_{\beta}\), \(Y_{a}\), \(Y_{b}\), and \(Y_{c}\) are fitting parameters obtained by Leung et al. (2019) and Zha et al. (2019). We first assign an initial equilibrium \(Y_{e}\) profile to the NM using Equation 14. We then start the electron capture process by updating \(Y_{e}\) at each time step using the same equation. We force \(Y_{e}\) to strictly decrease with time. We terminate the electron capture process once the core bounce condition (Liebendorfer, 2005) is achieved, i.e. when the core NM entropy is larger than \(3k_{B}\), where \(k_{B}\) is the Boltzmann constant.

### Gravitational-wave Signals

We use the quadrupole formula in the weak-field approximation to compute the GW strain (Finn and Evans, 1990; Moenchmeyer et al., 1991): \[h_{+}=\frac{3}{2}\frac{G}{Dc^{4}}\mathrm{sin}^{2}\theta\frac{d^{2}}{dt^{2}}I_{zz}. \tag{15}\] Here, \(D=10\) kpc is the assumed distance, \(\theta\) is the orientation angle of the collapsing DMRWD, and \(I_{zz}\) is the moment of inertia tensor: \[I_{zz}=\frac{1}{3}\int_{\mathrm{All\ Space}}(\rho_{1}+\rho_{2})r^{2}P_{2}(\mathrm{cos}\theta)d\tau. \tag{16}\]

### Equations of State

To simulate AICs, we first use the ideal degenerate Fermi gas EOS for equilibrium structure construction. For the subsequent collapse dynamics, we use the nuclear matter EOS given by Shen et al. (2011), widely used in simulating core-collapse supernovae and neutron star dynamics.
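For reference, the ideal degenerate Fermi gas EOS has a standard closed form for a single species of non-interacting spin-1/2 fermions. The sketch below (cgs units) is written for that single-species case, which is how it applies to the DM fluid; for the NM electrons the mean molecular weight per electron would additionally enter the mass density, a detail omitted here. The example density is illustrative only, of the same order as the DM central densities listed in Table 1.

```python
# Sketch: zero-temperature ideal degenerate Fermi-gas pressure P(rho) for a
# single fermion species (spin-1/2, no interactions), cgs units.
import numpy as np

H_PLANCK, C_LIGHT = 6.626e-27, 2.998e10    # erg s, cm s^-1
GEV_TO_GRAM = 1.783e-24                    # 1 GeV/c^2 in grams

def fermi_pressure(rho, m_particle):
    """Pressure [erg cm^-3] at rest-mass density rho [g cm^-3]."""
    # dimensionless Fermi momentum x = p_F / (m c), from rho = m * n
    x = (3.0 * H_PLANCK**3 * rho
         / (8.0 * np.pi * m_particle**4 * C_LIGHT**3)) ** (1.0 / 3.0)
    prefac = np.pi * m_particle**4 * C_LIGHT**5 / (3.0 * H_PLANCK**3)
    return prefac * (x * (2.0 * x**2 - 3.0) * np.sqrt(x**2 + 1.0)
                     + 3.0 * np.arcsinh(x))

# Example: 0.1 GeV fermionic DM at an illustrative central density
m_dm = 0.1 * GEV_TO_GRAM
print(fermi_pressure(1.0e9, m_dm))   # ~order of the rho_1c values in Table 1
```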
We adopt the ideal degenerate Fermi gas EOS for the DM component (Narain et al., 2006). ## 3 Results and Discussion We define \(\bar{t}\) as the time after the NM core bounce, and we terminate our simulations at \(\bar{t}=0.1\) s. ### The Diffusive Dark Matter Limit We have computed a series of DMRWD models as DMAIC progenitors. The stellar parameters of these progenitors have been listed in Table 1 for reference. The progenitors have DM mass fractions \(\epsilon_{\mathrm{DM}}=M_{\mathrm{DM}}/(M_{\mathrm{DM}}+M_{\mathrm{NM}})\), where \(M_{\mathrm{NM}}\) (\(M_{\mathrm{DM}}\)) is the NM (DM) mass, from 0.01 to 0.2 and include rigidly-rotating and differentially-rotating DMRWDs with different \(d\) as described in Equation 4. In particular, \(d\) is chosen so that \(\rho_{2}(r=d,\theta=\frac{\pi}{2})=\alpha_{d}\rho_{2c}\). We choose \(\alpha_{d}=0.1\) and 0.01. Another free parameter to be specified for these progenitors is the central angular velocity \(\Omega_{c}\). We adjust this value for rigidly-rotating DMRWDs so that the corresponding pure NM progenitor almost rotates at the Keplerian limit and that a total mass of \(\approx 1.8\)\(M_{\odot}\) is achieved for a pure NM, differentially rotating WD. We fix the DM particle mass to be 0.1 GeV for all of these progenitors. As shown in Chan et al. (2021), the fluid component formed by DM particles with such a mass will be more diffusive and comparable in size to that of the NM. #### 3.1.1 The Collapse Dynamics We first focus on the collapse dynamics of DMAIC. From Table 1, we observe that the admixture of DM delays the time of core bounce and reduces the proto-neutron star mass, which is similar to the results by Leung et al. (2019) and Zha et al. (2019). We show the maximum NM density evolution for the rigidly-rotating DMAIC models in the right panel of Figure 1. Despite having different initial and proto-neutron star masses (c.f. Table 1), the maximum NM density evolution is almost identical for all DMAIC models. The final maximum NM densities are also insensitive to the DM mass fraction \(\epsilon_{\rm DM}\). Furthermore, we find that AIC is successful for all DMAIC progenitors. This differs from the results presented by Leung et al. (2019) and Zha et al. (2019) because we assume different DM particle masses. Their work assumed a heavy (1 GeV) DM particle mass, leading to a more compact DM core with a large central density. Thus, it significantly impacts the NM density profile near its centre. The NM density decreases sharply due to the strong gravitational force provided by the compact DM core. Electron capture is less efficient in their model, so the NM component's effective adiabatic index remains near \(\frac{4}{3}\). We assumed a light (0.1 GeV) DM particle mass in our study. The DM component is more diffusive and extended. Hence it brings a less significant impact to the NM density profile near its core. We show how the NM density profile changes with increasing DM mass fraction \(\epsilon_{\rm DM}\) in the right panel of Figure 2. We observe that when more DM is admixed, the core of the NM component remains almost unchanged. Since the collapse dynamics of a WD are governed by the dense core, where \(\rho_{2}\) is large enough to initiate electron capture, it is natural to expect generic collapse dynamics for all rigidly-rotating DMRWDs. 
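The deleptonization law of Equation 14 that drives this core-dominated behaviour is straightforward to evaluate; the sketch below writes it with \(2\,\mathrm{log}\rho_{2}\) in the numerator (the standard Liebendorfer 2005 form) and enforces the monotonic decrease of \(Y_{e}\) in time. The fit parameters are placeholders chosen only to give a qualitatively sensible curve; they are not the values actually fitted by Leung et al. (2019) and Zha et al. (2019).

```python
# Sketch of the parameterized electron-capture (deleptonization) scheme of
# Eq. (14). The fit parameters below are placeholders, NOT the fitted values
# used in the paper (those come from Leung et al. 2019 / Zha et al. 2019).
import numpy as np

LOG_RHO_A, LOG_RHO_B = 13.0, 7.5     # placeholder log10 transition densities
Y_A, Y_B, Y_C = 0.50, 0.28, 0.035    # Y_A: low-density end, Y_B: high-density end

def ye_equilibrium(rho2):
    """Equilibrium electron fraction at NM density rho2 [g cm^-3], Eq. (14)."""
    x = (2.0 * np.log10(rho2) - LOG_RHO_A - LOG_RHO_B) / (LOG_RHO_A - LOG_RHO_B)
    x = np.clip(x, -1.0, 1.0)
    ax = np.abs(x)
    return (0.5 * (Y_B + Y_A) + 0.5 * x * (Y_B - Y_A)
            + Y_C * (1.0 - ax + 4.0 * ax * (ax - 0.5) * (ax - 1.0)))

def update_ye(ye_old, rho2):
    """Y_e is forced to decrease strictly monotonically during the collapse."""
    return np.minimum(ye_old, ye_equilibrium(rho2))
```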
\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Model & \(M_{\rm NM}\) & \(M_{\rm DM}\) & \(\alpha_{d}\) & log\({}_{10}\rho_{1c}\) & \(\Omega_{c}\) & \(R_{\rm eNM}\) & \(R_{\rm eDM}\) & \(\epsilon_{\rm DM}\) & \(t_{b}\) & log\({}_{10}\rho_{1b}\) & log\({}_{10}\rho_{2b}\) & \(M_{\rm PNS}\) \\ - & (\(M_{\odot}\)) & (\(M_{\odot}\)) & - & (gcm\({}^{-3}\)) & (s\({}^{-1}\)) & (km) & (km) & - & (ms) & (gcm\({}^{-3}\)) & (gcm\({}^{-3}\)) & (\(M_{\odot}\)) \\ \hline Rigid-NM & 1.477 & 0.000 & - & - & 10.8 & 1105 & - & 0 & 53.151 & - & 14.318 & 1.217 \\ Rigid-0.01 & 1.447 & 0.015 & - & 8.816 & 10.8 & 1098 & 400 & 0.01 & 53.424 & 11.078 & 14.317 & 1.194 \\ Rigid-0.03 & 1.416 & 0.044 & - & 8.980 & 10.8 & 1027 & 568 & 0.03 & 53.717 & 11.093 & 14.315 & 1.170 \\ Rigid-0.05 & 1.397 & 0.074 & - & 9.046 & 10.8 & 980 & 695 & 0.05 & 53.902 & 11.097 & 14.316 & 1.157 \\ Rigid-0.07 & 1.384 & 0.104 & - & 9.086 & 10.8 & 948 & 807 & 0.07 & 54.035 & 11.100 & 14.315 & 1.146 \\ Rigid-0.09 & 1.374 & 0.136 & - & 9.114 & 10.8 & 923 & 904 & 0.09 & 54.139 & 11.102 & 14.315 & 1.139 \\ Rigid-0.1 & 1.307 & 0.152 & - & 9.125 & 10.8 & 910 & 954 & 0.1 & 54.183 & 11.103 & 14.315 & 1.136 \\ Rigid-0.2 & 1.343 & 0.336 & - & 9.193 & 10.8 & 857 & 1379 & 0.2 & 54.496 & 11.107 & 14.313 & 1.119 \\ Kepler-NM-d001 & 1.770 & 0.000 & 0.01 & - & 32.5 & 1826 & - & 0 & 35.124 & - & 14.355 & 1.597 \\ Kepler-0.01-d001 & 1.725 & 0.017 & 0.01 & 8.853 & 32.5 & 1766 & 420 & 0.01 & 35.352 & 11.073 & 14.352 & 1.553 \\ Kepler-0.03-d001 & 1.662 & 0.051 & 0.01 & 9.016 & 32.5 & 1494 & 587 & 0.03 & 35.647 & 11.086 & 14.351 & 1.498 \\ Kepler-0.05-d001 & 1.623 & 0.085 & 0.01 & 9.082 & 32.5 & 1343 & 710 & 0.05 & 35.835 & 11.091 & 14.352 & 1.463 \\ Kepler-0.07-d001 & 1.596 & 0.120 & 0.01 & 9.122 & 32.5 & 1256 & 812 & 0.07 & 35.969 & 11.094 & 14.350 & 1.439 \\ Kepler-0.09-d001 & 1.577 & 0.156 & 0.01 & 9.150 & 32.5 & 1190 & 904 & 0.09 & 36.072 & 11.096 & 14.350 & 1.419 \\ Kepler-0.1-d001 & 1.569 & 0.174 & 0.01 & 9.162 & 32.5 & 1159 & 948 & 0.1 & 36.116 & 11.097 & 14.350 & 1.411 \\ Kepler-0.2-d001 & 1.518 & 0.379 & 0.01 & 9.232 & 32.5 & 1020 & 1352 & 0.2 & 36.424 & 11.101 & 14.349 & 1.362 \\ Kepler-NM-d001 & 1.771 & 0.000 & 0.1 & - & 45.2 & 1106 & - & 0 & 32.311 & - & 14.354 & 1.598 \\ Kepler-0.01-d01 & 1.727 & 0.017 & 0.1 & 8.860 & 45.2 & 1098 & 417 & 0.01 & 32.555 & 11.070 & 14.353 & 1.555 \\ Kepler-0.03-d01 & 1.677 & 0.052 & 0.1 & 9.026 & 45.2 & 1062 & 579 & 0.03 & 32.788 & 11.084 & 14.354 & 1.511 \\ Kepler-0.05-d01 & 1.647 & 0.087 & 0.1 & 9.094 & 45.2 & 1034 & 700 & 0.05 & 32.924 & 11.088 & 14.351 & 1.483 \\ Kepler-0.07-d01 & 1.625 & 0.122 & 0.1 & 9.135 & 45.2 & 1007 & 801 & 0.07 & 32.024 & 11.092 & 14.351 & 1.460 \\ Kepler-0.09-d01 & 1.609 & 0.159 & 0.1 & 9.164 & 45.2 & 987 & 892 & 0.09 & 33.101 & 11.095 & 14.352 & 1.446 \\ Kepler-0.1-d01 & 1.602 & 0.178 & 0.1 & 9.176 & 45.2 & 980 & 941 & 0.1 & 33.133 & 11.096 & 14.352 & 1.439 \\ Kepler-0.2-d01 & 1.556 & 0.389 & 0.1 & 9.247 & 45.2 & 916 & 1343 & 0.2 & 33.364 & 11.101 & 14.355 & 1.396 \\ \hline \end{tabular} Note. – In this table, \(R_{\rm eNM}\) (\(R_{\rm eDM}\)) is the equatorial radius of the progenitor for the NM (DM) component. \(\rho_{1c}\) is the DM central density, \(\epsilon_{\rm DM}\) is the DM fraction, and \(t_{b}\) is the bounce time. \(\rho_{2b}\) (\(\rho_{\rm 1b}\)) is the maximum NM (DM) density at the core bounce. \(M_{\rm PNS}\) is the proto-neutron star mass, defined as summing all the NM mass with \(\rho_{2}>10^{11}\) gcm\({}^{-3}\) at the end of the simulation. 
\end{table} Table 1: Stellar parameters for different DMAIC progenitors. They include rigid (labelled Rigid) and differentially (labelled Kepler) rotating DMRWDs. All progenitors have NM central density of \(5\times 10^{10}\) gcm\({}^{-3}\). The DM particle mass is 0.1 GeV. We find that the DM component collapses with the NM component to form a bound DM core. We show the DM density profile evolution in the left panel of Figure 3. The DM density evolves similarly to the NM density, but it remains stable after the NM core bounce. We show the DM density profile evolution for a particular model Rigid-0.01 in the right panel of Figure 3 as an example. The DM radius contracts from \(\sim 350\) km at \(\bar{t}=0.014\) s, to \(\sim 180\) km at \(\bar{t}=0.01\) s. Although the DM radius increases at \(\bar{t}=0.02\) s, the DM component gradually contracts to \(\sim 200\) km at \(\bar{t}=0.03\) s and pulsates around \(\sim 180-200\) km. This suggests that a bound DM component has formed with negligible mass loss. We show the DM velocity profile evolution of the same DMRWD model in the left panel of Figure 3. The post-bounce velocity shock breaks through the DM surface around \(\bar{t}=0.01\) s. However, the shock is too weak to unbind the DM component. The shock gradually weakens and becomes a sound wave that propagates inside the DM component. This also explains the pulsation of the DM component between \(\bar{t}=0.03\) and \(0.049\) s. #### 3.1.2 The Formation of DM-admixed Neutron Stars Figure 1: Evolution of the maximum density of the rigidly-rotating DMAIC models. The left (right) panel is for the DM (NM) component. Since there are only minimal deviations among different DM-admixed models, we show a magnified density evolution plot in each panel. Figure 4: NM density contour plot for two different rigidly-rotating DMAIC models at the end of the simulations. The right (left) plot is for the Rigid-NM (Rigid-0.2) model. Densities are in the \(\log_{10}\) scale of g cm\({}^{-3}\). The radial distance is in km. Figure 3: Evolution of the DM radial density and velocity profiles of the Rigid-0.01 DMAIC model. The left (right) panel is for the velocity (density). The upper (lower) sub-panel in each panel is for the polar (equatorial) profiles. Figure 2: Initial density profiles for the rigidly-rotating DMAIC progenitors. The left (right) panel is for the DM (NM) component. The upper (lower) sub-panel in each panel is for the polar (equatorial) density profiles. What are the astrophysical implications of our findings? DM-admixed neutron stars have been extensively studied in the past decade. For instance, Bhat and Paul (2020) showed that the admixture of DM can explain the cooling rate of some pulsars/neutron stars, such as PSR B0656+14, PSR B1706-44 and PSR B2334+61, which could not be explained if the popular APR equation of state (EOS) is assumed. Das et al. (2021) and Lee et al. (2021) discuss the anomalous 2.6 \(M_{\odot}\) object from the gravitational-wave event GW190814 (Abbott et al., 2020) as a possible DM-admixed neutron star. However, the formation channel of DM-admixed neutron stars has never been addressed in depth. Although Zha et al. (2019) performed DMAIC simulations, their work assumed that the DM is compact and static. Our self-consistent, two-fluid simulations show that the AIC of a DMRWD would produce a DM-admixed (rotating) neutron star, such that the DM component is gravitationally bound with negligible mass loss. 
The collapse of DM also happens with a time scale similar to that of NM. Therefore, we have shown numerically that it is possible to form a DM-admixed neutron star through DMAIC. #### 3.1.3 Gravitational-wave Signatures The non-luminous nature of the DM makes it difficult to be detected through conventional telescopes. The weak electromagnetic signatures from a typical AIC also hinder indirect DM detection by comparing AIC luminosities. Therefore, we rely on the GW signatures generated by both the NM and DM components. \begin{table} \begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{ -} & \multicolumn{1}{c}{Rigid} & \multicolumn{1}{c}{Kepler-d001} & \multicolumn{1}{c}{Kepler-d01} \\ \hline DM-0.01 & 0.306 & 13.570 & 19.217 \\ DM-0.03 & 0.870 & 26.151 & 42.386 \\ DM-0.05 & 1.263 & 29.103 & 41.173 \\ DM-0.07 & 1.596 & 36.149 & 41.169 \\ DM-0.09 & 1.844 & 38.256 & 44.517 \\ DM-0.1 & 1.992 & 40.916 & 43.785 \\ DM-0.2 & 2.784 & 41.845 & 52.794 \\ \hline \end{tabular} \end{table} Table 2: GW mismatch (in %) with respect to the pure NM model for DMAICs with different initial rotation profiles. See Table 1 for the simulation parameters of these models. Figure 5: Total GW strains for the rigidly-rotating DMAIC models. We normalised all the GW strains to the corresponding maximum amplitude of the Rigid-NM model. The normalisation constant is \(7.53\times 10^{-21}\). Figure 6: Same as Figure 5, but for the Kepler-rotating and \(\alpha_{d}=0.1\) models with a normalisation constant of \(5.05\times 10^{-21}\). Figure 7: Power spectral density of DMAIC GWs for DM-RWDs rotating in the Kepler rule with \(\alpha_{d}=0.1\). Equation 14 suggests that the moment of inertia tensor \(I_{zz}\) is separable into individual DM and NM components: \[\begin{split} I_{zz}=I_{zz,1}+I_{zz,2},\\ I_{zz,i}=\frac{1}{3}\int_{\text{All Space}}\rho_{i}r^{2}P_{2}(\text {cos}\theta)d\tau.\end{split} \tag{17}\] Since the DM only interacts with NM through gravity, the Euler equation for the DM component does not contain any non-trivial NM-related terms except the gravitational potential \(\Phi\). Hence, the GW signature from the AIC of a DMRWD can be separated into the DM and NM contributions: \[\begin{split} h_{+}=h_{+,1}+h_{+,2},\\ h_{+,i}=\frac{3}{2}\frac{G}{De^{4}}\text{sin}^{2}\theta\frac{d^{ 2}}{dt^{2}}I_{zz,i}.\end{split} \tag{18}\] To compute \(h_{+,i}\), we make use of Equation (16) in Ott et al. (2004) and substitute all the components of \(\vec{v}\) and \(\rho\) by the corresponding DM/NM values. It is also a common practice to study GW strains by time-frequency analysis. To obtain the GW spectrogram, we perform a windowed Fourier transform: \[\tilde{h}^{*}(f,t)=\int_{-\infty}^{\infty}h_{+}(\tau)w(t,\tau)\text{exp}(-2 \pi if\tau)d\tau. \tag{19}\] Here, \(w(t,\tau)\) is the window function, and we choose the Hann window. We first show the AIC GWs generated by the rigidly-rotating DMRWD models in Figure 5. The GWs are all generic Type-I waveforms (Fryer and New, 2011). There are no considerable differences in the GW signature with respect to all DM-admixed models. This contrasts with the results presented by Zha et al. (2019), where they show enhanced amplitudes during \(\bar{t}=0\) s. This is because the contributions to the GW strains are mainly from the innermost core (\(\sim 10\) km). We have shown in the previous section that the effects of admixing 0.1 GeV DM on the NM density profile are mainly at the NM outer envelope. The NM collapse dynamics are also generic for all DM-admixed models. 
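To illustrate how these waveforms are obtained in practice, the minimal sketch below evaluates the component strains of Equations 15 and 17–18 from already-extracted \(I_{zz}\) time series and computes the Hann-windowed spectrogram of Equation 19. The extraction of \(I_{zz}\) from the hydrodynamic grid (Equation 16) and all numerical inputs are assumed to be supplied by the simulation output; this is a sketch, not the code used in the paper.

```python
# Sketch of Eqs. (15), (17)-(19): separate NM/DM quadrupole strains and a
# Hann-windowed spectrogram of the total signal. Assumes evenly sampled I_zz
# time series (cgs units) are already available from the simulation output.
import numpy as np
from scipy.signal import spectrogram

G, C_LIGHT = 6.674e-8, 2.998e10           # cgs
KPC = 3.086e21
D, THETA = 10.0 * KPC, np.pi / 2.0        # assumed distance and orientation

def strain(izz, dt):
    """h_+ of one component via the quadrupole formula (Eq. 15)."""
    d2I = np.gradient(np.gradient(izz, dt), dt)   # second time derivative
    return 1.5 * G / (D * C_LIGHT**4) * np.sin(THETA) ** 2 * d2I

def total_strain(izz_nm, izz_dm, dt):
    """Eq. (18): the NM and DM contributions simply add."""
    return strain(izz_nm, dt) + strain(izz_dm, dt)

def gw_spectrogram(h_plus, dt):
    """Eq. (19): short-time Fourier analysis with a Hann window."""
    return spectrogram(h_plus, fs=1.0 / dt, window="hann", nperseg=256)
```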
We append the NM density contour plots of models NM-Rigid and DM-Rigid-0.2 in Figure 4 for comparison. We observe that the dense core, which corresponds to the major part of the proto-neutron star of the DM-admixed model, is almost identical to that of the pure NM counterpart. This explains why the GW signatures from rigidly-rotating DMRWDs are all generic. However, the situation is different for differentially rotating progenitors. We show the GW strains of the Kepler-rotating and \(\alpha_{d}=0.1\) model in Figure 6. We find that the DM admixture indirectly suppresses the post-bounce 3rd and 4th peaks of the GW strains. This could also be observed as the gradual disappearance of the 3rd and 4th spectral peaks in Figure 7. Therefore, the GW strains of DMAIC are qualitatively different from that of the pure NM model. We find that the spectral peaks exist for the pure NM model because the reflected shock waves pass through the NM core and make it pulsate non-radially. The corresponding pulsation amplitudes for the DM-admixed models are smaller, resulting in weaker GW signatures. We find similar results for the Kepler-rotating and \(\alpha_{d}=0.01\) model, except that the 4th spectral peak never exists for the pure NM and hence, the DM-admixed models. The DM component is more diffusive for fermionic DM with a particle mass of 0.1 GeV when compared to those with heavier DM particle mass. As such, the collapse dynamics of the DM component could not produce GW amplitudes comparable to that of the NM component. The effects of DM admixture on the total GW signatures are, therefore, indirect. To quantitatively determine whether such effects could be observable, we compute the mismatch \(\mathfrak{M}\), which quantifies how dissimilar two waveforms are (Reisswig and Pollney, 2011; Richers et al., 2017): \[\mathfrak{M}=1-\text{max}\left(\frac{\langle h_{a},h_{b}\rangle}{\sqrt{\langle h _{a},h_{a}\rangle\langle h_{b},h_{b}\rangle}}\right). \tag{20}\] The second term here contains the match between two waveforms \(h_{a}\) and \(h_{b}\): \[\langle h_{a},h_{b}\rangle=\int_{0}^{\infty}\frac{4\tilde{h_{a}}^{*}\tilde{h_ {b}}}{s}df. \tag{21}\] Here, \(s\) is the estimated noise amplitude spectral density of the Advanced LIGO (Barsotti et al., 2018). \(\tilde{h}^{*}\) is the Fourier transform of the GW strain, which is just Equation 19 but with \(w(t,\tau)=1\). The mismatch is maximized over the relative phase, amplitudes, and arrival times. We follow Zha et al. (2019) to set the integration limit of Equation 21 to be from 100 Hz to 2000 Hz. The computations are facilitated through the open-source package PyCBC (Nitz et al., 2022). We extract GW waveforms for all the models listed in Table 1 with a time window of \(-0.01\) s \(<\bar{t}<0.05\) s and compute the mismatches with respect to the pure NM model. The results are listed in Table 2. The mismatches for the rigidly-rotating DMAIC models are small, which is no surprise because the GW waveforms of the DM-admixed models in such a scenario are very similar to that of the pure NM counterpart. The mismatches for the Kepler-rotating DMAIC models, however, are relatively large. The presence of a 1% of DM can be inferred from future GW detection produced by DMAIC if Advanced LIGO can distinguish two waveforms with an accuracy better than 14%. ### The Compact Dark Matter Limit The properties of a Fermionic DM-admixed compact star were shown to be sharply changing around DM particle mass of 0.1 GeV (Leung et al., 2022). 
To better capture the transitional effects from a sub-GeV to a GeV mass, we include progenitor models admixed with fermionic DM of particle mass 0.3 GeV. Furthermore, the progenitors are all differentially-rotating DMRWDs with \(\alpha_{d}=0.5\). For reference, we include the parameters of these additional models in Table 3. We generally find collapse dynamics for the DM and NM components similar to those of the diffusive DM limit. For instance, we find a delay in the NM bounce time and the successful formation of a DM-admixed neutron star. An in-depth discussion of the collapse dynamics of DMAIC under the compact DM limit is therefore omitted. related GW signals are emitted after \(\bar{t}=0.1-0.2\) s (Zha, 2019). Therefore, any low-frequency signals observed before \(\bar{t}=0.1\) s could be direct evidence of a compact DM admixture. The DM GW waveforms (see Figure 10) are consistent with the Type III collapsing polytrope waveforms presented in Fryer and New (2011).

#### 3.2.2 Detection Prospect

To study the detectability of the DM GW signals, we compute the dimensionless characteristic GW strain (Flanagan and Hughes, 1998): \[h_{\rm char}=\sqrt{\frac{2}{\pi^{2}}\frac{G}{c^{3}}\frac{1}{D^{2}}\frac{dE_{\rm GW}}{df}}. \tag{22}\] Here, \(\frac{dE_{\rm GW}}{df}\) is the GW spectral energy (Murphy et al., 2009): \[\frac{dE_{\rm GW}}{df}=\frac{3}{5}\frac{G}{c^{5}}(2\pi f)^{2}|\tilde{h}_{+}|^{2}. \tag{23}\] We compare \(h_{\rm char}f^{-1/2}\) with the Advanced LIGO noise spectral density \(\sqrt{s(f)}\) in Figure 11. In the same figure, we mark vertical lines corresponding to the peak frequencies of the DM GW waveforms (see Figure 12). We choose the sampling window as \(\bar{t}>0\) s. The DM characteristic GW strains corresponding to the frequency peaks are above the Advanced LIGO sensitivity curve for all of our considered models, assuming \(D=10\) kpc. Hence, the GW signature of a collapsing, compact DM in a Milky Way DMAIC event should be detectable by Advanced LIGO. Our results represent the first-ever numerical calculation of the GW waveforms of a collapsing DM core

Figure 11: Scaled characteristic DM GW strains for the differentially-rotating DMAIC models with \(\alpha_{d}=0.5\). Peak frequencies obtained from the Fourier transform are marked as vertical black dotted lines.

Figure 12: Fourier transformed amplitude of the DM GWs against frequency for 4 different DMAIC models with \(\alpha_{d}=0.5\).
\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline Model & \(M_{\rm NM}\) & \(M_{\rm DM}\) & log\({}_{10}\rho_{1c}\) & \(R_{\rm eNM}\) & \(R_{\rm eDM}\) & \(\epsilon_{\rm DM}\) & \(t_{b}\) & log\({}_{10}\rho_{1b}\) & log\({}_{10}\rho_{2b}\) & \(M_{\rm PNS}\) & \(\mathfrak{C}\) \\ - & (\(M_{\odot}\)) & (\(M_{\odot}\)) & (gcm\({}^{-3}\)) & (km) & (km) & - & (ms) & (gcm\({}^{-3}\)) & (gcm\({}^{-3}\)) & (\(M_{\odot}\)) & (\(10^{-3}\)) \\ \hline Kepler-NM-405 & 1.771 & - & - & 916 & - & 0.00 & 29.087 & - & 14.351 & 1.604 & 5.71 \\ Kepler-0.01-d05 & 1.672 & 0.017 & 9.968 & 929 & 157 & 0.01 & 30.595 & 12.720 & 14.345 & 1.516 & 5.32 \\ Kepler-0.03-d05 & 1.525 & 0.047 & 10.206 & 948 & 187 & 0.03 & 32.426 & 12.826 & 14.339 & 1.379 & 4.75 \\ Kepler-0.05-d05 & 1.410 & 0.074 & 10.313 & 961 & 202 & 0.05 & 33.778 & 12.856 & 14.336 & 1.267 & 4.33 \\ Kepler-0.07-d05 & 1.313 & 0.099 & 10.383 & 967 & 212 & 0.07 & 34.899 & 12.866 & 14.332 & 1.170 & 4.01 \\ Kepler-0.09-d05 & 1.229 & 0.122 & 10.434 & 967 & 220 & 0.09 & 35.888 & 12.871 & 14.332 & 1.090 & 3.75 \\ Kepler-0.1-d05 & 1.191 & 0.132 & 10.454 & 967 & 223 & 0.1 & 36.348 & 12.873 & 14.333 & 1.052 & 3.64 \\ Kepler-0.2-d05 & 0.896 & 0.224 & 10.588 & 935 & 246 & 0.2 & 40.396 & 12.872 & 14.331 & 0.767 & 2.83 \\ \hline \end{tabular} \end{table} Table 3: Same as Table 1, but for differentially rotating DMAIC progenitors that have \(\alpha_{d}=0.5\), \(\Omega_{c}=45.2\) s\({}^{-1}\), and the DM particle mass of 0.3 GeV. We also append the NM compactness \(\mathfrak{C}=2GM_{\rm NM}/R_{\rm eNM}c^{2}\). in a compact star. Finally, we show the detectability of DM GWs from rigidly rotating progenitors in Appendix B. ## 4 Conclusion We presented two-dimensional simulations of DMAIC with self-consistent modelling of the DM dynamics. Regardless of the DM particle mass and compactness, the DM component follows the collapse of the NM component to become a bound DM core with a time scale comparable to that of the NM. This result demonstrates numerically, for the first time, how a DM-admixed neutron star could form through DMAIC. We also find that the NM bounce time is delayed, and the proto-neutron star mass is reduced when DM is admixed, similar as found in Leung et al. (2019) and Zha et al. (2019), where the DM component is modelled as a fixed compact core. Due to the weak electromagnetic signals produced by the gravitational collapse of WDs, GW becomes an important and reliable channel to detect and study AIC. We computed the GW signatures for the NM and DM components using the quadrupole formula. For DM with a particle mass of 0.1 GeV, the DM component is more diffusive and extended. Hence, the collapse of the DM component does not produce a significant GW signal. However, the admixture of such DM indirectly influences the NM signal by suppressing the NM GW spectral peaks after the NM core bounce. The significant alteration of the NM GW frequency spectrum also makes the DMAIC waveforms easily detectable by GW detectors, which show a 14% mismatch with the pure NM counterpart with only 1% of DM admixed. For DM with a particle mass of 0.3 GeV, the DM component is more compact when compared to those with a particle mass of 0.1 GeV. The admixture of DM greatly reduces the NM mass and hence its compactness. The NM GW signal at bounce is therefore decreased substantially. However, the DM component is massive and compact enough to produce a GW signal comparable to that of the NM counterpart during its dynamical collapse. 
The DM GW add to the NM GW and show up as secondary oscillations. These oscillations could be seen as continuous low-frequency (\(<1000\) Hz) signals in the GW spectrogram, occurring at \(\bar{t}<0.1\) s, which is before the time of low-frequency GW induced by prompt convection, providing direct evidence of the existence of DM. All the peak-frequency signals of the DM component in our models of a Milky Way DMAIC event are detectable by the Advanced LIGO. Our result is the first-ever computation of GW from a collapsing DM core, and these findings could provide the key features to identify DM in AIC events through future GW detections. There are possible future improvements to our calculations. First, we assumed the DM component to be non-rotating, which could be relaxed to allow the DM to have collective motion, such as rotation, should there be adequate self-interaction of the DM. Second, we omitted detailed neutrino-transport physics in the simulations. Whether the presence of DM would significantly affect neutrino-flavour production would be an interesting future study. Lastly, we only include ad-hoc relativistic corrections to the gravity and dynamical equations. A more accurate picture of the collapse dynamics and the GW signature would call for solving the dynamical equations in the full general relativistic framework. We thank Otto Akseli Hannuksela for his helpful discussion regarding gravitational-wave mismatch calculations. This work is partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region, China (Projects No. 14300320 and 14304322). Shing-Chi Leung acknowledges support from NASA grants HST-AR-15021.001-A and 80NSSC18K1017. ## Appendix A Formation of Dark Matter-Admixed White Dwarf We follow Chan et al. (2022) to consider the progenitor of DMRWD to be a star born with an inherent admixture of DM. We assume the DM and NM to be spherically symmetric clouds having constant densities \(\rho_{1}\) and \(\rho_{2}\), respectively. We consider the situation with the DM radius \(R_{1}\) being larger than that of the NM, \(R_{2}\). The total energy \(E\) is: \[E=-\left(\frac{3}{5}\frac{GM_{1}^{2}}{R_{1}}+\frac{3}{5}\frac{GM_{2}^{2}}{R_{2 }}+\frac{3}{2}\frac{GM_{1}M_{2}}{R_{1}}-\frac{3}{10}\frac{GM_{1}^{2}R_{1}^{2}} {R_{1}^{3}}\right)\] \[+\frac{3}{2}NkT+\frac{1}{2}M_{1}v_{1}^{2}.\] (A1) Here, \(v_{1}\) is the DM "thermal" velocity, \(N=M_{2}/m_{\rm H}\) is the total number of NM nuclei, and \(m_{H}\) is the molecular mass of hydrogen. Furthermore, we assume an extreme case of \(M_{1}\sim 0.1~{}M_{\odot}\), \(M_{2}\sim 10.0~{}M_{\odot}\). For a typical collapsing molecular cloud, we have \(T\sim 150\) K and \(\rho_{2}\sim 10^{8}m_{\rm H}~{}{\rm cm}^{-3}\), and hence \(R_{2}=3.05\times 10^{16}\) cm is smaller than the Jeans radius. We solve \(E(R_{2})=0\) to obtain the maximum DM velocity of \(v_{1\max}\sim 1.27\times 10^{6}\) cm s\({}^{-1}\). Any \(v_{1}<v_{1\max}\) would give us a set of solution for \(R_{1}\) and \(\rho_{1}\). However, the most probable DM speed (assuming a Maxwell distribution) is \(v_{\rm p1}\sim 10^{7}\) cm s\({}^{-1}\). To take the velocity of DM into account, the bounded DM fraction is given by \(f\): \[f=\frac{\int_{0}^{u_{1}}u^{2}{\rm exp}(-u^{2})du}{\int_{0}^{\infty}u^{2}{\rm exp }(-u^{2})du}.\] (A2) Here, \(u=v/v_{\rm p1}\), and \(u_{1}=v_{1}/v_{\rm p1}\). 
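The bound fraction \(f\) of Equation A2 reduces to a one-dimensional integral over the Maxwellian speed distribution. The sketch below evaluates it numerically for the speeds quoted in this appendix and cross-checks against the closed form \(\mathrm{erf}(u_{1})-2u_{1}e^{-u_{1}^{2}}/\sqrt{\pi}\) of the same ratio (a standard identity for the Maxwellian cumulative fraction, not taken from the paper).

```python
# Sketch: bound DM fraction f of Eq. (A2) for a Maxwellian speed distribution,
# evaluated for the speeds quoted in this appendix (cm s^-1).
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

v1, vp1 = 1.23e6, 1.0e7          # cutoff speed and most probable DM speed
u1 = v1 / vp1

integrand = lambda u: u**2 * np.exp(-u**2)
num, _ = quad(integrand, 0.0, u1)
den, _ = quad(integrand, 0.0, np.inf)
f_numeric = num / den

# Closed form of the same ratio, used only as a cross-check.
f_exact = erf(u1) - 2.0 * u1 * np.exp(-u1**2) / np.sqrt(np.pi)

print(f"bound fraction f = {f_numeric:.4e} (closed form {f_exact:.4e})")
```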
We take a particular \(v_{1}=1.23\times 10^{6}\) cm s\({}^{-1}\), which gives two sets of solutions in \((R_{1},\rho_{1})\) for \(E<0\): \((1.71\times 10^{18}\) cm, 3860 GeV/cm\({}^{3})\) and \((6.10\times 10^{16}\) cm, \(8.48\times 10^{7}\) GeV/cm\({}^{3})\). The DM density required in the first set of solutions is consistent with state-of-the-art simulations, which showed that the DM density at the galactic bulge could be \(\sim 3600\) GeV cm\({}^{-3}\) (Piffl et al., 2014). The DM density required in the other set of solutions is much larger. However, such a value is possible near the galactic centre, and values of a similar order of magnitude have been adopted in studying the effect of DM annihilation on main-sequence stars (Moskalenko & Wai, 2006; Iocco, 2008). In conclusion, our estimates considering the DM velocity dispersion show that it is possible to trap 0.1 \(M_{\odot}\) of DM during the star-forming phase, provided that the molecular cloud is in the vicinity of the galactic centre. There might be concern about whether the DM would follow the collapse of the NM to form a composite bound object. We showed in an earlier section that a collapsing NM component would eventually induce a collapsing DM component to form a DM-admixed stellar object, which would, in our case, be a DM-admixed neutron star. Also, the collapse of the DM component happens with a time scale comparable to that of the NM, regardless of its size and mass. By simple scaling relations, we can qualitatively conclude that the same scenario should also hold for molecular cloud collapse. Therefore, a zero-age main-sequence star with an inherent DM admixture should be possible, though a detailed numerical simulation would be needed to justify our conjecture.

## Appendix B Dark Matter Gravitational Waves for Rigidly-Rotating Progenitors

In Section 3.2.2 we show the features of DM GWs from differentially-rotating progenitors and demonstrate that they are detectable by Advanced LIGO, provided that the DM particle mass is 0.3 GeV. Here, we perform a similar analysis for rigidly-rotating progenitors. In Figure 13 (a), we show the GW spectrograms for rigidly-rotating progenitors. These progenitors rotate at \(\sim 0.97\) of the critical velocity and have DM mass fractions \(\epsilon_{\rm DM}\) increasing from 0 to 0.2. We observe that the DM GWs can also be captured as continuous low-frequency (\(<1000\) Hz) signals. In Figure 13 (b), we observe that all peak-frequency signals of the DM GWs are detectable by Advanced LIGO.

Figure 13: (a) Power spectral density of DMAIC GWs for rigidly-rotating progenitors with increasing DM mass fraction \(\epsilon_{\rm DM}\). (b) Same as Figure 11, but for the characteristic wave strains of the models presented in (a) and their comparison with the Advanced LIGO sensitivity curve (dashed line).
2305.14160
Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning
In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks. However, the underlying mechanism of how LLMs learn from the provided context remains under-explored. In this paper, we investigate the working mechanism of ICL through an information flow lens. Our findings reveal that label words in the demonstration examples function as anchors: (1) semantic information aggregates into label word representations during the shallow computation layers' processing; (2) the consolidated information in label words serves as a reference for LLMs' final predictions. Based on these insights, we introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL. The promising applications of our findings again validate the uncovered ICL working mechanism and pave the way for future studies.
Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, Xu Sun
2023-05-23T15:26:20Z
http://arxiv.org/abs/2305.14160v4
# Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ###### Abstract In-context learning (ICL) emerges as a promising capability of large language models (LLMs) by providing them with demonstration examples to perform diverse tasks. However, the underlying mechanism of how LLMs learn from the provided context remains under-explored. In this paper, we investigate the working mechanism of ICL through an information flow lens. Our findings reveal that label words in the demonstration examples function as anchors: (1) semantic information aggregates into label word representations during the shallow computation layers' processing; (2) the consolidated information in label words serves as a reference for LLMs' final predictions. Based on these insights, we introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL. The promising applications of our findings again validate the uncovered ICL working mechanism and pave the way for future studies. ## 1 Introduction In-context Learning (ICL) has emerged as a powerful capability alongside the development of scaled-up large language models (LLMs) Brown et al. (2020). By instructing LLMs using few-shot demonstration examples, ICL enables them to perform a wide range of tasks, such as text classification Min et al. (2022) and mathematical reasoning Wei et al. (2022). Since ICL does not require updates to millions or trillions of model parameters and relies on human-understandable natural language instructions Dong et al. (2023), it has become a promising approach for harnessing the full potentiality of LLMs. Despite its significance, the inner working mechanism of ICL remains an open question, garnering considerable interest from research communities Xie et al. (2022); von Oswald et al. (2022); Dai et al. (2022); Olsson et al. (2022). In this paper, we find that the label words serve as anchors that aggregate and distribute information in ICL. Specifically, we first visualize the attention interactive pattern between tokens with a GPT model Brown et al. (2020) on sentiment analysis. As shown in Figure 1, we have an intuitive observation that as the layer goes deeper, the label words in the demonstration will become more dominant for the prediction. To draw a clearer picture of this phenomenon, we compute two metrics based on saliency scores to portray the information flow in ICL and further propose the following hypothesis: _Information Flow with Labels as Anchors_ \(\mathcal{H}_{1}\): In shallow layers, label words gather the information of demonstrations to form semantic representations for deeper layers. \(\mathcal{H}_{2}\): In deep layers, the model extracts the information from label words to form the final prediction. We design two experiments to validate the hypothesis using GPT2-XL Radford et al. (2019) and Figure 1: Saliency visualization results of shallow and deep layers of a GPT model. Here, the depth of the line from the right word to the left word reflects the importance of the information flow in ICL. GPT-J (Wang and Komatsuzaki, 2021) on various text classification benchmarks. (1) By isolating the label words in certain layers to block the information aggregation path to the label words, we find that such isolation in shallow layers significantly impairs model performance. 
This indicates that label words indeed collect useful semantics during the forward propagation in shallow layers. (2) We examine the correlation between the attention distributions on the label words of the target position and the final prediction results. The results show that the prediction positively correlates with the attention weights on label words, i.e., the probability of a candidate is higher with more attention weights on the specific label. In summary, these experimental findings suggest that our hypothesis holds well with large language models on real-world datasets. Drawing on insights from the information flow perspective, we explore three approaches to enhance ICL's effectiveness, efficiency, and interpretability. (1) We introduce an anchor re-weighting method utilizing a learnable vector to adaptively adjust the significance of various label words in demonstration examples, achieving a 16.7 average accuracy improvement over vanilla ICL baselines. (2) To expedite ICL inference, we compress its input into pre-computed anchor representations, as model predictions primarily rely on label word activations. Experiments demonstrate that the inference can be accelerated 1.8 \(\times\) with negligible performance degradation. (3) We present an error analysis example using ICL on GPT2-XL, revealing that the label confusion matrix closely mirrors the distance distribution of anchor key vectors, suggesting errors may arise from indistinguishable anchor representations. These promising applications further validate our hypothesis and shed light on future ICL studies for better transparency of LLMs. ## 2 Label Words are Anchors In this section, we first confirm our intuitive findings with two metrics based on saliency scores in SS 2.1. Based on the quantitative results, we further propose the hypothesis to interpret the working mechanism of ICL: \(\mathcal{H}_{1}\): In shallow layers, label words aggregate information from demonstration examples to form semantic representations for later computations. \(\mathcal{H}_{2}\): In deep layers, the model makes predictions by extracting information from label words. We validate these two hypothesis in SS 2.2 and SS 2.3, respectively. ### Hypothesis Motivated by Saliency Scores In this part, we aim to visualize the attention interactive pattern between tokens for GPT2-XL, and find patterns behind such interaction. We utilize a common interpretation tool, saliency technique (Simonyan et al., 2013), to reveal the important token interactions. Following common practice, we use the Taylor expansion (Michel et al., 2019) to calculate the saliency score for each element of the attention matrix: \[\small I_{l}=\sum_{h}\left|A_{h,l}^{\top}\frac{\partial\mathcal{L}(x)}{ \partial A_{h,l}}\right| \tag{1}\] Here, \(A_{h,l}\) is the value of the attention matrix of the \(h\)-th attention head in the \(l\)-th layer, \(x\) is the input, and \(\mathcal{L}(x)\) is the loss function of the task, e.g., the cross-entropy objective for a classification problem. We average all attention heads to obtain the saliency matrix \(I_{l}\) for the \(l\)-th layer. \(I_{l}(i,j)\) represents the importance of the information flow from the \(j\)-th word to the \(i\)-th word for ICL. By observing \(I_{l}\), we can get an intuitive impression that as the layer goes deeper, demonstration label words will become more dominant for the prediction, as depicted in Figure 1. To draw a clearer picture of this phenomenon, we propose the following quantitative metrics base on \(I_{l}\). 
Let the positions of the label words in the input \(x\) be \(p_{1},...,p_{C}\), the target position be \(q\), and the saliency matrix of the \(l\)-th layer be \(I_{l}\). Then, we define three quantitative metrics: \(\mathbf{S}_{wp}\)**, the average importance of the information flow from other words to label words:** \[\small\begin{split} S_{wp}&=\frac{\sum_{(i,j)\in C _{wp}}I_{l}(i,j)}{|C_{wp}|},\\ C_{wp}&=\{(p_{k},j):k\in[1,C],j<p_{k}\}.\end{split} \tag{2}\] \(\mathbf{S}_{pq}\)**, the average importance of the information flow from label words to the target position:** \[\small\begin{split} S_{pq}&=\frac{\sum_{(i,j)\in C _{pq}}I_{l}(i,j)}{|C_{pq}|},\\ C_{pq}&=\{(p_{k},q):k\in[1,C]\}.\end{split} \tag{3}\] \(\mathbf{S}_{ww}\)**, the average importance of the information flow between any words (excluding \(S_{wp}\) and \(S_{pq}\)):** \[\small\begin{split} S_{ww}=&\frac{\sum_{(i,j)\in C _{ww}}I_{l}(i,j)}{|C_{ww}|},\\ C_{ww}=&\{(i,j):j<i\}\\ &-C_{wp}-C_{pq}.\end{split} \tag{4}\] Experimental SettingsWe choose GPT2-XL from the GPT series (Radford et al., 2019) as the main model to be examined, due to its moderate model size (of 1.5B parameters) that is suitable for our hardware resource and its decent ICL performance (Dai et al., 2022). For datasets, we use a sentiment analysis task, Stanford Sentiment Treebank Binary (SST-2) (Socher et al., 2013), a question type classification task, Text REtrieval Conference Question Classification (TREC) (Li and Roth, 2002; Hovy et al., 2001), a topic classification task, AG's news topic classification dataset (AGNews) (Zhang et al., 2015), an emotion classification task, EmoContext (EmoC) (Chatterjee et al., 2019). The templates used for constructing demonstrations of these datasets can be found in Appendix A. We extracted \(1000\) examples from the test set for evaluation. For ease of analysis, we sample one example for each class from the training set to form the demonstration, and the demonstration order follows a random order. All results are averaged over five random seeds. Results and AnalysisThe results are shown in Figure 2. It can be seen that (1) in the shallow layers, \(S_{pq}\), the importance of the information flow from label words to targeted positions is low, while \(S_{wp}\), the information flow from other words to label words is high; (2) in the deep layers, \(S_{pq}\), the importance of information flow from label words to the targeted position become the dominant one. Meanwhile, \(S_{pq}\) and \(S_{wp}\) are usually more than \(S_{ww}\), indicating that the interactions involving label words are more important than others. Proposed HypothesisBased on this, we propose the hypothesis that label words function as anchors in the ICL information flow. In shallow layers, label words gather information from demonstration examples to form semantic representations for deeper layers, while in deep layers, the model extracts the information from label words to form the final prediction. ### Shallow Layers: Information Aggregation In this part, we validate the first part of our hypothesis. We hypothesize that the aggregation of information in in-context learning relies on the flow of information from demonstration tokens to label tokens, which is facilitated by the attention mechanism in transformers. By manipulating attention to block this information flow and examining the changes in model behavior, we can validate whether the information aggregation process exists and how much it contributes to the final predictions. 
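Before turning to the experiments, the three flow metrics of Eqs. (2)-(4) can be made concrete with a short sketch over a single layer's saliency matrix; the names below are illustrative and not taken from the authors' code.

```python
# Illustrative computation of the flow metrics in Eqs. (2)-(4).  `I_l` is a
# (seq_len, seq_len) array with I_l(i, j) = importance of the flow from token j to
# token i; `label_pos` holds the label-word positions and `q` the target position.
import numpy as np

def flow_metrics(I_l, label_pos, q):
    seq_len = I_l.shape[0]
    C_wp = {(p, j) for p in label_pos for j in range(p)}   # other words -> label words
    C_pq = {(q, p) for p in label_pos}                     # label words -> target position
    C_ww = {(i, j) for i in range(seq_len) for j in range(i)} - C_wp - C_pq
    avg = lambda idx: float(np.mean([I_l[i, j] for i, j in idx]))
    return avg(C_wp), avg(C_pq), avg(C_ww)                 # S_wp, S_pq, S_ww
```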
Experimental SettingsWe maintain the same sample size of 1000 inputs from the test set as Section 2.1. We use the same demonstration for a single random seed. To further validate our findings on larger models, we incorporate GPT-J (6B) (Wang and Komatsuzaki, 2021) in our experiments, which surpasses GPT2-XL in terms of model size and capacity. Implementation DetailsTo achieve the intervention on the information flow on the label words, we isolate label words in certain layers, by disabling the attention from the label words attending to the demonstrations. Specifically, we set \(A_{l,h}(p,i)(i<p)\) in the attention matrix \(A_{l,h}\) of each attention head in the \(l\)-th layer to 0, where \(p\) is Figure 2: Relationship between the relative sizes of \(S_{wp}\), \(S_{pq}\), and \(S_{ww}\) and the number of layers. (We normalized \((S_{wp},S_{pq},S_{ww})\)) Results of TREC and EmoC can be seen in Appendix D. Initially, \(S_{wp}\) occupies a significant proportion, but it gradually decays over the layers, while \(S_{pq}\) becomes the dominant one. the position of the label word, and \(i\) is the position of the word before the label word. Therefore, label words in the \(l\)-th layer cannot receive the information from previous demonstration examples via the attention mechanism. MetricsTo measure the effects of blocking the information flow from demonstration tokens to label tokens, we use the following metrics to examine the changes in the model predictions: **(1) Label Loydalty:** which measures the consistency between the outputs labels with and without isolation. Higher label loyalty indicates that the model predictions are less affected by the isolation. **(2) Word Loyalty:** which probes more fine-grained changes on the model predictions. Specifically, we adopt the Jaccard similarity between the top-\(5\) potential words drawn from the original vocabulary distributions and that after isolation. This metric could capture subtle changes in the output distributions, where higher word loyalty indicates better consistency between the language model outputs. We refer readers to Appendix B for a detailed discussion. Results and AnalysisWe isolate the label words in the first 5 layers to examine the existence and effect of information aggregation. We also isolate the label words in the last 5 layers for comparison. As shown in Table 1, isolating label words in the first \(5\) layers can cause serious interference to the model's task completion, while the impact of the last \(5\) layers is small. This verifies the existence of information aggregation in shallow layers, and demonstrates its significance in ICL. We also explore the isolation effects by varying the number of labels disabled in Appendix C and observe a similar trend. ### Deep Layers: Information Extraction We further validate the second part of our hypothesis, i.e., the model extracts the information from label words to form the final prediction. We examine the correlation between the attention distributions on the label words of the target position and the model's final prediction. We use the same dataset and model setup as in SS 2.2. #### 2.3.1 Experiments Suppose the positions of the label words in the input \(x\) are \(p_{1},...,p_{C}\), the targeted position is \(q\), and the sum of the attention matrices of each attention head at the \(l\)-th layer is \(A_{l}\). 
We postulate that there's a strong correlation between the attention distributions on the label words of the target position \((A_{l}(q,p_{1}),...,A_{l}(q,p_{C}))\) and the model's final prediction. We use the AUC-ROC score to quantify this correlation, which we denote as AUCROC\({}_{l}\). We use AUC-ROC for two reasons: (1) Considering the attention mechanism, the attention values are used to weigh the key vectors. The size of the attention cannot fully reflect the importance of the corresponding word; it must be combined with factors such as the norm of the key vector Kobayashi et al. (2020). The AUC-ROC metric can implicitly consider these factors to better discover the correlation. (2) The proportion of different labels output by the model may be unbalanced. Using AUC-ROC can to some extent alleviate this problem and prevent the influence of class imbalance from disturbing our analysis. Considering the residual mechanism that is common in Transformer models, the hidden state of each layer can be seen as the accumulation of the hidden states calculated separately by the previous layers. To measure the contribution of all layers up to the \(l\)th layer, we define the accumulated AUCROC score \(R_{l}\): \[R_{l}=\frac{\sum_{i=1}^{l}(\text{AUCROC}_{i}-0.5)}{\sum_{i=1}^{N}(\text{AUCROC }_{i}-0.5)}. \tag{5}\] Here, we quantify the positive contribution by calculating the difference between AUC-ROC and a baseline threshold of \(0.5\). The value of \(R_{l}\) represents the ratio of the contribution of the attention distributions on the label words of the target position in all layers up to the \(l\)-th layer. \begin{table} \begin{tabular}{c|c c} \hline \hline Isolation Layer & Label Loyalty & Word Loyalty \\ \hline GPT2-XL No isolation & 100.00 & 100.00 \\ First 5 layers & 44.03 & 6.30 \\ Last 5 layers & 99.61 & 99.52 \\ \hline GPT-J No isolation & 100.00 & 100.00 \\ First 5 layers & 62.13 & 53.77 \\ Last 5 layers & 99.01 & 97.76 \\ \hline \hline \end{tabular} \end{table} Table 1: Effects of isolating label words, results are averaged across the SST-2, TREC, AGNews, and EmoC datasets (percentage). Isolating the first 5 layers significantly reduces the loyalty. #### 2.3.2 Results and Analysis Figure 2(a) and Figure 2(b) show the correlation metrics of each layer for GPT2-XL and GPT-J, respectively. The result is averaged over four datasets. Firstly, the AUCROC\({}_{l}\) of the deep layers reaches a high score of \(0.8\), indicating a strong correlation between the attention distributions on the label words of the target position and the model's final prediction. Secondly, the cumulative contributions \(R_{l}\) of the first few layers are near \(0\), while the score increases significantly in the middle and later layers. This phenomenon demonstrates that the classification decision of the model mainly takes place in the deep layers. From these two points, we validate that the model extracts the information from the positions of the label words to form the final prediction. ## 3 Applications Derived from Our Anchor-Based Understanding With the insights drawn from the validated hypothesis, in this section, we propose applications to improve the effectiveness and inference efficiency. We introduce an anchor re-weighting method in SS 3.1 to adaptive adjust the contribution of demonstration examples. In SS 3.2, we explore a context compression technique that reduces original textual inputs to anchor hidden states for accelerating the ICL inference. 
Besides, using the anchor distances distribution, we also perform an analysis to better understand the errors ICL made in real-world scenarios (SS 3.3). These applications gain verify the proposed hypothesis, shedding light on new directions for future improvements of ICL. ### Anchor Re-Weighting In this part, we establish a connection between ICL and logistic regression based on the previous analysis. Inspired by such a connection, we further propose an approach for enhancing the accuracy of ICL by re-weighting the label anchors. #### 3.1.1 Analogy Between ICL and Logistic Regression By approximating the ICL model as a weighted combination of classifiers and leveraging the correlation between attention distributions and final predictions, we make an analogy of ICL to ICL the logistic regression. Specifically, we demonstrate that the attention mechanisms in ICL resemble the calculation principles of logistic regression, indicating a structural resemblance between the two frameworks. In SS 2.3, we show that the output category of the model is strongly correlated with the attention values \(\left(A\left(q,p_{1}\right),\ldots,A\left(q,p_{C}\right)\right)\) between the target position \(q\) and the label word positions \(p_{1},...,p_{C}\) at deep layers. Considering the residual mechanism of the Transformer model, the final output can be viewed as the sum of the results from previous layers. Besides, the results of each layer can be viewed as the sum of each single attention head. We can approximate the outputs of the ICL \(\mathbf{f}\) as: \[\mathbf{f}\approx\sum_{l=1}^{L}\sum_{h=1}^{H}\gamma_{lh}\mathbf{f}_{lh}, \tag{6}\] where \(\mathbf{f}_{lh}\) represents the classifier approximation of the \(h\)th attention head in the \(l\) th layer, and \(\gamma_{lh}\) denotes the weight of the classifier. \(\mathbf{f}_{lh}\) outputs a probability vector of each category for the input \(x\) as follows: \[\mathbf{f}_{lh}(x)\approx\left(A_{l}^{h}\left(q,p_{1}\right),\ldots,A_{l}^{h} \left(q,p_{C}\right)\right). \tag{7}\] The approximation \(\mathbf{f}_{lh}\) may differ from the actual one by a coefficient, but it does not affect our subsequent discussions and conclusions. Figure 3: AUCROC\({}_{l}\) and \(R_{l}\) of each layer in GPT models. The result is averaged over SST-2, TREC, AGNews, and Emoc. AUCROC\({}_{l}\) reaches 0.8 in deep layers, and \(R_{l}\) increases mainly in the middle and later layers. According to the calculation formula of the attention mechanism, for the \(h\)th head of the \(l\)th layer, we have: \[\begin{split}&\text{Pr}_{f_{lh}}(Y=i|X=x)\\ =& A_{l}^{h}(q,p_{i})\\ =&\frac{\exp(\mathbf{q}_{q}^{h}\mathbf{k}_{p_{i}}^{ hT}/\sqrt{d})}{\sum_{j=1}^{N}\exp(\mathbf{q}_{q}^{h}\mathbf{k}_{j}^{ hT}/\sqrt{d})},\end{split} \tag{8}\] where \(\mathbf{q}_{q}^{h}\) represents the query vector corresponding to the target position, \(\mathbf{k}_{p_{i}}^{h}\) represents the key vector corresponding to the label word, and \(d\) represents the dimension of the key vectors. By defining \(\mathbf{q}_{q}^{h}/\sqrt{d}=\tilde{\mathbf{x}}_{lh}\) and \(\mathbf{k}_{p_{i}}-\mathbf{k}_{p_{C}}=\mathbf{\beta}_{lh}^{i}\) (where \(l\) corresponds to the layer number), we can infer that: \[\log\frac{\text{Pr}_{f_{lh}}(Y=i|X=x)}{\text{Pr}_{f_{lh}}(Y=C|X=x)}=\mathbf{ \beta}_{lh}^{iT}\tilde{\mathbf{x}}_{lh}. \tag{9}\] This is similar to the logistic regression model, where \[\log\frac{\text{Pr}_{f}(Y=i|X=x_{t})}{\text{Pr}_{f}(Y=C|X=x)}=\beta_{0}^{i}+ \mathbf{\beta}^{iT}\mathbf{x}. 
\tag{10}\] \(\beta_{0}^{i}\) and \(\mathbf{\beta}^{iT}\) are learnable parameters, and \(\mathbf{x}\) is the feature vector corresponding to the input. Based on the preceding discussions, we have established an analogy between the ICL model and logistic regression, emphasizing their structural similarity. We approximate the ICL model as a combination of classifiers that employ attention distributions to generate predictions. Furthermore, we demonstrate that the computation of attention distributions exhibits similarities with the calculation principles of logistic regression. This correspondence implies that both frameworks operate on similar principles. #### 3.1.2 Anchor Re-Weighting Method Inspired by the relation ICL and logistic regression, we add a learnable \(\beta_{0}^{i}\) to \(\mathbf{f}_{lh}\) in Eq. (9): \[\log\frac{\text{Pr}\mathbf{f}_{lh}(Y=i|X=x)}{\text{Pr}\mathbf{f}_{lh}(Y=C|X=x)}=\beta _{0}^{i}+\mathbf{\beta}_{lh}^{iT}\tilde{\mathbf{x}}_{lh}. \tag{11}\] This is equivalent to adjust the weights of \(A_{l}^{h}(q,p_{i})\), which can be expressed as \[\mathbf{f}_{lh}(x)\approx(\exp(\beta_{lh}^{1})A_{l}^{h}\left(q,p_{1} \right),\\ \ldots,\exp(\beta_{lh}^{C})A_{l}^{h}\left(q,p_{C}\right)), \tag{12}\] where \(\beta_{lh}^{1},...,\beta_{lh}^{C}\) are learnable parameters. We manipulate the attention mechanism of the model to implement this re-weighting mechanism. Please refer to the details in Appendix F. To train a reweighting vector \(\mathbf{\beta}=\left\{\beta_{lh}^{i}\right\}\), we use an additional training set \((\mathbf{X}_{train},\mathbf{Y}_{train})\). On the training set, we concatenate normal examples to training data to perform ICL, and optimize \(\mathbf{\beta}\) with respect to the classification loss function \(\mathcal{L}\): \[\mathbf{\beta}=\arg\min_{\mathbf{\beta}}\mathcal{L}(\mathbf{X}_{train},\mathbf{Y}_{train}). \tag{13}\] Metaphorically, this is equivalent to "re-weighting the anchors" in ICL. It can also be seen as an adjustment of the contribution of demonstration examples since their information has been aggregated into the anchors as suggested by our previous analysis. #### 3.1.3 Experiments To verify the effect of re-weighting, we choose one sample per class as normal demonstrations, and choose \(4\) extra samples per class from the task training dataset to train \(\mathbf{\beta}\). Consistent with the setups in SS 2.2, we use \(5\) random seeds and report the average result. For each random seed, we fix the demonstration and sample \(1000\) test samples from the test datasets. To optimize \(\mathbf{\beta}\), we use gradient descent with the Adam optimizer (Kingma and Ba, 2015) with a learning rate of \(0.01\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a batch size of 1 (due to memory constraint) for \(10\) epochs. Due to computational resource limitations, we cannot perform training on GPT-J, so we only adopt GPT2-XL for evaluation. We compare re-weighting with two baselines: (1) Vanilla ICL with the same demonstration (1 shot per class) (2) Vanilla ICL with the training set of \(\mathbf{\beta}\) added to the demonstrations (5-shot per class) for a fair comparison. #### 3.1.4 Results As shown in Table 2, the proposed anchor re-reweighting method significantly improves the performance of in-context learning. Particularly, the effect is remarkable on the SST-2 and EmoC datasets. 
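As a rough illustration of Eq. (12), the sketch below rescales the attention from the target position to each label position by \(\exp(\beta)\). Whether and how the rows are re-normalised follows the paper's Appendix F; the module, names, and patching strategy shown here are assumptions for illustration only.

```python
# One possible realisation of the re-weighting in Eq. (12): scale each head's attention
# from the target position q to label position p_i by exp(beta[l, h, i]).
import torch

class AnchorReweighter(torch.nn.Module):
    def __init__(self, n_layers, n_heads, n_classes):
        super().__init__()
        # beta initialised to zero, i.e., exp(beta) = 1 recovers vanilla ICL
        self.beta = torch.nn.Parameter(torch.zeros(n_layers, n_heads, n_classes))

    def forward(self, attn_probs, layer, label_pos, q):
        # attn_probs: (batch, heads, seq, seq) attention probabilities of one layer,
        # obtained by patching the model's attention forward pass
        scale = torch.ones_like(attn_probs)
        for i, p in enumerate(label_pos):
            scale[:, :, q, p] = torch.exp(self.beta[layer, :, i])
        reweighted = attn_probs * scale
        # simple re-normalisation so each row remains a distribution
        return reweighted / reweighted.sum(dim=-1, keepdim=True)

# Only beta is trained, e.g. with Adam(lr=0.01) on a small labelled set, as in Section 3.1.3.
```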
Note that adding more demonstration examples for vanilla in-context learning may not bring a stable accuracy boost due to the potential noise introduced, as discussed in (Zhao et al., 2021). Different from vanilla ICL which utilizes the extra examples to form a demonstration, we train a re-weighting vector \(\mathbf{\beta}\) to adjust the contribution of different label anchors. In this way, we reduce the length of the input context and thus bring (almost) no extra cost to the inference speed. The consistent improvements of our method suggest that the re-weighting could be a better alternative to exploit the demonstration examples. ### Anchor-Only Context Compression We further explore a context compression technique that reduces original textual inputs to anchor hidden states for accelerating the ICL inference. #### 3.2.1 Method In SS 2.3, we verify that the model's output heavily relies on the label words, which collect information from demonstration examples during the forward propagation. We also notice that in auto-regressive language models such as GPT, the hidden state of each word depends only on the preceding words. In other words, the information aggregation process of label words is independent of the subsequent words. Therefore, we can compute the hidden states of the label words \(\mathbf{H}=\{\{\mathbf{h}_{l}^{i}\}_{l=1}^{N}\}_{i=1}^{C}\) (where \(h_{l}^{i}\) represents the \(l\)-th layer's hidden state of the \(i\)-th label word in the demonstration). We concatenate \(\mathbf{h}_{l}^{1},...,\mathbf{h}_{l}^{C}\) in front of the input at each layer during inference. In this way, the model can perform inference without requiring the entire input, and thus the inference can be sped up. In preliminary experiments, we find that the hidden states of the label words alone are not sufficient for the model to complete the ICL task. We speculate the reason is that the formatting information is also important for ICL to identify the output space on the target position. Therefore, we collect both the hidden states corresponding to the formatting and the hidden states corresponding to the label words and concatenate them together before inference, which we name as \(\textbf{Hidden}_{\textbf{anchor}}\). #### 3.2.2 Experiments We follow the same experimental settings as adopted in SS 2.2. For comparison with Textanchor, we also implement two baselines for input compression (referred to as Hiddenanchor in the text): **Textanchor**: Concatenating the formatting text and target words before the text to be predicted, instead of concatenating them with the hidden states at each layer. **Hiddenrandom**: Randomly selecting the same number of words as in the Hidden method from the entire input text and concatenating their corresponding hidden states in front of the hidden states at each layer. These two methods have the same efficiency as the Hiddenanchor method. We evaluate the proposed compression methods using the label loyalty and word loyalty introduced in SS 2.2, as well as the original classification accuracy. #### 3.2.3 Results The results obtained on two models are shown in Table 3. The proposed compression method, Hiddenanchor, achieves the best results among the three compression methods on all metrics and for both models. For example, with the GPT-J model, the compression method with anchor states only leads to a \(1.5\) accuracy drop compared to the uncompressed situation, indicating that the compression introduces negligible information loss. 
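One way such anchor-only compression could be realised in practice is by encoding the demonstration once and keeping only the cached key/value entries at the anchor positions (formatting plus label words), as sketched below. This uses the key/value cache as a stand-in for the per-layer hidden-state concatenation described in Section 3.2.1; the exact cache API varies across `transformers` versions, position handling is simplified, and all names are illustrative.

```python
# Rough sketch of anchor-only context compression via a truncated key/value cache.
import torch

@torch.no_grad()
def compress_demonstration(model, demo_ids, anchor_positions):
    out = model(demo_ids, use_cache=True)
    idx = torch.tensor(anchor_positions)
    # keep only the cache entries at the anchor positions for every layer
    return tuple((k.index_select(2, idx), v.index_select(2, idx))
                 for k, v in out.past_key_values)

@torch.no_grad()
def predict_with_anchors(model, test_ids, anchor_cache):
    past_len = anchor_cache[0][0].shape[2]
    attn_mask = torch.ones(1, past_len + test_ids.shape[1], dtype=torch.long)
    out = model(test_ids, past_key_values=anchor_cache, attention_mask=attn_mask)
    return out.logits[0, -1]            # score the label words from the target position
```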
Further, we estimate the efficiency improvements over the original ICL. As shown in Table 4, the speed-up ratio ranges from \(1.1\times\) to \(2.6\times\), as the efficiency gain is influenced by the total length of the demonstrations \(L_{\text{demo}}\) and the length of the text to be predicted \(L_{\textbf{x}}\). We refer readers to Appendix G for a more elaborated analysis of the speed-up ratios. Besides, we observe that the acceleration effect is more pronounced in the GPT-J model compared to GPT2-XL, demonstrating its great potential to apply to larger language models. ### Anchor Distances for Error Diagnosis Lastly, we perform an error analysis for ICL by utilizing the relationships of the key vectors corresponding to the label words. #### 3.3.1 Method In SS 2.3, we verify the correlation between the attention weights \(\left(A_{l}\left(q,p_{1}\right),\ldots,A_{l}\left(q,p_{C}\right)\right)\) and the model's output results. Here, \(p_{1},...,p_{C}\) denotes the label word position indexes, and \(q\) is the target position. For a single attention head, the attention score is computed as: \[A_{l}^{h}(q,p_{i})=\frac{\exp(\mathbf{q}_{l}\mathbf{k}_{p_{i}}^{T}/\sqrt{d})}{ \sum_{j=1}^{N}\exp(\mathbf{q}_{l}^{h}\mathbf{k}_{j}^{T}/\sqrt{d})}. \tag{14}\] This implies that \(A_{l}^{h}(q,p_{i})\) is influenced by \(\mathbf{q}_{q}\mathbf{k}_{p_{i}}^{T}\), i.e., the similarity between the key and query vectors. Therefore, if the key vectors \(\mathbf{k}\) of label words \(p_{i}\) and \(p_{k}\) are close, \(A_{l}^{h}(q,p_{i})\) and \(A_{l}^{h}(q,p_{k})\) will be relatively close for any input. Furthermore, considering the distribution of query vectors \(\mathbf{q}_{q}\), we employ a PCA-like method to extract the components of the key vectors along the directions with significant variations in \(\mathbf{q}_{q}\), and concatenate all heads to get feature \(\hat{\mathbf{k}}\) (see Appendix H for details). The confusion between categories can then be measured by evaluating the distance between \(\hat{\mathbf{k}}\): \[\text{Confusion}^{\text{pred}}_{ij}=\frac{\|\hat{\mathbf{k}_{\mathbf{p_{1}}}}- \hat{\mathbf{k}_{\mathbf{p_{j}}}}\|}{\max_{s\neq t}\|\hat{\mathbf{k}_{\mathbf{p_{ s}}}}-\hat{\mathbf{k}_{\mathbf{p_{t}}}}\|}, \tag{15}\] where \(\text{Confusion}^{\text{pred}}_{ij}\) is a value that does not exceed 1. A larger \(\text{Confusion}^{\text{pred}}_{ij}\) indicates a lighter degree of confusion between label categories. #### 3.3.2 Settings We select GPT2-XL and the TREC dataset, as we observe that the model exhibits significant confusion between certain categories while little confusion in others. Here we use the whole 500 samples of the TREC test set, and sample 1 demonstration per class for convenience of analysis. #### 3.3.3 Experiments We measure the actual model confusion score \(\text{Confusion}_{ij}\) between category \(i\) and category \(k\) using the AUC-ROC metric (detailed in Appendix I). The heatmap with \(\text{Confusion}^{\text{pred}}_{ij}\) and \(\text{Confusion}_{ij}\) is plotted for comparison. #### 3.3.4 Results As shown in Figure 4, the proposed approximation metric based on the anchor vectors can identify the most severe confusion case (Description-Entity) and performs reasonably well for relatively high confusion categories (Entity-Abbreviation, Description-Abbreviation). 
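The anchor-distance measure of Eq. (15) reduces to normalised pairwise distances between the processed key-vector features of the label words; a small illustrative sketch is given below, where `k_hat` is assumed to be a \((C, d)\) feature matrix with one row per label word.

```python
# Sketch of the predicted confusion matrix in Eq. (15): pairwise distances between the
# anchor key-vector features, normalised by the largest off-diagonal distance.
import numpy as np

def predicted_confusion(k_hat):
    C = k_hat.shape[0]
    dist = np.linalg.norm(k_hat[:, None, :] - k_hat[None, :, :], axis=-1)
    off_diag = dist[~np.eye(C, dtype=bool)]
    conf = dist / off_diag.max()       # smaller value => more easily confused label pair
    np.fill_diagonal(conf, 1.0)        # diagonal set to 1 for visualisation, as in Figure 4
    return conf
```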
This high correlation indicates that ICL makes errors in categories with \begin{table} \begin{tabular}{c|c c c} \hline \hline Model & SST-2 & TREC & AGNews & EmoC \\ \hline GPT2-XL & \(1.1\times\) & \(2.5\times\) & \(1.5\times\) & \(1.4\times\) \\ GPT-J & \(1.5\times\) & \(2.6\times\) & \(2.2\times\) & \(1.9\times\) \\ \hline \hline \end{tabular} \end{table} Table 4: Acceleration ratios after applying the \(\text{Hidden}_{\text{random}}\) method on GPT2-XL and GPT-J. \begin{table} \begin{tabular}{c|c c c|c|c} \hline \hline Method & \multicolumn{1}{c}{SST-2} & TREC & AGNews & EmoC & Average \\ \hline Vanilla In-context Learning ( 1-shot per class ) & \(61.28\) & \(57.56\) & \(73.32\) & \(15.44\) & \(51.90\) \\ Vanilla In-context Learning ( 5-shot per class ) & \(64.75\) & \(60.40\) & \(52.52\) & \(9.80\) & \(46.87\) \\ Anchor Re-weighting (1-shot per class) & **90.07** & **60.92** & **81.94** & **41.64** & **68.64** \\ \hline \hline \end{tabular} \end{table} Table 2: The effect after adding parameter \(\beta_{0}^{i}\). For AGNews, due to the length limit, we only include three examples for each class as a demonstration. Our Anchor Re-weighting method achieves the best performance overall tasks. Figure 4: Predicted confusion matrix and real confusion matrix on TREC. We set the undefined to \(1\) diagonal for better visualization. Two heatmaps are similar for confusing category pairs, especially in light-color blocks. \begin{table} \begin{tabular}{c|c c c} \hline \hline Method & Label Loyalty & Word Loyalty & Acc. \\ \hline ICL (GPT2-XL) & \(100.00\) & \(100.00\) & \(51.90\) \\ \hline \(\text{Text}_{\text{anchor}}\) & \(51.05\) & \(36.65\) & \(38.77\) \\ \(\text{Hidden}_{\text{random}}\) & \(44.25\) & \(6.62\) & \(31.80\) \\ \(\text{Hidden}_{\text{anchor}}\) & \(\textbf{79.47}\) & \(\textbf{62.17}\) & **45.04** \\ \hline ICL (GPT-J) & \(100.00\) & \(100.00\) & \(56.82\) \\ \hline \(\text{Text}_{\text{anchor}}\) & \(53.45\) & \(43.85\) & \(40.83\) \\ \(\text{Hidden}_{\text{random}}\) & \(47.84\) & \(1.12\) & \(39.72\) \\ \(\text{Hidden}_{\text{anchor}}\) & \(\textbf{89.06}\) & \(\textbf{75.04}\) & \(\textbf{55.59}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Results of different compression methods on GPT2-XL and GPT-J (averaged over SST-2, TREC, AGNews, and EmoC). Acc. denotes accuracy. The highest value excluding Vanilla ICL is marked in bold. Our method achieves the best performance among compression methods. similar label anchors. Overall, this result demonstrates that our anchor-based analysis framework could become an interpretable tool for better understanding the errors of ICL. ## 4 Related Work The existing literature on in-context learning analysis can be broadly divided into two streams, each focusing on different aspects. The first stream explores the influencing factors of ICL based on input perturbation, such as the order [22], the formatting [14, 15], and the selection of the demonstration [16]. Designing proper demonstration construction strategies [23, 17] and calibration techniques [24, 16] could bring clear boosts to the ICL performance. The second stream investigates the inner working mechanism of ICL through different conceptual lenses, such as making an analogy of ICL to gradient descent [25, 16] and viewing the process of ICL as a Bayesian inference [23]. In this paper, we provide a novel perspective by examining the information flow in language models to gain an understanding of ICL. 
Our approach offers new insights and demonstrates the potential of leveraging this understanding to improve the effectiveness, efficiency, and interpretability of ICL.

## 5 Conclusion

In this paper, we propose the hypothesis that label words serve as anchors in in-context learning, aggregating and distributing the task-relevant information flow. Experimental results based on attention manipulation and on the correlation between attention and predictions confirm that the hypothesis holds well for the GPT2-XL and GPT-J models. Inspired by this new perspective, we propose three practical applications. First, an anchor re-weighting method is proposed to improve ICL accuracy. Second, we explore a demonstration compression technique to accelerate ICL inference. Lastly, we showcase an analysis framework to diagnose ICL errors on a real-world dataset. These promising applications further verify the hypothesis and open up new directions for future investigations of ICL.
2309.03911
Identifying Essential Hub Genes and Protein Complexes in Malaria GO Data using Semantic Similarity Measures
Hub genes play an essential role in biological systems because of their interactions with other genes. Gene Ontology (GO) is a vocabulary used in bioinformatics to describe how genes and proteins operate; this flexible ontology covers molecular functions, biological processes, and cellular components (Pmol, Pbio, Pcel). Various methodologies can be used to determine semantic similarity. In this study, we employ the jack-knife method while considering four widely used semantic similarity measures, namely Jaccard similarity, Cosine similarity, Pairwise document similarity, and Levenshtein distance. Based on these similarity values, a protein-protein interaction (PPI) network is built from Malaria GO (Gene Ontology) data, which yields clusters of identical or related protein complexes (Px). The hub nodes of this network are the essential proteins. We apply a variety of centrality measures to form clusters of these networks in order to determine which nodes are the most important. The distinct formation of the clusters makes it simple to determine which class of Px they belong to.
Mamata Das, Selvakumar K., P. J. A. Alphonse
2023-08-09T05:10:41Z
http://arxiv.org/abs/2309.03911v1
Identifying Essential Hub Genes and Protein Complexes in Malaria GO Data using Semantic Similarity Measures ###### Abstract Hub genes play an essential role in biological systems because of their interaction with other genes. A vocabulary used in bioinformatics called Gene Ontology (GO) describes how genes and proteins operate. This flexible ontology illustrates the operation of molecular, biological, and cellular processes (\(\mathbb{P}_{mol},\mathbb{P}_{bio},\mathbb{P}_{cl}\)). There are various methodologies that can be analyzed to determine semantic similarity. Research in this study, we employ the jackknife method by taking into account 4 well-liked Semantic similarity measures namely Jaccard similarity, Cosine similarity, Pairwise document similarity, and Levenshtein distance. Based on these similarity values, the protein-protein interaction network (PPI) of Malaria GO (Gene Ontology) data is built, which causes clusters of identical or related protein complexes (\(P_{x}\)) to form. The hub nodes of the network are these necessary proteins. We use a variety of centrality measures to establish clusters of these networks in order to determine which node is the most important. The clusters' unique formation makes it simple to determine which class of \(P_{x}\) they are allied to. keywords: Protein interaction, Gene Ontology, Malaria, Hub node. + Footnote †: journal: ## 1 Introduction Genes are segments of deoxyribonucleic acid (DNA) that contain instructions for encoding particular proteins. The actions that proteins take are governed by genes. Gene annotation is the process of identifying the coding areas and functionality of genes. It aids in locating structural components and connecting their corresponding function to each gene's location. It contains every required biological information to create any kind of living thing. Gene annotation enables the identification and prediction of \(P_{x}\) functions, enabling additional comparative research [1]. A crucial role in biological processes is played by proteins. Understanding \(P_{x}\) is beneficial for understanding how an organism is put together, but it also aids in disease prediction and the identification of potential target cells [2][3]. Essential proteins are crucial for preserving cellular life. The largest PPI network's (PPIN) topology has been found to have a property [4], which means that the nodal degree distributions of the network are power-law distributions (very close to the cutoff points) [5; 6]. The result is that degrees are not scaled according to any specific scale. Nonetheless, researchers studying PPINs have long established an arbitrary cutoff point above which all proteins with degrees greater than this cutoff are considered to be uniquely unique and are called hub proteins (\(P_{hub}\)) [7; 8]. While \(P_{hub}\) are arbitrarily defined, they often have unique biological characteristics, making them appealing for the introduction. \(P_{hub}\) play a key role in forming a modular protein interaction network and, as some studies suggest, may also be more evolutionarily conserved than non-hub proteins (\(P_{\neg hub}\)) [9; 10]. They are therefore frequently found to be more crucial than \(P_{\neg hub}\). It is possible to refer to it as a biological lexicon that illustrates the functions of the 3 major divisions of \(\mathbb{P}_{mol}\), \(\mathbb{P}_{bio}\), and \(\mathbb{P}_{cel}\). If there is a resemblance between two statements, it is assumed that both of the sentences express the same meaning. 
Similarity between two sentences is defined based on the structure and syntax of the sentences. We perform semantic similarity checks on GO phrases in order to assign a numerical value (a "measuring value") to the GO term. Similarity approaches can be used to locate related genes based on their functions using the ontological data resource (nominal data) [11]. Jaccard's similarity, Cosine similarity, pairwise similarity, Levenshtein, based on measurements of distance and ratio, and are some of the popular text similarity metrics. Most of these measures look for terms that are used frequently in both texts in order to quantify how important they are to the sentence.These measures often yield results in the form of a numeric value between 0 and 1. These metrics enable us to compare two texts and determine how similar the phrases are. The PPIN, which may be either directed or undirected, is a graphical representation of proteins related to one another by edges. Typically, these networks show a biological process. To identify the node in a network that has the most influence, centrality measurements are used. Degree centrality(DC), closeness centrality(CC), betweenness centrality(BC), eigenvector centrality(EC), harmonic centrality, second order centrality, group centrality, and many more are examples of centrality measurements. These metrics provide each node in a network with a numerical number to indicate their contribution to the network. The central node is the node with the highest centrality value and is therefore connected to the majority of the network's nodes. The biological purpose of the hub node is specified by the PPIN, where hub nodes are treated as clusters. By placing the data in a certain group or category to which it belongs, clustering aids in data analysis. The clustering coefficient can be used to determine the degree to which nodes in a network cluster together. The average clustering across the entire network is shown via average coefficient clustering, along with the cluster's degree of completion. The creation of clusters of identical or related \(P_{x}\) is suggested in this research. We applied and assessed multiple similarity algorithms to discover the best one to detect related genes (\(P_{x}\)) using the gene annotation dataset as the training dataset, which comprises gene names and their capabilities or behaviors. We create the PPI network based on the outcomes of related genes. We utilised and examined multiple centrality measures to build clusters of these networks in order to determine the hub node, or the node with the greatest influence. The clusters' unique formation makes it simple to determine which class of \(P_{x}\) they belong to. Let \(V\) be the set of vertices and \(E\) be the set of edges in the graph \(G(\,V,E)\). When discussing a general graph, we use the terms vertices or nodes and edges, and when discussing a protein interaction network, we use the terms protein and interaction. Proteins, nucleic acids, and tiny molecules are necessary for a cell to build a dense network of molecular interactions. Nodes and edges are how molecules interact with one another. The molecular structure's network architecture provides information on the organisation and function of proteins. Protein clusters are created when strongly linked nodes interact to build protein networks [2]. In 2007, [12] suggested a fresh approach in which an algorithm is put into practise to ascertain how semantically comparable GO concepts are. 
using this approach to assess the functional similarity of genes. Utilizing online-based methods for gene similarity, results of gene grouping based on similarity values are obtained. We also looked at [11], where they assessed how advances in genomics research and technology have revealed the dynamic structure and function of genes. Genome annotation is the process of identifying genetic components and their purpose. It is possible to store this in text format. We can thus investigate the query or view the genomics data as a result. Comparable gene expression patterns suggest that the biological functions of the genes are likely to be similar. Bringing together genes with similar functions is the basic goal of clustering. An ontological annotation of data resources serves as the foundation for a semantic similarity metric [13]. Data and annotations can be found in several bioinformatics resources. The foundation study [1] offers a method for identifying \(P_{x}\). Given how complex the structure is, there is an algorithm for estimating \(P_{x}\), and considerable work has gone into its creation. With the help of a probabilistic Bayesian Network (BN), they create the complex subgraph. The parameters of the BN model are learned using training sets of well-known complexes. It extracts the traits that are used to separate complicated objects from simpler ones. This experiment demonstrates that EGCPI can detect \(P_{x}\) more accurately when utilising evolutionary graph clustering. A research [14] that was published in 2011 made the suggestion that node centrality measurements, which are crucial for a variety of graph applications and biological network analysis, should be used instead. Many different approaches for defining centrality have been proposed, from the most basic (like node degree) to the most complex and scalable. For evaluating the relative significance of nodes within a graph of small nodes, centrality is frequently used. The many centrality measurements include eigenvector centrality, betweenness centrality, near centrality, and centrality degree. A article [15] for predicting crucial proteins by combining network topology for cell function was published in 2018 as well. Network levels matter based on PPIs. GO similarity measurements and centrality approaches are used to identify important proteins on PPI networks. The relationship between hub proteins and essentiality in the S. cerevisiae physical interaction network has been explored, but the characteristics of essential modules and the differences in topological properties between essential and non-essential proteins are not well understood. In [16], they have found that essentiality is a modular property, with the number of intra-complex or intra-process interactions being a better predictor of essentiality than overall interaction count. Furthermore, essential proteins within essential complexes have a higher number of interactions, particularly within the complex itself. Identifying key proteins from PPI networks is crucial, but high false positive rates hinder current computational methods. In paper [17], they have proposed a strategy to construct reliable PPI networks by using Gene Ontology (GO)-based semantic similarity measurements. Author have calculated confi dence scores for protein pairs under three annotation terms namely MF, BP, and CC (Molecular function, Biological process, and Cellular component) using five semantic similarity metrics (Jiang, Lin, Rel, Resnik, and Wang). 
Low-confidence links are filtered out, resulting in refined PPI networks. Six centrality methods are applied and compared, showing that the performance under refined networks is better than under the original networks. Among the metrics, Resnik with a BP annotation term performs the best, highlighting its favorable choice for measuring the reliability of protein links. PPIs are crucial for cellular processes, and hubs play a vital role in maintaining the structure of protein interaction networks. [18] This study introduces a novel measure to identify and differentiate two types of hubs, party hubs and date hubs, based on semantic similarity and Gene Ontology data. By combining this measure with centrality measures, the study demonstrates accurate detection of potential party hubs and date hubs, matching confirmed hubs with high accuracy. In the field of molecular biology, identifyin PPIs is crucial. While experimental methods have limitations, computational approaches using semantic similarity from GO annotation have gained attention. [19] study proposed a GO-based method for predicting protein-protein interactions by integrating different similarity measures derived from the GO graph structure. By combining information from both the ascending and descending parts of the three ontologies, the method achieved the best performance in predicting PPIs, demonstrating its effectiveness in this area. ## 2 Materials and methods We have used 24 Malaria GO data [20]. The data has been taken from UniProtKB/Swiss-Prot database which are reviewed [21]. The used dataset has been mentioned in Table 1 where term Entry define Unique and stable entry identifier and Gene Names define Name(s) of the gene(s) encoding the protein. We have done similarity analysis on the data and get the similarity matrix. The edge information (we can say protein interaction data) get from similarity matrix. We have created the PPI network from these edge information. In the PPIN, the nodes represent proteins and edges denote biological interactions between protein pairs. The global properties of each PPIN has been mentioned in Table 3. We may see the similarity matrix in Fig. 2 to 3 where cosine similarity contained maximum no. of 1. Fig. 6 to 8 are showing the PPI network generated from four similarity measure. We have used Networkx, a well-liked software programme written in Python, to create a PPI network. Here, the PPI networks are undirected. As cosine similarity matrix is containing maximum 1 values, we will measure the centrality score for this network only. We have chosen four most important centrality score measure DC, CC, BC, EC with additional PR features. The centrality scores are showed in 4.The threshold values for each category were derived by averaging the values of each centrality measure (\(th_{value}\)). By using the \(th_{l}[value]\) we can locate the node that acts as the hub in the network. The centrality score \(<th_{value}\) of any protein is less influential than the protein's own centrality score \(\geq th_{value}\). The hub protein was then identified by getting the intersection of all the significant proteins in each category (DC, CC, BC, EC, and PR). ### Similarity Analysis The GO term "semantic similarity analysis" is used to group related genes. A few well-known similarity approaches were applied to the dataset in order to determine the optimum similarity approach to be used on the gene ontology data. We are following standard procedures for all similarity tests: 1. 
The same data resource was used for all similarity tests. 2. If the similarity value exceeds the criterion of 0.60 (or 60%), then similarity is 1, else it is 0. \begin{table} \begin{tabular}{l l l l} \hline Entry & Gene Names & Entry & Gene Names \\ \hline P58753 & TIRAP MAL & P16671 & CD36 GP3B GP4 \\ O60603 & TLR2 TIL4 & P04921 & GYPC GLPC GPC \\ Q9NSE2 & CISH G18 & P31994 & FCGR2B CD32 FCG2 IGFR2 \\ P02724 & GYPA GPA & P35228 & NOS2 NOS2A \\ P68871 & HBB & P35613 & BSG UNQ6505/PRO21383 \\ P17927 & CR1 C3BR & O14931 & NCR3 1C7 LY117 \\ P05362 & ICAM1 & P02730 & SLC4A1 AE1 DI EPB3 \\ P11277 & SPTB SPTB1 & Q08495 & DMTN DMT EPB49 \\ P11413 & G6PD & Q16570 & ACK1 DARC FY GPD \\ P16157 & ANK1 ANK & QSTCT6 & SPPL3 IMP2 PSL4 \\ P16284 & PECAM1 & QSTCT7 & SPPL2B IMP4 KIAA1532 PSL1 \\ Q99836 & MYD88 & P01375 & TNF TNFA TNFSF2 \\ \hline \end{tabular} \end{table} Table 1: 24 Malaria Gene name 3. Two-dimensional matrices with similarity values of 1 and 0 between the genes were created. #### 2.1.1 Cosine similarity The similarity between two numerical sequences can be calculated by using the cosine metric. Cosine similarity (\(similarity_{c}\)), or the cosine of the angle between the vectors, is calculated by dividing the dot product of the vectors by their product. In a space called the inner product, the sequences are regarded as vectors. As a result, the cosine similarity only takes into account the angle of the vectors and not their magnitudes. The cosine similarity resides in the \([-1,1]\) range. Cosine similarity has the benefit of being simple, especially for sparse vectors where just the non-zero coordinates need to be taken into account. Using CountVectorizer or TfidfVectorizer (which also provides frequency counts for each gene) supplied by SciKit Learn, we first determine the word count in the sentence in Python before using the cosine similarity algorithm. A Pandas dataframe or sparse matrix can be used as inputs for this method. The output is a matrix of similarity values as a result. A vector with two non-zero values can be obtained by using the Euclidean dot product formula: \(C(A\cdot B)=||X||\;||Y||\cdot\cos\theta\). \(similarity_{c}\) between two n-dimensional attribute vectors \(X\) and \(Y\) is represented using a dot product and magnitude as: \[C_{S}(X\cdot Y)=cos\theta=\frac{X\cdot Y}{||X||\;||Y||}=\frac{\sum_{i=1}^{n}X _{i}Y_{i}}{\sqrt{\sum_{i=1}^{n}}X_{i}^{2}\sqrt{\sum_{i=1}^{n}}Y_{i}^{2}}, \tag{1}\] where \(X_{i}\) and \(Y_{i}\) are components of vector \(X\) and \(Y\), respectively. The resulting similarity spans from 0 indicating orthogonality or decorrelation to 1 suggesting precisely the same, with in-between values denoting intermediate similarity or dissimilarity. The resultant similarity can be expressed as a ratio between \(-1\) and 1, where 1 means exactly the same. #### 2.1.2 Jaccard similarity coefficient A statistic for evaluating the diversity and similarity of sample sets is the Jaccard index [22], commonly referred to as the Jaccard similarity coefficient [23]. The size of the intersection divided by the size of the union of the sample sets defines the Jaccard coefficient, which assesses similarity between finite sample sets: \[J_{S}(J_{X},J_{Y})=\frac{|J_{X}\cap J_{Y}|}{|J_{X}\cup J_{Y}|}=\frac{|J_{X} \cap J_{Y}|}{|J_{X}|+|J_{Y}|-|J_{X}\cap J_{Y}|}, \tag{2}\] e aware that \(0\leq J_{S}(J_{X},J_{Y})\leq 1\) exists by purpose. \(J_{S}(J_{X},J_{Y})=0\) if \(J_{X}\) intersection \(J_{Y}\) is empty. 
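A minimal sketch of the cosine and Jaccard computations described above is given below, using scikit-learn's TfidfVectorizer as mentioned in the text; the 0.60 threshold binarises the similarity matrix, and the token-set Jaccard function is a simplified stand-in for the lemmatisation-based pipeline, with illustrative names.

```python
# Sketch of the similarity analysis: TF-IDF + cosine similarity with a 0.60 cut-off,
# and a token-set Jaccard coefficient.  `annotations` is one GO-annotation string per gene.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def binary_cosine_matrix(annotations, threshold=0.60):
    tfidf = TfidfVectorizer().fit_transform(annotations)
    sim = cosine_similarity(tfidf)
    return (sim >= threshold).astype(int)      # 1 = similar genes, 0 = dissimilar

def jaccard(text_x, text_y):
    a, b = set(text_x.lower().split()), set(text_y.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0
```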
In fields like computer science, ecology, genetics, and other studies that work with binary or binarized data, the Jaccard coefficient is frequently employed. This allows us to compare two gene annotations for similarities. "0" and "1" are two values that are used to express degrees of similarity. Values of "0" and "1" indicate differences between the two sets of gene data. Each gene's corpus Jaccard similarity only contains one specific group of genes. Lemmatization is first used to condense gene data into a single root word, which is then used to calculate Jaccard similarity, which measures similarity. Figure 1: The Cosine similarity score Figure 2: The Jaccard similarity score #### 2.1.3 Levenshtein ratio and distance measure An information theory and computer science metric called Levenshtein distance (\(L_{d}\)) is used to compare two sequences. Here, one word is transformed into another by adding, deleting, or substituting single characters [24]. Strings are used to demonstrate the\(L_{d}\) with an unequal length. \(L_{d}\) between two strings can range from "0" to "1". The Levenshtein distance employs dynamic programming techniques like spell-checking and string matching. Let \(Lev(L_{X},L_{Y})\) be the Levenshtein distance between two gene-annotated data of lengths \(|L_{X}|\) and \(|L_{Y}|\), respectively. Then conditionally we may represent as: where \(s[n]\) is the string's \(n^{th}\) character (counting from 0), and \(s[n]\) is the tail of some string \(s\), where the tail of some string is a string made up of all characters except for the first. #### 2.1.4 Pairwise document similarity A textual document similarity method based on the weights of terms in each document and the common terms (information) shared by two docu \begin{table} \begin{tabular}{c c} \hline Condition (if) & Value \\ \hline \(L_{X}\) = 0 & \(|L_{Y}|\) \\ \(L_{Y}\) = 0 & \(|L_{X}|\) \\ \(L_{X}[0]=L_{Y}[0]\) & \(Lev(tail(L_{X}),tail(L_{Y}))\) \\ otherwise & 1 + min \{\(l_{1}\),\(l_{2}\), \(l_{3}\)\} \\ \hline \end{tabular} \end{table} Table 2: Levenshtein ratio and distance measure Figure 3: The Levenshtein ratio and distance measure similarity score ments is known as the pairwise document similarity method (PDSM) [25]. A weighting method determines a term's weight, which represents its importance in the document. The TF-IDF (Term Frequency-Inverse Document Frequency) form was utilised to calculate pairwise similarity. We have used the Scikit Learn TfidfVectorizer module to implement PDSM. Pairwise document similarity is defined by: \[PDSM(X,Y)=\left(\frac{X\cap Y}{X\cup Y}\right)\times\frac{PF(X,Y)+1}{M-AF(X,Y)+1}, \tag{3}\] The following formula is used to determine the intersection (\(X\cap Y=\sum_{1}^{M}Min(w_{xi},w_{yi})\)) and union (\(X\cup Y=\sum_{1}^{M}Max(w_{xi},w_{yi})\)) of two documents (where \(w_{ji}>0\) denotes the \(i^{th}\) term weight in document \(j\)). The number of phrases that are present and those that are absent are denoted by \(PF(d1,d2)\) and \(AF(d1,d2)\), respectively. 1 is added to the numerator and denominator in order to prevent a Divide-by-Zero error. Conclusion:The methodology with the highest number of \(1^{\prime}s\) was determined to be the best method for similarity analysis based on the output of similarity values (\(1^{\prime}s\) and \(0^{\prime}s\)), as more \(1^{\prime}s\) would indicate more comparable genes. In our work, the Cosine Similarity metric yielded the most similar values in relation to the data resource employed. As shown in Fig. 
2 to 3 which are the results obtained, the similar genes have a value of 1, while the dissimilar genes have a value of 0. Figure 4: The Pairwise document similarity score ### Protein Protein Interaction Network PPIs play an essential role in almost every cell process, so understanding their function in normal and disease states is crucial [26]. PPI is important in predicting target protein protein function and molecule drug ability. As a set of interactions, the majority of genes and proteins realise phenotype functions. A PPIN is a mathematical representation of a protein's physical interaction with its environment [27]. GO data of Malaria has been used to test our criteria for defining protein interaction hubs. ### Centrality Analysis This section examines network centrality metrics, which we employ to pinpoint nodes (proteins) of structural significance. The term "centrality" initially referred to a node's position within a network's hierarchy. Its topological roots have been abstracted as a phrase, and it now very broadly refers to how significant nodes are to a network. Although topological centrality has multiple operationalizations, it has a precise definition. On the other hand, there are numerous operationalizations and meanings of "importance" in the network. Here, we'll look at various operationalizations and interpretations of centrality with page rank. DC, BC, CC, and EC are the four widely used centrality measurements; each has advantages and disadvantages. Apart from these four centralities, we have also mentioned load centrality and page rank for better results. The term "centrality" describes how crucial a node or edge is to the network's connection or movement within a graph. We used centrality measurements to determine the hub or powerful node in the PPI network that was built. It is crucial to identify hub nodes since they will be connected to the majority of other nodes in the network and exert a significant influence over all the others. It is possible to think of the hub node's functionality as the network's overall functionality. Let assume that \(G(V,E)\) is a graph with \(|V|\) vertices and \(|E|\) edges. Suppose \(A=(a_{v,u})\) be the adjacency matrix. If vertex \(v\) is linked to the vertex \(u\) then \(a_{v,u}=1\) otherwise \(a_{v,u}=0\). A partial output of each centrality measure based on the data resource is given in each centrality section. #### 2.3.1 Degree Centrality Degree centrality is one of the easiest centrality measurements to calculate. The number of edges incident to a vertex in a graph, counted twice using loops, is known as the vertex's degree. With regard to the data resources needed, this is a portion of the near centrality measure's output. \[C_{d}(v)=\frac{deg(v)}{\max\,\deg_{u\in v}(u)}. \tag{4}\] Degree centrality ranges from 0 to 1, and a value near 1 indicates that the node is likely to have a maximum degree. Nodes in a network can be ranked according to their degree centrality in order to determine those that are the most prominent or influential. #### 2.3.2 Closeness Centrality Closeness centrality (CC) determined as the total length of the shortest paths connecting it to every other node in the graph, is a measure of a node's centrality in a network. How near a node is to every other node in the network is indicated by its centrality. This is a portion of the closeness centrality measure's result in relation to the data resource that was used. 
\[C_{c}(v)=\frac{|V|-1}{\sum_{u\in V-\{v\}}d(u,v)} \tag{5}\] A node's number is represented by \(|V|\) and its distance from another node is indicated by \(d(u,v)\) (\(u\) and \(v\) are two different node). A node with a high CC value is considered to be of higher quality. Epidemic modeling uses the measure to examine or restrict disease spread. #### 2.3.3 Betweenness Centrality A node's importance is determined by its betweenness centrality (BC). The number of edges the path passes through, the total of the weighted edges, or every pair of vertices with at least one shortest path between them. \[C_{b}(v)=\sum_{xy\in V-\{v\}}\frac{\sigma_{xy}(v)}{\sigma_{xy}} \tag{6}\] where the frequency of shortest paths in the network between nodes \(x\) and \(y\) is indicated by \(\sigma_{xy}\) and \(\sigma_{v}\) denotes the same passing through \(v\). If \(x=1\), then \(\sigma_{xy}=1\). An epidemiological analysis of disease spreading can benefit from the BC by identifying super spreaders. #### 2.3.4 Eigenvector centrality Eigenvector centrality (ER), often known as eigen centrality, is a metric for a node's power within a network. A node will have a high eigenvector centrality if it is directed by a large number of other nodes.The Eigenvector centrality of vertex \(v\) can be defined as: \[x_{(}v)=\frac{1}{\lambda}\sum_{u\in M(v)}x_{u}=\frac{1}{\lambda}\sum_{u\in V }a_{v,u}x_{u} \tag{7}\] Where \(M_{(}v)\) is the set of neighbors of \(v\) and \(\lambda\) is a constant. With a small rearrangement, this can be rewritten in vector notation as the eigenvector equation: \(Ax=\lambda x\). In general, a non-zero eigenvector solution will exist for a wide range of various eigenvalues lambda. However, the Perron-Frobenius theorem [28] indicates that only the largest eigenvalue produces the desired centrality measure due to the extra requirement that all items in the eigenvector be non-negative. \[C_{l}=\sum_{x,y\in V}\sigma_{x,y}(v) \tag{8}\] Where typically, it is assumed that \(\sigma_{x,y}=1\) and that \(x\notin y\), \(x\notin v\), \(y\notin v\) #### 2.3.5 Page Rank The Page Rank (PR)[29] algorithm ranks web content by looking at how links between sites link to each other. Protein interaction networks, as well as any other type of network, can be use it. It uses random walks to identify individuals who are commonly encountered along such walks. Those individuals are viewed as central. Mathematically, it can be defined as: \[C_{PR}(v_{i})=\frac{1-d}{|V|}+d\sum_{(v_{t})\in Inneighbor(v_{i})}\frac{C_{PR}( v_{t})}{outdeg(v_{t})} \tag{9}\] A damping factor called \(d\) is considered a constant value, and is usually defined as 0.85. ### Clustering The degree to which nodes in a graph tend to cluster together is quantified by the clustering coefficient (CCo). The transitivity of a graph has a direct impact on its clustering coefficient. Let's we are computing the Clustering Coefficients (CCo) for our unweighted graphs \(G\), the clustering of a node \(x\) is the fraction of possible triangles through that node that exist: \[CCo_{x}=\frac{2T(x)}{deg(x)(deg(x-1))} \tag{10}\] where \(T(x)\) is the number of triangles through node \(x\) and \(deg(x)\) is the degree of \(x\). ## 3 Results A network of nodes made up of comparable genes can be created, with the nodes representing the genes and the edges between the nodes only being constructed if the similarity weight is 1. Similar genes were gathered into one network based on the findings of the different similarity measures. 
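A brief sketch of how the network construction and the centrality measures described above can be obtained with NetworkX (the library used in this work) is shown below; `genes` denotes the list of protein entries, `sim` the binarised similarity matrix, and the function names are illustrative.

```python
# Build the PPI network from the 0/1 similarity matrix and compute the centrality
# and clustering scores used in this work.
import networkx as nx

def build_ppi_network(genes, sim):
    G = nx.Graph()
    G.add_nodes_from(genes)
    G.add_edges_from((genes[i], genes[j])
                     for i in range(len(genes)) for j in range(i)
                     if sim[i][j] == 1)
    return G

def centrality_table(G):
    return {
        "DC": nx.degree_centrality(G),
        "CC": nx.closeness_centrality(G),
        "BC": nx.betweenness_centrality(G),
        "EC": nx.eigenvector_centrality(G, max_iter=1000),
        "PR": nx.pagerank(G, alpha=0.85),      # damping factor d = 0.85
        "CCo": nx.clustering(G),               # per-node clustering coefficient
    }
```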
In order to gain a better understanding of a network, a variety of network analyses can be conducted. Several topological features can be extracted from the topology of the interaction networks, such as the degree distribution (Fig. 10 to 12), the diameter (\(N_{1},N_{2},N_{3}:2\)), and the clustering coefficient (Table 3). The clustering coefficient, which ranges from 0 to 1, is an indicator of the relationship between a node's neighbours. The global properties of the malaria GO networks are shown in Table 3. All networks except \(N_{4}\) have an average node degree greater than 11. The three networks \(N_{1},N_{2},N_{3}\) have density \(\geq 0.49\); the highest density is 0.967 (\(N_{1}\)) and the lowest is 0.022 (\(N_{4}\)). The average LCC is high (up to 0.977). The centrality scores of \(N_{1}\) are provided in Table 4, which enables us to determine the significance of each protein. This network's average Local Clustering Coefficient (LCC) is 0.977 and its maximum degree is 23. Fig. 10 displays the degree distribution. The LCC represents the density of connections among a node's neighbours and varies from 0 to 1. Nodes with higher values are part of closely related clusters; if a node has a value of 1, it is regarded as a member of a clique. Because they are part of a clique, the proteins \(P01375\) and \(P04921\) in Table 4 have a CCo value of 1. Nine hub proteins were found among the 24 proteins, namely 'P11413', 'P16284', 'P16671', 'P35228', 'P68871', 'Q08495', 'Q16570', 'Q8TCT6', 'Q8TCT7', and these are visualized in Fig. 14 in red colour. We have compared the hub nodes obtained from the proposed approach with those from the VoteRank algorithm in Table 5. The annotation clusters are shown in Table 6, which contains 4 clusters. #### 3.0.1 VoteRank Algorithm VoteRank [30] uses a voting system to determine the order of the nodes in a graph \(G\). The intuition is that, in the real world, if person M has assisted person N, M's ability to support others would typically wane; VoteRank is a vote-based method for identifying influential spreaders built on this point of view. The basic goal of VoteRank is to select a group of spreaders one at a time in accordance with the voting scores that nodes receive from their neighbours. Here, we obtained 14 influential spreaders according to the VoteRank algorithm, namely 'P35228', 'Q8TCT6', 'P35613', 'P68871', 'P16157', 'P16671', 'P11277', 'Q16570', 'Q8TCT7', 'P16284', 'P02730', 'Q08495', 'P05362', and 'P11413'. These proteins are visualized in green colour in Fig. 13. We used four distinct centrality metrics to ensure that the identified hub nodes were consistent across all four measurements. Figure 11: Degree distribution of network 7 Figure 12: Degree distribution of network 8 \begin{table} \begin{tabular}{c c c c c c} \hline S.A Type & \#Edge & Max. Degree & Avg. Node Degree & Density & Avg. 
LCC \\ \hline 2.1.1 & 267 & 23 & 22.25 & 0.967 & 0.977 \\ 2.1.2 & 147 & 24 & 12.25 & 0.49 & 0.845 \\ 2.1.3 & 141 & 23 & 11.75 & 0.511 & 0.845 \\ 2.1.4 & 6 & 3 & 0.5 & 0.022 & 0.167 \\ \hline \end{tabular} \end{table} Table 3: Global properties of five network Figure 14: The hub node (red node) from proposed approach Figure 13: The hub node (green node) from voterank Figure 15: 4 clusters (denoted by red, green, blue, and yellow) \begin{table} \begin{tabular}{c c c c c c c} \hline **Entry** & **DC** & **CC** & **BC** & **EC** & **PR** & **CCo** \\ \hline O14931 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ O60603 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ P01375 & 0.913 & 0.920 & 0.000 & 0.195 & 0.040 & 1.000 \\ P02724 & 0.957 & 0.958 & 0.002 & 0.201 & 0.041 & 0.970 \\ P02730 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P04921 & 0.652 & 0.742 & 0.000 & 0.140 & 0.030 & 1.000 \\ P05362 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P11277 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P11413 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P16157 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P16284 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P16671 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P17927 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ P31994 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ P35228 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P35613 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ P58753 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ P68871 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ Q08495 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ Q16570 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ Q8TCT6 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ Q8TCT7 & 1.000 & 1.000 & 0.002 & 0.209 & 0.043 & 0.964 \\ Q99836 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ Q9NSE2 & 0.957 & 0.958 & 0.000 & 0.203 & 0.041 & 0.996 \\ \hline \end{tabular} \end{table} Table 4: Centrality measure and some important score \begin{table} \begin{tabular}{l l} \hline VoteRank & ‘P11413’, ‘P16284’, ‘P16671’, ‘P68871’, ‘Q08495’, ‘P35228’, ‘Q16570’, ‘Q8TCT6’, ‘Q8TCT7’, ‘P35613’, ‘P16157’, ‘P11277’, ‘P02730’, ‘Q08495’, ‘P05362’ \\ Proposed approach & ‘P11413’, ‘P16284’, ‘P16671’, ‘P35228’ ‘P68871’, ‘Q08495’, ‘Q16570’, ‘Q8TCT6’ \\ \hline \end{tabular} \end{table} Table 5: Comparing hub node of Voterank and proposed approach ### Applications Gene annotation is a representation of the gene's functional information. Sequence similarity and semantic similarity are correlated, which aids in predicting protein function. Genes with comparable expression patterns can be grouped together, which allows for further study. An important purpose of annotation is gene prediction, which aids in the investigation of a genome's protein binding sites. Any ontology approach's key drawback is the inability to employ partial GO annotation to cover any statistical data. A portion of database with functions are included in 7. ## 4 Conclusion In order to identify functionally related genes for the data resource employed, we applied and analysed semantic text similarity methods to obtain the best and most optimal similarity methods. Additionally, PPI networks of related genes were created, and various centrality measures were used to determine the hub nodes of the protein complexes. In order to classify and organise the genes and proteins into their appropriate groupings, this technique can be applied to a bioinformatics dataset. 
Further gene behaviour can be expected based on the traits of the discovered cluster groups. \begin{table} \begin{tabular}{l l l} \hline \hline & count & cluster (bold entries denote hub nodes) \\ \hline \(C_{1}\) & 1 & ‘P02724’ \\ \(C_{2}\) & 2 & ‘P01375’, ‘P04921’ \\ \(C_{3}\) & 7 & ‘O14931’, ‘O60603’, ‘P17927’, ‘P31994’, ‘P58753’, ‘Q99836’, ‘Q9NSE2’ \\ \(C_{4}\) & 14 & ‘P02730’, ‘P05362’, ‘P11277’, **‘P11413’**, ‘P16157’, **‘P16284’**, **‘P16671’**, **‘P35228’**, ‘P35613’, **‘P68871’**, **‘Q08495’**, \\ & & **‘Q16570’**, **‘Q8TCT6’**, **‘Q8TCT7’** \\ \hline \hline \end{tabular} \end{table} Table 6: Annotation cluster \begin{table} \begin{tabular}{l l} \hline \hline Gene & Functions \\ \hline \hline NOS2 & This molecule serves as a messenger throughout the body by producing nitric oxide. \\ SPPL3 & As an I-CLiP, it cleaves type II membrane protein substrates at the borders of their LT domains. \\ NCR3 & Stimulates NK cell cytotoxicity towards neighbouring cells; by controlling NK cells it contributes, for instance, to the destruction of tumor cells. \\ MYD88 & Signaling pathway involving Toll-like receptors and IL-1 receptors in the innate immune system. \\ TNF & It is secreted largely by macrophages and is capable of causing tumor cell death. \\ FCGR2B & In addition to phagocytosis, it modulates antibody production by B cells. \\ \hline \hline \end{tabular} \end{table} Table 7: Gene functions for a portion of the data. ## 5 Compliance with Ethical Standards * **Funding:** This work was carried out by the first author as part of her doctoral research. This research received no external funding. * **Disclosure of potential conflicts of interest:** On behalf of all authors, the corresponding author states that there is no conflict of interest. * **Research involving human participants and/or animals:** This article does not contain any studies with human participants or animals performed by any of the authors. ## 6 Author Contributions Mamata Das designed and performed research; Mamata Das and K. Selvakumar analyzed data; Mamata Das wrote the paper; P.J.A. Alphonse read the paper.
2310.08099
ClimateNLP: Analyzing Public Sentiment Towards Climate Change Using Natural Language Processing
Climate change's impact on human health poses unprecedented and diverse challenges. Unless proactive measures based on solid evidence are implemented, these threats will likely escalate and continue to endanger human well-being. The escalating advancements in information and communication technologies have facilitated the widespread availability and utilization of social media platforms. Individuals utilize platforms such as Twitter and Facebook to express their opinions, thoughts, and critiques on diverse subjects, encompassing the pressing issue of climate change. The proliferation of climate change-related content on social media necessitates comprehensive analysis to glean meaningful insights. This paper employs natural language processing (NLP) techniques to analyze climate change discourse and quantify the sentiment of climate change-related tweets. We use ClimateBERT, a pretrained model fine-tuned specifically for the climate change domain. The objective is to discern the sentiment individuals express and uncover patterns in public opinion concerning climate change. Analyzing tweet sentiments allows a deeper comprehension of public perceptions, concerns, and emotions about this critical global challenge. The findings from this experiment unearth valuable insights into public sentiment and the entities associated with climate change discourse. Policymakers, researchers, and organizations can leverage such analyses to understand public perceptions, identify influential actors, and devise informed strategies to address climate change challenges.
Ajay Krishnan, V. S. Anoop
2023-10-12T07:48:50Z
http://arxiv.org/abs/2310.08099v2
# ClimateNLP: Analyzing Public Sentiment Towards Climate Change using Natural Language Processing ###### Abstract Climate change's impact on human health poses unprecedented and diverse challenges. Unless proactive measures based on solid evidence are implemented, these threats will likely escalate and continue to endanger human well-being. The escalating advancements in information and communication technologies have facilitated the widespread availability and utilization of social media platforms. Individuals utilize platforms such as Twitter and Facebook to express their opinions, thoughts, and critiques on diverse subjects, encompassing the pressing issue of climate change. The proliferation of climate change-related content on social media necessitates comprehensive analysis to glean meaningful insights. This paper employs natural language processing (NLP) techniques to analyze climate change discourse and quantify the sentiment of climate change-related tweets. We use ClimateBERT, a pretrained model fine-tuned specifically for the climate change domain. The objective is to discern the sentiment individuals express and uncover patterns in public opinion concerning climate change. Analyzing tweet sentiments allows a deeper comprehension of public perceptions, concerns, and emotions about this critical global challenge. The findings from this experiment unearth valuable insights into public sentiment and the entities associated with climate change discourse. Policymakers, researchers, and organizations can leverage such analyses to understand public perceptions, identify influential actors, and devise informed strategies to address climate change challenges. Climate change Sentiment analysis ClimateBERT Public discourse Natural language processing ## 1 Introduction According to the World Health Organization (WHO), the greatest health threat to people in the twenty-first century is climate change. Between 2030 and 2050, this risk is expected to cause an additional 250,000 fatalities annually and express itself in various ways. Many of these health concerns can be decreased or avoided with prompt and effective adaptation, but doing so necessitates in-depth research and policies that are multi-sectoral, multi-system, and collaborative at several scales[14]. One of the most important issues that needs significant attention is climate change. The majority of scientists agree that human activity is accelerating the Earth's climate change, which is having a disastrous effect on the world and its population. The consequences of climate change are more clear. In recent years, extreme weather occurrences like hurricanes, tornadoes, hail, lightning, fires, and floods have increased in frequency and intensity. As the world's ecosystems change quickly, access to the natural resources and agricultural methods that support humanity is in danger. [1]. The problem of climate change is complicated, and there is no quick fix. But to identify solutions, it's critical to comprehend the issue. Large volumes of text data can be analyzed using natural language processing which may unearth interesting patterns[20]. The natural language processing approaches can be applied to the climate change domain as well for finding the causes and leveraging patterns such as public sentiment and discourse towards this global issue. Recent years have witnessed many people using social media to share their views, concerns, and public opinions on any topic under the skyAnoop et al. (2023)Jickson et al. (2023). 
This has caused a huge amount of unstructured but dynamic data to be generated in such platforms, which are goldmines for social science researchersJohn et al. (2023)Anoop and Sreelakshmi (2023). Collecting, curating, and analyzing such data is crucial for finding public perceptions and viewpoints on socially relevant discussionsVarghese and Anoop (2022)Lekshmi and Anoop (2022). Similarly, understanding public opinions and sentiments on climate change is crucial for public policymakers, governments, and other administrators to devise better policies and intervention measures to address the challenges. In this research, natural language processing is used to examine tweets that discuss climate change. We use Climate-BERTWebersinke et al. (2021), a pre-trained language model trained on a large set of climate change-related documents, and fine-tune the same for sentiment classification tasks. The findings may be used better to understand the public's understanding of climate change, and it can also be used to identify the key stakeholders in the climate change debate. The interesting insights from this project will provide a foundation for informed decision-making and policy formulation regarding climate change. Additionally, the findings will contribute to advancing NLP techniques and their application in climate change analysis. ### Effects of Climate Change The consequences of climate change are extensive and have a significant impact on many facets of our planet. Several important aspects that shed information on the effects of climate change have been highlighted in research articles. Climate change is already manifesting globally, with a notable increase in extreme weather events. The frequency and intensity of hurricanes, floods, and droughts have amplified, causing widespread destruction and loss of life. Coastal areas are also seriously threatened by increasing sea levels, which might result in massive population displacement, increased erosion, and flooding. As glaciers continue to melt, water supplies diminish, affecting regions dependent on glacial meltwater for agricultural, industrial, and domestic purposes. Climate change-induced shifts in environmental conditions are causing profound changes in plant and animal life. Species are forced to adapt or face extinction as they grapple with altered ecosystems and changing habitats. This disruption to biodiversity has cascading effects on ecosystem functioning and services, with implications for food security, ecosystem stability, and human well-being. Another consequence of climate change is the heightened risk of diseases. As temperatures rise, disease-carrying organisms, such as mosquitoes, expand their geographic range, exposing previously unaffected regions to vector-borne illnesses. This poses a significant public health challenge, necessitating the development of effective strategies for disease prevention, control, and surveillance. The impacts of climate change extend beyond the natural environment, affecting societies and economies globally. Disruptions to ecosystems and weather patterns have severe social and economic repercussions, with vulnerable communities being disproportionately affected. Climate-induced events, such as extreme heatwaves, prolonged droughts, and intense storms, lead to the displacement of populations, loss of livelihoods, and increased socioeconomic inequality. 
Consequently, countries face significant challenges in managing the economic and social ramifications of climate change, including the need for adaptation measures and the transition to sustainable practices. In summary, climate change is causing a range of effects that reverberate across multiple dimensions of our planet. The effects are widespread and provide serious difficulties for human society and ecosystems, from extreme weather events and increasing sea levels to melting glaciers and biodiversity loss. Addressing climate change requires concerted global efforts to mitigate greenhouse gas emissions, enhance resilience, and foster sustainable development practices to safeguard the future of our planet and its inhabitants. ### NLP for Climate Change Analysis Natural Language Processing (NLP) is an emerging discipline in computer science that concentrates on the creation of algorithms and models to facilitate computers in comprehending, analyzing, and producing human language. In the context of climate change analysis, NLP techniques have proven to be invaluable in extracting meaningful insights from vast amounts of textual data, offering a new perspective on this critical global issue. One application of NLP in climate change analysis is the ability to analyze public opinion on the topic. By leveraging sentiment analysis techniques, researchers can gauge the prevailing sentiments, attitudes, and beliefs surrounding climate change. This understanding of public opinion is crucial for policymakers, as it helps them tailor communication strategies, design effective interventions, and foster public engagement in addressing climate change challenges. Furthermore, NLP techniques allow for identifying key stakeholders involved in the climate change debate.Through the extraction and examination of written information from various outlets, including news articles, social media platforms, and scientific journals, scholars can discern the key individuals, organizations, and institutions influencing the conversation surrounding climate change. This knowledge provides valuable insights into the various perspectives, interests, and motivations shaping climate change discussions, facilitating informed decision-making and targeted engagement with relevant stakeholders. NLP also enables the tracking of the progress of climate change negotiations. By analyzing texts from international agreements, policy documents, and meeting transcripts, researchers can monitor the evolution of climate change discussions, assess the effectiveness of existing frameworks, and identify areas of convergence or divergence among different stakeholders. This monitoring capability helps policymakers and negotiators evaluate the efficacy of climate change policies, identify potential barriers to progress, and inform future negotiations and policy development. Additionally, NLP techniques can be applied to monitor the impact of climate change on different regions of the world. By analyzing textual data from scientific reports, environmental assessments, and socio-economic surveys, researchers can gain insights into the specific vulnerabilities, risks, and adaptation strategies associated with climate change in different geographic areas. This information is crucial for policymakers and local communities to prioritize resources, implement targeted interventions, and build resilience against the impacts of climate change. 
In summary, NLP techniques offer a powerful toolkit for analyzing climate change-related textual data, enabling researchers to gain valuable insights into public opinion, identify key stakeholders, track the progress of climate change negotiations, and monitor the impact of climate change on different regions. By harnessing the potential of NLP, policymakers and researchers can enhance their understanding of climate change dynamics and develop evidence-based strategies for mitigation, adaptation, and effective decision-making in the face of this global challenge. The major contributions of this research may be summarized as follows: * Conducts a detailed study on different approaches reported in the natural language processing literature on sentiment analysis using social media data. * a pre-trained model on climate data, for the sentiment analysis of tweets on climate change. * Conducts extensive experiments and reports the experimental comparisons with different machine learning algorithms on sentiment analysis using ClimateBERT ## 2 Related Studies This section provides an overview of recent and influential research papers in machine learning and natural language processing, specifically on climate change analysis. It also discusses relevant studies that explore sentiment text classification approaches that are pertinent to the proposed project. The reviewed studies have demonstrated the effectiveness of sentiment analysis and named entity recognition techniques in the context of climate change analysis. NLP models such as BERT and attention mechanisms have shown promising results in capturing contextual information and improving performance. These studies provide valuable insights and methodologies to guide our approach in implementing sentiment analysis and named entity recognition on climate change-related tweets and texts using the ClimateBERT pre-trained model. A study was conducted to assess the effectiveness of ML algorithms in predicting long-term global warming. The research examined algorithms such as LR, SVR, lasso, and ElasticNet to connect average annual temperature and greenhouse gas factors. By analyzing a dataset spanning 100-150 years, the study found that carbon dioxide (CO2) had the most significant impact on temperature changes, followed by CH4, N2O, and SF6. Using this information, the researchers were able to forecast temperature trends and greenhouse gas levels for the next decade, providing valuable insights for mitigating the consequences of global warming. The research analyzes public sentiments regarding climate change by studying Twitter data. The study aims to tackle the problems of polarization and misinformation that often arise during climate change discussions on social media platforms. To achieve this, the researchers introduce a multi-task model named MEMOCLiC, which combines stance detection with additional tasks like emotion recognition and offensive language identification. By employing various embedding techniques and attention mechanisms, the proposed framework effectively captures specific characteristics and interactions related to different modalities. Experimental findings highlight the superior performance of the MEMOCLiC model in enhancing stance detection accuracy compared to baseline methods. This research paper examines the issue of polarization and belief systems prevalent in climate change discussions on Twitter. 
The paper proposes a framework that aims to identify statements denying climate change and classify tweets into two categories: denier or believer stances [22]. The framework focuses on two interconnected tasks: stance detection and sentiment analysis. Combining these tasks, the multi-task model utilizes feature-specific and shared-specific attention frameworks to acquire comprehensive features. Experimental results demonstrate that the proposed framework enhances stance detection accuracy by leveraging sentiment analysis, outperforming uni-modal and single-task approaches. This research paper utilizes the BERT model and a convolutional neural network (CNN) [14]. The study analyzes public opinions on climate change by examining Twitter data. The results indicate that the proposed model surpasses conventional machine learning methods, accurately identifying climate change believers and deniers. The authors suggest this model has significant potential for monitoring and governance purposes, particularly in smart city contexts. Additionally, future work involves investigating alternative deep learning algorithms and expanding the analysis to encompass other social media platforms. This research paper (Ceylan, 2022) investigates the application of AI and NLP models to analyze extensive unstructured data concerning climate change. The study primarily aims to develop an information management system capable of extracting pertinent information from diverse data sources, particularly technical design documentation. By utilizing pre-trained AI-based NLP models trained on textual data and integrating non-textual graphical data, the researchers showcase the system's effectiveness in swiftly and efficiently retrieving precise information. The ultimate objective is to promote knowledge democratization and ensure the accessibility of information to a broad user base. This research paper examines people's emotions and opinions concerning the conflict between Russia and Ukraine by employing ML and DL techniques (Sirisha and Bolem, 2022). The study introduces a novel hybrid model combining sequence and transformer models, namely RoBERTa, ABSA, and LSTM. To conduct the analysis, a large dataset of geographically tagged tweets related to the Ukraine-Russia war is collected from Twitter, and sentiment analysis is performed using the proposed model. The findings indicate that the hybrid model achieves a remarkable accuracy of 94.7%, surpassing existing approaches in sentiment analysis. The study underscores the significance of social media platforms such as Twitter in gaining insights into public sentiment and opinions regarding global events. This research paper aims to overcome the limitations of general language models in effectively representing climate-related texts. The authors introduce CLIMATEBERT, a transformer-based language model that undergoes pretraining on a vast corpus of climate-related paragraphs extracted from diverse sources such as news articles, research papers, and corporate disclosures (Webersinke et al., 2021). Comparative evaluations reveal that CLIMATEBERT surpasses commonly used language models, exhibiting a substantial 48% enhancement in a masked language model objective. The improved performance of CLIMATEBERT contributes to lower error rates in various climate-related downstream tasks. To encourage further research at the intersection of climate change and natural language processing, the authors provide public access to the training code and weights of CLIMATEBERT. 
This research paper uses ML algorithms to analyze and predict climate change. The authors emphasize the significance of comprehending and adapting to the impacts of climate change on both human society and the environment. The study discusses the application of ML methods in analyzing historical temperature data and carbon dioxide concentrations dating back to the 18th century. It emphasizes the potential advantages of employing machine learning and artificial intelligence in interpreting and harnessing climate data for simulations and predictions. Multiple machine learning algorithms, such as DT, RF, and ANN, are examined for climate change risk assessment and prediction. The authors conclude that integrating machine learning techniques can enhance climate modeling, enabling informed decision-making concerning climate change mitigation and adaptation strategies. This research paper introduces the CimaText dataset(Varini et al., 2020), specifically developed to detect sentence-level climate change topics within textual sources. The authors emphasize the significance of automating the extraction of climate change information from media and other text-based materials to facilitate various applications, including content filtering, sentiment analysis, and fact-checking. Through a comparative analysis of different approaches for identifying climate change topics, they find that context-based algorithms like BERT outperform simple keyword-based models. However, the authors also identify areas that require improvement, particularly in capturing the discussion surrounding the indirect effects of climate change. The authors anticipate this dataset will be a valuable resource for further research in natural language understanding and climate change communication. (Upadhyaya et al., 2022)It underscores the importance of comprehending public perception and acceptance of climate change policies. The study examines diverse data sources, such as social media, scientific papers, and news articles, to perform sentiment analysis. ML techniques, specifically SVM, are evaluated for extracting valuable insights from these data sources. The paper concludes that supervised machine learning techniques exhibit effectiveness in sentiment analysis, highlighting that ensemble and hybrid approaches yield superior outcomes compared to individual classifiers. ## 3 Materials and Methods ### Label Studio Label Studio is an open-source data annotation tool (available at [https://labelstud.io/](https://labelstud.io/)) that provides a user-friendly interface for creating labeled datasets by annotating data for machine learning and artificial intelligence tasks. The tool supports various annotation types, including text classification, NER, object detection, image segmentation, and more. Label Studio allows users to import data from various sources, such as CSV files, JSON, or databases, and annotate them using a customizable interface. It provides a collaborative environment where multiple annotators can collaborate on a project, with features like task assignment, annotation review, and inter-annotator agreement measurement. One of the key features of Label Studio is its extensibility. It provides a flexible architecture that allows users to customize the annotation interfaces and incorporate custom labeling functions using JavaScript and Python. This enables the tool to adapt to different annotation requirements and integrate with existing machine-learning workflows. 
Label Studio also supports active learning, where the tool can suggest samples to be annotated based on a model's uncertainty, helping to optimize the annotation process and improve model performance. ### snscrape _snscrape_ is a Python library and command-line tool (available at [https://github.com/JustAnotherArchivist/snscrape](https://github.com/JustAnotherArchivist/snscrape)) for scraping social media content. It lets you retrieve public data from various social media platforms, including Twitter, Instagram, YouTube, Reddit, etc. With snscrape, you can fetch posts, comments, likes, followers, and other relevant information from social media platforms. It provides a flexible and customizable way to search for specific keywords, hashtags, usernames, or URLs and extract the desired content. The library supports scraping recent and historical data from social media platforms, enabling you to gather insights, perform analysis, monitor trends, and conduct research based on social media content. snscrape offers a command-line interface that allows you to search for and scrape social media data interactively. You can specify various parameters, such as the number of results, date range, and output format, to customize your scraping process. In addition to the command-line interface, snscrape provides a Python API that allows you to integrate social media scraping into your own Python scripts and applications. The API offers more advanced functionalities, giving you fine-grained control over the scraping process and allowing you to process the scraped data programmatically. One of the key advantages of snscrape is its ability to work with multiple social media platforms, providing a unified interface for scraping different types of content. It handles the intricacies of each platform's APIs and HTML structures, making it easier for developers to extract data without needing to learn the specific details of each platform. It's important to note that snscrape respects the terms of service and usage restrictions of each social media platform. It is primarily intended for scraping publicly available content and should be used responsibly and in compliance with the platform's policies. ### Newspaper 3k Newspaper3k is a Python library and web scraping tool (available at [https://newspaper.readthedocs.io/](https://newspaper.readthedocs.io/)) that allows you to extract and parse information from online news articles. It provides a simple interface to automate the fetching and processing of news articles from various online sources. With Newspaper3k, you can retrieve article metadata such as the title, author, publish date, and article text from news websites. It also supports extracting additional information like keywords, summaries, and article images. The library uses advanced NLP techniques to extract relevant information from the HTML structure of the news articles. Newspaper3k is designed to handle various complexities of news websites, including different article formats, pagination, and content extraction. It has built-in functionality to handle newspaper-specific features like multi-page articles, article pagination, and RSS feeds. One of the advantages of Newspaper3k is its ease of use. It abstracts away the complexities of web scraping and provides a clean and intuitive API. It also handles various encoding and parsing issues that often arise when dealing with news articles from different sources. 
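As a minimal usage sketch (ours; the URL below is a hypothetical placeholder rather than one of the sources scraped in this study), the Article workflow of Newspaper3k looks as follows.

```python
# Minimal Newspaper3k sketch: download and parse a single news article.
# The URL is a hypothetical placeholder, not a source used in this study.
from newspaper import Article

url = "https://example.com/climate-article"
article = Article(url)
article.download()   # fetch the raw HTML
article.parse()      # extract title, authors, publish date, and body text

print(article.title)
print(article.authors)
print(article.publish_date)
print(article.text[:300])   # first part of the article body

# Optional NLP step (requires NLTK data) for keywords and a short summary.
article.nlp()
print(article.keywords)
print(article.summary)
```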
Newspaper3k is widely used for various applications, including content analysis, sentiment analysis, and data mining. It offers a convenient way to gather news data for research, data analysis, and machine learning projects. ### ClimateBERT ClimateBERT is a specialized variant of the BERT model specifically trained and tailored for addressing climate change-related language tasks. Building upon the foundation of BERT, ClimateBERT is pre-trained on a large corpus of climate change-related documents and text sources, enabling it to capture the nuances and domain-specific knowledge relevant to climate science. This fine-tuning process equips ClimateBERT with a deep understanding of climate-related concepts, terminology, and contextual dependencies. [Iqbal et al., 2023] By leveraging ClimateBERT, researchers and practitioners in climate change analysis can effectively tackle various NLP tasks, such as sentiment analysis on climate-related tweets or named entity recognition on climate change articles. Integrating domain-specific knowledge into the pre-training process makes ClimateBERT a powerful tool for extracting insights, identifying patterns, and extracting valuable information from climate-related text data. Its application in climate change analysis can aid in improving decision-making, facilitating research, and enhancing our understanding of the complex challenges climate change poses. ## 4 Proposed Approach This section deals with the proposed methodology for the sentiment analysis of climate-related tweets from Twitter using ClimateBERT embeddings and a Random Forest classifier. The overall workflow of the proposed approach is given in Figure 1. ### Dataset The methodology begins with data collection from Twitter using the snscrape library. The collected data is loaded into a pandas DataFrame for further processing. The dataset consists of climate change-related tweets, valuable for sentiment analysis. The tweets were gathered between 1 January 2022 and 2 February 2023. However, it's important to note that the collected data may contain class imbalance, where certain sentiment categories are overrepresented while others are underrepresented. This could potentially bias the model's predictions. Initially, the dataset consisted of 4410 data points. After data augmentation, the final dataset consists of 5506 data points, with three labels: Positive, Negative, and Neutral. The data set is available at [https://github.com/appliednlp-duk/nlp-climate-change](https://github.com/appliednlp-duk/nlp-climate-change). Table 1 shows examples of how each tweet is labeled as Positive, Negative, or Neutral. ### Data Preprocessing To prepare the data for sentiment analysis, several preprocessing steps are applied. Special characters and digits are removed, and the text is converted to lowercase to ensure consistency. Tokenization is performed, which involves splitting the words into individual units. Stopwords (common words with little contextual meaning) are removed, and \begin{table} \begin{tabular}{|l|l|} \hline Content & Labels \\ \hline Researchers use deep learning to simulate chlorophylla \& & \\ phyycocyanin with an internet of things system to detect \& & \\ quantify \#cyanobacteria, to improve \#eutrophication & \\ management schemes for freshwater reservoirs. & & \\ \#algae \#microbiology \#environment \#iot & \\ eeer.org/journal/view.pa\&\^{} & \\ \hline Why is our \#Conservatives government so evil? 
& \\ \#RishishSunak \#climateChange \#Conservatives \#FuckingThieves [https://t.co/ccGyylmYlf](https://t.co/ccGyylmYlf) & Negative \\ \hline Sierra snowpack 205\% of its historical average \& Climate & \\ Change... - San Francisco Examiner dlvr.it/ShpGVN & Neutral \\ \#ClimateChange & & \\ \hline \end{tabular} \end{table} Table 1: A snapshot of the dataset used for the experiment Figure 1: Overall workflow of the sentiment analysis on climate Change tweets stemming or lemmatization techniques may be applied to normalize the words. This preprocessing step ensures the text data is cleaned and ready for further analysis. ### Experimental setup This section describes the experiment for implementing the proposed approach detailed in section 3. All the experiments were executed on NVIDIA A100 with 80 GB GPU memory and 1,935 GB/Second bandwidth. The ChatGPT Sentiment tweets were pre-processed to make them ready for experimentation. All scripts were written in Python 3.9, and the Machine Learning Models were used from the Scikit-Learn library available at [https://scikitlearn.org/stable/](https://scikitlearn.org/stable/). ## 5 Results and Discussions This section presents the results obtained from the experiment using the proposed approach outlined in Section 4. The results, along with a detailed discussion, are given in this section. DT has 68.89%, 67.13%, 68.89%, and 67.59%, and LR has 63.81%, 63.48%, 63.81%, and 63.60% for the A, P, R, and F values. Table 4 shows the Accuracy, Precision, Recall, and F-measure values for RF, SVM, DT, and LR algorithms. For ClimateBERT embeddings, RF has 85.22%, 85.73%, 85.22%, and 83.33%, SVM has 75.66%, 76.20%, 75.66%, and 75.07%, DT has 80.62%, 79.88%, 78.62%, and 77.47%, and LR has 73.84%, 72.92%, 73.84%, and 75.69% for the A, P, R, and F values. After training, the model's performance is evaluated on the test set to assess its ability to predict sentiment. The model is switched to evaluation mode, and predictions are made on the test set. Accuracy, precision, recall, and F1-score are calculated to measure the model's performance. The results obtained from the evaluation metrics are reported. Accuracy provides an overall measure of correctness, precision measures the proportion of correctly predicted positive sentiments, recall captures the ability to identify all positive sentiments, and the F1-score provides a balanced measure between precision and recall. These metrics provide insights into how well the model predicts sentiment on climate change-related tweets. By following this experimental setup, the methodology ensures that the collected data is cleaned, balanced, and used effectively to train a sentiment analysis model. The results and discussions provide valuable insights into the model's performance and its ability to analyze sentiment in climate change discussions on Twitter. ## 6 Conclusions Climate change, a pressing global concern, necessitates thorough analysis and understanding across diverse domains to mitigate its impacts effectively. In recent years, the fusion of NLP techniques and machine learning algorithms has emerged as a promising approach for comprehending the complexities and nuances of climate change through the lens of textual data. This paper utilized the advancements in domain-specific large language models to harness the potential of NLP in addressing the challenges posed by climate change through sentiment analysis. 
By leveraging advanced NLP methodologies, we could identify climate change discourse that may enable uncovering valuable insights and facilitate informed decision-making. \begin{table} \begin{tabular}{l l l l l} \hline \hline Model & \multicolumn{4}{c}{TF-IDF + word2Vec} \\ & Accuracy & Precision & Recall & F-measure \\ \hline RF & 79.21 & 79.45 & 79.21 & 79.29 \\ SVM & 85.39 & 85.34 & 85.39 & 85.22 \\ DT & 62.43 & 61.28 & 62.43 & 61.60 \\ LR & 74.77 & 74.33 & 74.77 & 74.28 \\ \hline \hline \end{tabular} \end{table} Table 6: Precision, Recall, Accuracy, and F-Measure values for TF-IDF + word2Vec \begin{table} \begin{tabular}{l l l l l} \hline \hline Model & \multicolumn{4}{c}{CountVectorizer + word2Vec} \\ & Accuracy & Precision & Recall & F-measure \\ \hline RF & 79.03 & 79.45 & 79.03 & 79.18 \\ SVM & 81.30 & 81.10 & 81.30 & 81.09 \\ DT & 62.06 & 61.01 & 62.06 & 61.37 \\ LR & 81.21 & 81.04 & 81.21 & 80.92 \\ \hline \hline \end{tabular} \end{table} Table 7: Precision, Recall, Accuracy, and F-Measure values for CountVectorizer + word2Vec \begin{table} \begin{tabular}{l l l l l} \hline \hline Model & \multicolumn{4}{c}{BERT} \\ & Accuracy & Precision & Recall & F-measure \\ \hline RF & 76.78 & 77.46 & 76.78 & 76.93 \\ SVM & 64.35 & 63.65 & 64.35 & 63.70 \\ DT & 68.89 & 67.13 & 68.89 & 67.89 \\ LR & 63.81 & 63.48 & 63.81 & 63.60 \\ \hline \hline \end{tabular} \end{table} Table 8: Performance evaluation of SVM, LR, RF, and DT algorithms using BERT
2305.14606
Taylor Learning
Empirical risk minimization stands behind most optimization in supervised machine learning. Under this scheme, labeled data is used to approximate an expected cost (risk), and a learning algorithm updates model-defining parameters in search of an empirical risk minimizer, with the aim of thereby approximately minimizing expected cost. Parameter update is often done by some sort of gradient descent. In this paper, we introduce a learning algorithm to construct models for real analytic functions using neither gradient descent nor empirical risk minimization. Observing that such functions are defined by local information, we situate familiar Taylor approximation methods in the context of sampling data from a distribution, and prove a nonuniform learning result.
James Schmidt
2023-05-24T01:10:58Z
http://arxiv.org/abs/2305.14606v1
# Taylor Learning ###### Abstract Empirical risk minimization stands behind most optimization in supervised machine learning. Under this scheme, labeled data is used to approximate an expected cost (risk), and a learning algorithm updates model-defining parameters in search of an empirical risk minimizer, with the aim of thereby approximately minimizing expected cost. Parameter update is often done by some sort of gradient descent. In this paper, we introduce a learning algorithm to construct models for real analytic functions using neither gradient descent nor empirical risk minimization. Observing that such functions are defined by local information, we situate familiar Taylor approximation methods in the context of sampling data from a distribution, and prove a nonuniform learning result. ## 1 Introduction Empirical risk minimization forms the backbone of supervised machine learning: a finite labeled data set is used to search a function space for a model which fits both the data and the distribution generating it. Intuition for why fitting empirical risk ought to fit expectation may crudely derive from Law of Large Numbers reasoning, but generally uniform guarantees, such as PAC learnability, provide rigorous grounds for its use. In the absence of uniform guarantees and presence of particularized knowledge of data, one may consider other schemes for constructing a model. We offer one, using insight from calculus that local information for a class of real analytic functions provides global information. Together with the definition of derivative as limit, we observe that sampled function evaluations may provide arbitrarily fine approximations of derivatives to arbitrarily high order, and sampling enough is guaranteed to produce samples close enough to a point of interest. Thus we turn Taylor polynomial principles into a learning algorithm and show that under some conditions on the measure generating data, this procedure is guaranteed to produce a well-approximating function in probability. The result presents a preliminary effort to merge sampling in numerical analysis with probabilistic sampling. More fundamentally, it makes use of _nonuniform_ learnability notions ([1], akin to [2, Ch. 7]), namely precision guarantees with confidence whose sample complexity may not be independent of the data-generating distribution. While weaker than e.g. PAC learnability, this version of learnability requires careful attention to peculiarities of both the hypothesis class and the measure, and we anticipate that other concrete and interesting examples abound showing its use. In preliminary detail, we will show that a real analytic function may be well approximated by taking finitely many function evaluations to form an approximation of a sufficiently high degree Taylor polynomial. Where this result diverges from ordinary Taylor approximation statements is that 1. derivatives \(f(p),f^{\prime}(p),\ldots,f^{(n)}(p)\) are to be _approximated_ via 2. sampled data which comes from a distribution. In bounding the expected error, we shall decompose the integral into two components, the tail of the integral and the integral on a compact domain. We argue that under mild assumptions on the distribution, the tail may be made arbitrarily small. On the compact domain, we show that with high probability, the derivatives \(f^{(j)}(p)\) for \(j=0,\ldots,n\) may be approximated well enough provided enough data is sampled. In section 2, we review relevant notions from calculus and learning theory. 
We start in section 2.1 with Taylor series and convergence, and by extension Taylor polynomials and their approximation properties. In section 2.2 we then review a variant of PAC learnability ([1]), which removes assumptions and guarantees of uniformity with respect to measure. We apply this modified notion to regression tasks, where both models and costs may be unbounded. The background prepares for section 3 where we present and prove the main theorem which says that any analytic function may be learned using the class of polynomials provided noiseless data is generated from a distribution without fat tails. ## 2 Background ### Taylor Series and Analytic Functions Taylor series and polynomials contain a kernel of calculus ([3], [4]), and distill much of integration and differentiation to knowledge of such on monomials: \[\int x^{n}dx=\frac{x^{n+1}}{n+1}\quad\text{and}\quad\frac{d}{dx}x^{n}=nx^{n-1}.\] Particularly of use is that monomials \(x^{n}\)--and by extension, polynomials generally--are computable. Moreover, Taylor computation well-captures the very concept of derivative. While derivatives are often introduced through a limiting procedure on secant approximations for a function's slope, a better perspective is that they induce a line of best fit at a point ([5]): \[f(h)=f(0)+f^{\prime}(0)h+o(h), \tag{1}\] where little-oh is defined implicitly by satisfaction of the limit \(\lim_{h\to 0}\frac{o(h)}{h}=0\); the little-oh term captures how quickly the error term dissipates. Higher order Taylor polynomials accentuate this point: \[f(h)=f(0)+f^{\prime}(0)h+\frac{1}{2}f^{\prime\prime}(0)h^{2}+o(h^{2})\] is a quadratic of best fit at 0, with error term \(o(h^{2})\) that approaches zero much faster: \[\lim_{h\to 0}\frac{o(h^{2})}{h^{2}}=0.\] Speed of convergence as \(h\to 0\) is one way to think about higher order derivatives: they define the best fit polynomial of degree \(n\) at a point whose fit is quantified at that point by error term \(o(h^{n})\). Yet another perspective is that the Taylor polynomial approximates a function _away_ from \(h\approx 0\): consider plots in fig. 1 which indicate approximations far from point of expansion for various order Taylor polynomials. Supposing that function \(f:R\to R\) has derivatives of all orders at point \(p\), the _Taylor series_\(T_{p}(f)\) of \(f\) about point \(p\in R\) is defined by \[T_{p}(f)(x)\coloneqq\sum_{j=0}^{\infty}\frac{f^{(j)}(p)}{j!}(x-p)^{j},\] where \(f^{(j)}(p)\) denotes the \(j\)th derivative of \(f\) evaluated at point \(p\). When \(f(x)=T_{p}(f)(x)\) for all \(x\in\mathbb{R}\), we say that \(f\) is _real analytic_. The degree \(n\)_Taylor polynomial_ is defined \[T_{p,n}(f)(x)\coloneqq\sum_{j=0}^{n}\frac{f^{(j)}(p)}{j!}(x-p)^{j}\] and for real analytic functions approximates \(f\) away from \(p\) as \(n\) grows. The convergence is uniform: for any bounded interval \([a,b]\subset\mathbb{R}\) and precision specification \(\varepsilon\), there is a number \(N_{\varepsilon}>0\) for which \[\max_{x\in[a,b]}|T_{p,n}(f)(x)-f(x)|<\varepsilon \tag{2}\] as long as \(n\geq N_{\varepsilon}\). Thus knowledge of derivatives at a single point provides both quantified local (\(o(h^{n})\)) and global (uniform convergence on arbitrarily large bounded intervals) information. We import this perspective into a learning scheme, which emphasizes using data concentrated about a point, instead of all data, as a traditional empirical risk minimization learning algorithm otherwise would. 
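As a quick numerical illustration of the uniform convergence in (2) (this sketch is an added aside, not part of the original exposition), one can compare Taylor polynomials of \(\sin\) about \(p=0\) with the true function on a bounded interval and watch the sup-norm error shrink as the degree grows.

```python
# Numerical illustration of Eq. (2): the degree-n Taylor polynomial of sin
# about p = 0 converges uniformly to sin on a bounded interval [a, b].
import math

import numpy as np


def taylor_sin(x, n):
    """Degree-n Taylor polynomial of sin about 0: sum of (-1)^k x^(2k+1)/(2k+1)!."""
    total = np.zeros_like(x, dtype=float)
    for k in range(n // 2 + 1):
        j = 2 * k + 1
        if j > n:
            break
        total += (-1) ** k * x ** j / math.factorial(j)
    return total


xs = np.linspace(-math.pi, math.pi, 1001)   # the bounded interval [a, b]
for n in (1, 3, 5, 7, 9, 11):
    sup_error = np.max(np.abs(taylor_sin(xs, n) - np.sin(xs)))
    print(f"degree {n:2d}: sup error on [-pi, pi] = {sup_error:.2e}")
```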
### \(\mathcal{P}\)-Learnability The familiar notion of probably approximately correct (PAC) learnability guarantees generalization performance (precision) of an algorithm for a hypothesis class \(\mathcal{H}\subset\mathcal{Y}^{\mathcal{X}}\) with high enough probability (confidence), provided enough data is fed into a learning algorithm ([2], [6], and [7]). Both precision and confidence are arbitrarily specifiable, and the varying dependent parameter is sample complexity (amount of data required) to satisfy these specifications. In this framework, satisfaction is guaranteed with fixed sample complexity, independent of distribution. Formally, for cost function \(c_{(\cdot)}:\mathcal{H}\to\mathbb{R}^{\mathcal{X}\times\mathcal{Y}}\), a hypothesis class \(\mathcal{H}\) is PAC learnable if there is an algorithm \(\hat{g}:(\mathcal{X}\times\mathcal{Y})^{m}\to\mathcal{H}\) and sample complexity \(\mu:(0,1)^{2}\to\mathbb{N}\) for which \[\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{m}}\left(\mathbb{E}(c_{\hat{g}(\cdot)})-\inf_{g\in\mathcal{H}}\mathbb{E}(c_{g})>\varepsilon\right)<\delta\] whenever \(m\geq\mu(\varepsilon,\delta)\), for all measures \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\) on \(\mathcal{X}\times\mathcal{Y}\). This formal notion of learning is particularly suitable in the context of classification \(\hat{g}:\mathcal{X}\to\{0,1\}\) with cost function \(c_{\hat{g}}(x)\coloneqq\mathbbm{1}_{y\neq\hat{g}(x)}\). Figure 1: Taylor polynomials for \(\sin\) In this case, the expected cost is bounded above by \(1\), and it is reasonable to seek convergence uniform over measures. With regression, by contrast, cost may grow without bound, \(\sup_{x\in\mathcal{X}}c_{g}(x)=\infty\), and as such speed of learning (\(\mu(\cdot,\cdot)\) from PAC) may very much be measure-dependent. In any event, often a primary aim in learning is to ensure that given any measure \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\), we can well model _it_ with enough data, not: given enough data, we can well model _any_ measure. Some tasks just are harder to learn than others. Moreover, learnability may be possible only for certain measures. We collect this observation into a nonuniform notion of learning ([1]): **Definition 2.1**.: Let \(\mathcal{P}\) be a collection of probability measures on \(\mathcal{X}\times\mathcal{Y}\). We say that a hypothesis class \(\mathcal{H}\subset\mathcal{Y}^{\mathcal{X}}\) is _\(\mathcal{P}\)-learnable_ if there is sample complexity \(\mu:(0,1)^{2}\to\mathbb{N}\) and algorithm \(\hat{g}:(\mathcal{X}\times\mathcal{Y})^{m}\to\mathcal{H}\) for which \[\mathbb{P}_{(\mathcal{X}\times\mathcal{Y})^{m}}\left(\mathbb{E}(c_{\hat{g}(\cdot)})-\inf_{g\in\mathcal{H}}\mathbb{E}(c_{g})>\varepsilon\right)<\delta\] whenever \(m>\mu(\varepsilon,\delta)\) and \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\in\mathcal{P}\). We say that \(\mathcal{H}\) is _\(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\)-learnable_ if \(\mathcal{H}\) is \(\{\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\}\)-learnable, and that \(\mathcal{H}\) is _\(\mathcal{P}\)-nonuniform learnable_ if \(\mathcal{H}\) is \(\{\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\}\)-learnable for each \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\in\mathcal{P}\). Compare this definition with a similar notion of nonuniform learnability in [2]. 
**Remark 2.1**.: Observe again a difference in quantification from PAC, where we allow specification of the probability measure before the sample complexity. This is not an accident: we want a notion of learning which is rich enough to allow for learning of each task, and still makes sense of generalization, but leaves open the possibility that some tasks may require more data than others. We will impose some restriction on measures (definition 2.1) for the statement and proof of the main theorem (theorem 1). Specifically, we require that the marginal \(\mathbb{P}_{\mathcal{X}}\) be _subgaussian_ (definition 2.2), and that the conditional \(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}\) be deterministic for some subexponential real analytic function (definition 2.3), for all \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\in\mathcal{P}\). **Definition 2.2**.: A probability measure \(\mathbb{P}_{\mathbb{R}}\) on \(\mathbb{R}\) is said to be _subgaussian_ if there is \(c>0\) for which \[\mathbb{P}_{\mathbb{R}}\left(|x|>T\right)\leq e^{-cT^{2}}\] for all \(T\geq 0\) ([8]). The subgaussian condition translates loosely as "the measure does not have fat tails." Subgaussian measures are fairly common: gaussian measures are subgaussian, as is any measure whose support is bounded (e.g. a uniform measure on \([a,b]\subset\mathbb{R}\)), and many engineering techniques rely on gaussianity of some underlying process (Kalman filters, e.g.). Subexponentiality of a real analytic function guarantees that the function may be bounded by an exponential one: **Definition 2.3**.: A real analytic function \(f:\mathbb{R}\rightarrow\mathbb{R}\) is subexponential if, for Taylor expansion \(f=\sum_{n=0}^{\infty}a_{n}(\cdot)^{n}\), there is some \(K>0\) for which \(n!|a_{n}|\leq K^{n}\) for all \(n\in\mathbb{N}\). With this setup, we proceed to the learning theorem. ## 3 Taylor Learning Theorem In this section, we state, discuss, and prove theorem 1, which provides a \(\mathcal{P}\)-nonuniform learnability guarantee for a hypothesis class of polynomials. In section 3.1 we introduce the main theorem, and outline its proof. We then state three lemmas and use them to prove the main result. Finally, in section 3.2, we provide proofs of the lemmas. ### The Theorem In what follows, we take the cost function to be \(c_{\hat{g}}(x,y):=\big{|}y-\hat{g}(x)\big{|}\). **Theorem 1**.: Let \(\mathcal{P}\) be a class of measures \(\mathbb{P}_{\mathcal{X}\times\mathcal{Y}}\) on \(\mathcal{X}\times\mathcal{Y}\) satisfying the following two properties: 1. The marginal \(\mathbb{P}_{\mathcal{X}}\) is subgaussian (definition 2.2), and 2. the conditional \(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}\) is deterministic: \(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}(y=f(x)|x)=1\) for some real-analytic subexponential function \(f:\mathbb{R}\to\mathbb{R}\) (definition 2.3). Let \(\mathcal{H}=\mathbb{R}[x]\) be the collection of polynomial functions. Then \(\mathcal{H}\) is \(\mathcal{P}\)-nonuniform learnable (definition 2.1). **Remark 3.1**.: By condition 2 in the statement of theorem 1, we may write the cost as \(c_{\hat{g}}(x)\) to denote \(|y-\hat{g}(x)|\) under the supposition that \(y=y(x)\). We thereby express the expectation of cost as \(\mathbb{E}(c_{\hat{g}})=\int_{\mathcal{X}}c_{\hat{g}}(x)\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\) because \(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}(y|x)=\mathbb{1}_{y=f(x)}\). 
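As a purely illustrative aside (not the construction used in the proof that follows), the learning scheme can be sketched numerically: sample noiseless pairs \((x,f(x))\) with a subgaussian marginal, keep the samples that land near the expansion point \(0\), estimate the first few Taylor coefficients from them, and estimate the expected absolute error of the resulting polynomial by Monte Carlo. The target \(f=\sin\), the Gaussian marginal, the window width, and the degree are all illustrative choices, and the local least-squares fit stands in for the finite-difference derivative estimates of lemma 3.3 below.

```python
# Illustrative sketch of the Taylor-learning idea (not the construction in the
# proof): estimate Taylor coefficients at p = 0 from noiseless samples (x, f(x))
# that land near 0, then evaluate the expected cost E|f(x) - g(x)| by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
f = np.sin                      # a subexponential real analytic target, y = f(x)

# Sample from a subgaussian marginal P_X (standard normal here).
m = 200_000
x = rng.standard_normal(m)
y = f(x)

# Keep samples in a small window around the expansion point p = 0
# (lemmas 3.1 and 3.2 guarantee such samples appear with high probability).
h = 0.05
mask = np.abs(x) < h

# Estimate a degree-N polynomial at 0 from the local samples; a local
# least-squares fit is used as a stand-in for finite-difference derivative
# estimates (lemma 3.3).
N = 5
coeffs = np.polyfit(x[mask], y[mask], deg=N)
g = np.poly1d(coeffs)

# Monte Carlo estimate of the expected cost E|y - g(x)| over fresh samples.
x_test = rng.standard_normal(50_000)
print("estimated expected |f - g| error:", np.mean(np.abs(f(x_test) - g(x_test))))
```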
We outline the proof of theorem 1 as follows. To bound the expected cost, we split the integral into two components: a tail component consisting of the integral from some arbitrarily large endpoint to infinity (similarly for the negative-axis tail), and a body component consisting of the integral on a compact domain. We then show that each can be made arbitrarily small. We start with the tail and use subgaussianity together with exponential boundedness of \(y\)--which implies that \(|y(x)|\leq e^{K|x|}\)--to show that the tail integral \(\int_{T}^{\infty}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\) converges, and therefore can be made arbitrarily small with sufficiently large endpoint \(T\). Then we are left to deal with the body component \(\int_{-T}^{T}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\). Bounding this integral amounts to bounding the integrand: \[\int_{-T}^{T}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\leq 2T\sup_{x\in[-T,T]}\big{|}y(x)-\hat{y}(x)\big{|}.\] The term \(|y(x)-\hat{y}(x)|\) is further bounded by \[\sum_{j=0}^{N}\frac{\big{|}y^{(j)}(0)-\hat{y}^{(j)}(0)\big{|}}{j!}|x|^{j}+\left|\sum_{j=N+1}^{\infty}\frac{y^{(j)}(0)}{j!}x^{j}\right|, \tag{3}\] the sum of differences of Taylor polynomials and the tail of a Taylor series. The tail of the series can be made arbitrarily small by taking enough terms for the Taylor polynomial (\(y(x)\) is real analytic). Up to this point, our argument makes no reference to or use of sampling data. It is in finally bounding the error of the derivative approximations \(\hat{y}^{(j)}(0)\) that "learning from data" occurs. For this bound we need a collection of lemmas, interesting in their own right. The first one (lemma 3.1) says that we can, with arbitrarily high probability, guarantee that an arbitrary number of sampled data points fall in a positive-probability event, provided we sample enough. The next lemma (lemma 3.2) guarantees that we can make use of lemma 3.1: there is a point around which every interval has positive probability measure. And the last one (lemma 3.3) ensures that we can arbitrarily well approximate any \(n\)-th order derivative as long as we take enough function evaluations close enough to the point to be approximated. Taken together, lemmas 3.1, 3.2, and 3.3 imply that with arbitrarily high probability we may arbitrarily well approximate any \(n\)-th order derivative.

**Lemma 3.1**.: Suppose \(\mathbb{P}_{\mathcal{X}}\big{(}(p-h,p+h)\big{)}>0\), and let \(m\in\mathbb{N}\), \(\delta>0\). Then there exists \(M\in\mathbb{N}\) such that for \(x_{1},\dots,x_{M}\sim_{\mathrm{iid}}\mathbb{P}_{\mathcal{X}}\) independently sampled data points, at least \(m\) of them \(x_{i_{1}},\dots,x_{i_{m}}\in(p-h,p+h)\) with probability at least \(1-\delta\).

**Remark 3.2**.: Formally writing this statement as an event, \[\mathbb{P}_{\mathcal{X}^{M}}\left(\bigcup_{1\leq i_{1}<\dots<i_{m}\leq M}\big{\{}(x_{1},\dots,x_{M})\in\mathcal{X}^{M}:(x_{i_{1}},\dots,x_{i_{m}})\in(p-h,p+h)^{m}\big{\}}\right)\geq 1-\delta.\]

The next lemma guarantees the general applicability of lemma 3.1.

**Lemma 3.2**.: Let \(\mathbb{P}_{\mathcal{X}}\) be a probability measure on \(\mathbb{R}\). Then there is a point \(p\in\mathbb{R}\) such that \[\mathbb{P}_{\mathcal{X}}\big{(}(p-h,p+h)\big{)}=\int_{p-h}^{p+h}\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)>0\] for every \(h>0\).

**Lemma 3.3**.: Let \(f:\mathbb{R}\to\mathbb{R}\) be a real analytic function.
For every \(\varepsilon>0\) and \(n\in\mathbb{N}\), \(f^{(n)}(0)\) may be estimated to arbitrary precision with \(n+1\) sampled data points \(f(h_{0}),\ldots,f(h_{n})\), i.e. \[f^{(n)}(0)=\sum_{j=0}^{n}a_{j}f(h_{j})+o(h)\] with \(\sup_{i,j}|h_{j}-h_{i}|\leq Ch\) for some constant \(C\). In fact, a tighter error term \(o(h^{p})\) may be achieved by taking at least \(n+p\) sampled points \(h_{0},\ldots,h_{n+p-1}\) ([9], see also [10], [11], [12]).

Having introduced the lemmas, we now prove theorem 1.

Proof of theorem 1.: Suppose first, without loss of generality, that \(0\) is a point which satisfies lemma 3.2, namely \(\mathbb{P}_{\mathcal{X}}\big{(}(-\gamma,\gamma)\big{)}>0\) for each \(\gamma>0\). Fix \(\delta,\varepsilon>0\). We wish to show that there is an integer \(M_{\delta,\varepsilon}>0\) such that sampling \(x_{1},\ldots,x_{M}\sim_{\mathrm{iid}}\mathbb{P}_{\mathcal{X}}\) data points guarantees, with probability at least \(1-\delta\), a bound on the error \[\int_{-\infty}^{\infty}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=\int_{-\infty}^{\infty}|y(x)-\hat{y}(x)|\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)<\varepsilon. \tag{4}\] We split the integral in (4) into three components: \[\int_{-\infty}^{\infty}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=\int_{-\infty}^{-T}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)+\int_{-T}^{T}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)+\int_{T}^{\infty}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\] and bound each. Since the first and third are conceptually identical, we present the argument bounding the third. By assumption there is some bound \(K\) such that \(|y^{(j)}(0)|\leq K^{j}\) for all \(j\geq 0\). Therefore, \[|y(x)-\hat{y}(x)|\leq|y(x)|+|\hat{y}(x)|=\left|\sum_{j=0}^{\infty}\frac{y^{(j)}(0)}{j!}x^{j}\right|+|\hat{y}(x)|\leq\sum_{j=0}^{\infty}\frac{|y^{(j)}(0)|}{j!}|x|^{j}+|\hat{y}(x)|\leq\sum_{j=0}^{\infty}\frac{|Kx|^{j}}{j!}+|\hat{y}(x)|=\mathrm{e}^{K|x|}+|\hat{y}(x)|.\] A similar bound can be made for \(|\hat{y}(x)|\), so that, up to enlarging \(K\), we may bound \(c_{\hat{y}}(x)\leq\mathrm{e}^{K|x|}\) for all \(|x|>T\). As \(\mathbb{P}_{\mathcal{X}}\) is subgaussian, we claim that the integral \[\int_{T}^{\infty}e^{K|x|}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)<\infty \tag{5}\] converges, so that \[\lim_{T\to\infty}\int_{T}^{\infty}e^{K|x|}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=0,\] and each tail can be bounded by \(\varepsilon/4\) for sufficiently large \(T\). To show convergence of the integral in eq. (5), without loss of generality suppose that \(T=0\) and observe that for any \(a\geq 0\), \(a=\int_{0}^{a}\mathrm{d}t=\int_{0}^{\infty}\mathbb{1}_{a\geq t}\,\mathrm{d}t\). We then write \[\int_{0}^{\infty}e^{Kx}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=\int_{0}^{\infty}\int_{0}^{\infty}\mathbb{1}_{e^{Kx}\geq t}\,\mathrm{d}t\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=\int_{0}^{\infty}\int_{0}^{\infty}\mathbb{1}_{e^{Kx}\geq t}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\,\mathrm{d}t=\int_{0}^{\infty}\mathbb{P}_{\mathcal{X}}\big{(}e^{Kx}\geq t\big{)}\,\mathrm{d}t \tag{6}\] where the second equality follows by Tonelli, and the third since \(\mathbb{E}(\mathbb{1}_{E})=\mathbb{P}(E)\). Now fix \(p>1\) and \(c\) as in definition 2.2, let \(t^{*}=\max\big{\{}1,e^{\frac{K^{2}p}{c}}\big{\}}\), and observing that \[-c\left(\frac{\log(t)}{K}\right)^{2}=-c\left(\frac{\log(t)}{K^{2}}\right)\log(t)=-\log\left(t^{c\frac{\log(t)}{K^{2}}}\right), \tag{7}\] we split the last integral in eq.
(6) and bound it as \[\int_{0}^{t^{*}}\mathbb{P}_{\mathcal{X}}\big{(}e^{Kx}\geq t\big{)}\,\mathrm{d}t+\int_{t^{*}}^{\infty}\mathbb{P}_{\mathcal{X}}\big{(}e^{Kx}\geq t\big{)}\,\mathrm{d}t\leq t^{*}+2\int_{t^{*}}^{\infty}e^{-c\left(\frac{\log(t)}{K}\right)^{2}}\,\mathrm{d}t=t^{*}+2\int_{t^{*}}^{\infty}\frac{1}{t^{c\frac{\log(t)}{K^{2}}}}\,\mathrm{d}t. \tag{8}\] The inequality holds by the bound \(\mathbb{P}(E)\leq 1\) and subgaussianity of \(\mathbb{P}_{\mathcal{X}}\) (definition 2.2), and the equality by eq. (7). Finally, for \(t>t^{*}\), \(t^{c\frac{\log(t)}{K^{2}}}\geq t^{p}\), so the final integral in eq. (8) is bounded above as \[\int_{t^{*}}^{\infty}\frac{1}{t^{c\frac{\log(t)}{K^{2}}}}\,\mathrm{d}t\leq\int_{t^{*}}^{\infty}\frac{1}{t^{p}}\,\mathrm{d}t,\] which converges by the integral \(p\)-test, proving that the integral (5) converges and indeed \[\lim_{T\to\infty}\int_{T}^{\infty}e^{Kx}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)=0. \tag{9}\] We now bound the integral \[\int_{-T}^{T}c_{\hat{y}}(x)\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\leq\int_{-T}^{T}\sum_{j=0}^{N}\frac{\big{|}y^{(j)}(0)-\hat{y}^{(j)}(0)\big{|}}{j!}|x|^{j}+\left|\sum_{j=N+1}^{\infty}\frac{y^{(j)}(0)}{j!}x^{j}\right|\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x), \tag{10}\] recalling eq. (3), where \(T\) is chosen so that the sum of the tail integrals is bounded by \(\varepsilon/2\) from (9). We write eq. (10) as \[\underbrace{\int_{-T}^{T}\sum_{j=0}^{N}\frac{\big{|}y^{(j)}(0)-\hat{y}^{(j)}(0)\big{|}}{j!}|x|^{j}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)}_{I_{1}(N)}+\underbrace{\int_{-T}^{T}\left|\sum_{j=N+1}^{\infty}\frac{y^{(j)}(0)}{j!}x^{j}\right|\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)}_{I_{2}(N)},\] and bound each \(I_{j}(N)\) individually. On compact \([-T,T]\) the convergence of the Taylor series of \(y:\mathbb{R}\to\mathbb{R}\) is uniform (see eq. (2)), which in particular implies that \[\lim_{N\to\infty}\sup_{x\in[-T,T]}\left|\sum_{j=N+1}^{\infty}\frac{y^{(j)}(0)}{j!}x^{j}\right|=0,\] and hence that \(\lim_{N\to\infty}I_{2}(N)=0\). Select \(N\) so that \(I_{2}(N)<\varepsilon/4\); then we are left with bounding \(I_{1}(N)\). Up to this point, we have not used the fact that \(\hat{y}=\hat{y}_{S}\) is a model constructed from data. Indeed, we have yet to specify how \(\hat{y}\) is constructed. To bound the estimate \(I_{1}(N)\), we cite lemma 3.2, lemma 3.1, and lemma 3.3. First, we isolate the terms to bound: \[\int_{-T}^{T}\sum_{j=0}^{N}\frac{|y^{(j)}(0)-\hat{y}^{(j)}(0)|}{j!}|x|^{j}\,\mathrm{d}\mathbb{P}_{\mathcal{X}}(x)\leq 2T\sup_{x\in[-T,T]}\sum_{j=0}^{N}\frac{|y^{(j)}(0)-\hat{y}^{(j)}(0)|}{j!}|x|^{j}\leq 2T^{N+1}\sum_{j=0}^{N}\frac{|y^{(j)}(0)-\hat{y}^{(j)}(0)|}{j!}\leq 2T^{N+1}(N+1)\cdot\max_{j=0,\ldots,N}\frac{|y^{(j)}(0)-\hat{y}^{(j)}(0)|}{j!},\] where we use \(\mathbb{P}_{\mathcal{X}}([-T,T])\leq 1\) for the first inequality and suppose without loss of generality that \(T\geq 1\) in the second.
Thus we must show that the bound \[\max_{j=0,\ldots,N}\frac{|y^{(j)}(0)-\hat{y}^{(j)}(0)|}{j!}\leq\tilde{\varepsilon}\coloneqq\frac{\varepsilon}{8(N+1)T^{N+1}}\] is achievable in probability, a bound which is certainly satisfied if for each \(j=0,\ldots,N\), \[|y^{(j)}(0)-\hat{y}^{(j)}(0)|\leq\tilde{\varepsilon}.\] Indeed, lemma 3.3 guarantees that this bound may be achieved with enough data sampled, say \(m\) points, in an interval \((-\delta_{j}(\tilde{\varepsilon}),\delta_{j}(\tilde{\varepsilon}))\), lemma 3.2 guarantees the existence of a point such that every interval about it has positive probability (we suppose without loss of generality that \(0\) is such a point), and lemma 3.1 guarantees the existence of \(M>0\) for which \(m\) points \(x_{i_{1}},\ldots,x_{i_{m}}\) of \(x_{1},\ldots,x_{M}\) are, with probability at least \(1-\delta\), in \((-\delta_{j}(\tilde{\varepsilon}),\delta_{j}(\tilde{\varepsilon}))\).

**Remark 3.3**.: The restrictions on the class of functions (subexponential) and the marginal distribution (subgaussian) are not particularly strong, but in nearly every engineering application the conditional probability \(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}\) will not be deterministic (\(\mathbb{P}_{\mathcal{Y}|\mathcal{X}}(y_{i}=y(x_{i})|x_{i})\neq 1\)), because the underlying process is inherently non-deterministic or--what amounts to much of the same--because there is noise in measurement. In such a case, any realization of noise (as a random variable) will (most likely) be everywhere discontinuous, and therefore not smooth, and very much therefore not real analytic. This observation does not automatically render theorem 1 inapplicable to engineering problems. A class of engineering's accomplishments is the development of sensors: with one we may directly observe e.g. position, and with another directly observe acceleration. To the extent that mathematics can make requests on the engineering community, we here have another instance: more sensors to directly measure higher order derivatives may directly inform and support our algorithms.

### Proofs of Lemmas

Having proved the theorem, we now provide the proofs of the lemmas.

Proof of lemma 3.1.: Set \(\gamma:=\mathbb{P}_{\mathcal{X}}\big{(}(p-h,p+h)\big{)}\) and let \(E\subset\mathbb{R}^{M}\) be the event that at most \(m-1\) coordinates \(x_{j_{1}},\ldots,x_{j_{m-1}}\) are contained in \((p-h,p+h)\): \[E:=\big{\{}q\in\mathbb{R}^{M}:\text{ at most }m-1\text{ coordinates }q_{i_{1}},\ldots,q_{i_{m-1}}\in(p-h,p+h)\big{\}},\] and \(E_{k}\) the event that exactly \(k\) coordinates \(x_{j_{1}},\ldots,x_{j_{k}}\) lie in \((p-h,p+h)\). Let \(\mathbb{P}_{\mathcal{X}^{M}}\) denote the product (independent) probability measure (induced by \(\mathbb{P}_{\mathcal{X}}\)) on \(\mathbb{R}^{M}\) and observe that since \(E=\bigsqcup_{j=0}^{m-1}E_{j}\) is a disjoint union, we have \[\mathbb{P}_{\mathcal{X}^{M}}(E)=\sum_{j=0}^{m-1}\mathbb{P}_{\mathcal{X}^{M}}(E_{j})=\sum_{j=0}^{m-1}\binom{M}{j}\left(\mathbb{P}_{\mathcal{X}}\left(\big{\{}x\not\in(p-h,p+h)\big{\}}\right)\right)^{M-j}\left(\mathbb{P}_{\mathcal{X}}\left(\big{\{}x\in(p-h,p+h)\big{\}}\right)\right)^{j}=\sum_{j=0}^{m-1}\binom{M}{j}\left(1-\gamma\right)^{M-j}\gamma^{j}\leq mK(1-\gamma)^{M-m}\cdot 1\leq c\,m\,M^{m}(1-\gamma)^{M}\xrightarrow{M\to\infty}0,\] where \(K=\max\left\{\binom{M}{j}:j=0,\ldots,m-1\right\}\leq M^{m}\) (which maximum will in general be realized at \(j=m-1\) when \(M\gg m\)) and the constant \(c\) absorbs the factor \((1-\gamma)^{-m}\), which does not depend on \(M\). Choosing \(M\) large enough that this probability is at most \(\delta\) proves the claim.
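Before proving lemma 3.2, a small Monte Carlo illustration of lemma 3.1 (ours; the measure, interval and target count are arbitrary choices): the probability of the bad event \(E\), that fewer than \(m\) of the \(M\) samples land in \((p-h,p+h)\), decays to zero as \(M\) grows, just as the proof above shows.

```python
import numpy as np

rng = np.random.default_rng(1)
p, h, m = 0.0, 0.1, 5            # interval (p - h, p + h) and required number of points m
n_trials = 20_000                # Monte Carlo repetitions; x_i ~ iid standard normal

for M in (20, 50, 100, 200, 400):
    x = rng.normal(0.0, 1.0, size=(n_trials, M))
    hits = np.sum(np.abs(x - p) < h, axis=1)     # samples falling in the interval
    print(f"M = {M:4d}   estimated P(fewer than {m} hits) = {np.mean(hits < m):.4f}")
```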
Proof of lemma 3.2.: By continuity of measure ([13]), there is some bounded interval \(I=[a,b]\subset\mathbb{R}\) with \(\mathbb{P}_{\mathcal{X}}(I)>0\). We argue inductively by selecting a descending sequence of positive-measure sets: at stage \(j=0\), \(I_{j}=I\), with positive measure. At stage \(j+1\), we have an interval \(I_{j}=[a_{j},b_{j}]\), and let \(A_{j}=\left[a_{j},\frac{a_{j}+b_{j}}{2}\right]\) and \(B_{j}=\left[\frac{a_{j}+b_{j}}{2},b_{j}\right]\). Since \(\mathbb{P}_{\mathcal{X}}(I_{j})>0\), at least one of \(\mathbb{P}_{\mathcal{X}}(A_{j})>0\) or \(\mathbb{P}_{\mathcal{X}}(B_{j})>0\) holds, and we set \(I_{j+1}\) to be one of the two halves with positive measure. Observe that \(I_{j+1}\subset I_{j}\) and that we therefore have a decreasing sequence \[I_{0}\supset I_{1}\supset\cdots\supset I_{j}\supset I_{j+1}\supset\cdots\] of nonempty closed sets. By Cantor's Intersection Theorem ([14]), there is at least one point \[p\in\bigcap_{j=0}^{\infty}I_{j}.\] Since the lengths of the \(I_{j}\) halve at each stage, any interval \((p-h,p+h)\) about \(p\), for \(h>0\), contains at least one \(I_{j}\) (in fact, infinitely many), and we conclude that \(\mathbb{P}_{\mathcal{X}}\big{(}(p-h,p+h)\big{)}>0\).
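The proof of lemma 3.3 is not reproduced in this excerpt, but the estimator it describes is easy to demonstrate: interpolate \(f\) at \(n+1\) points clustered in \([-h,h]\) with a degree-\(n\) polynomial and take \(n!\) times its leading coefficient as the estimate of \(f^{(n)}(0)\). The sketch below (ours; the choices \(f=\sin\), \(n=3\) and the values of \(h\) are arbitrary) shows the error shrinking as \(h\) decreases.

```python
import math
import numpy as np

def estimate_nth_derivative(f, n, h):
    """Estimate f^(n)(0) from n + 1 samples spread over [-h, h]: the interpolating
    degree-n polynomial has n-th derivative at 0 equal to n! times its leading coefficient."""
    nodes = np.linspace(-h, h, n + 1)
    coeffs = np.polyfit(nodes, f(nodes), deg=n)   # exact interpolation: n+1 points, degree n
    return math.factorial(n) * coeffs[0]

true_value = -1.0                                  # sin'''(0) = -cos(0) = -1
for h in (0.4, 0.1, 0.025):
    est = estimate_nth_derivative(np.sin, 3, h)
    print(f"h = {h:5.3f}   estimate = {est:+.6f}   error = {abs(est - true_value):.2e}")
```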
2308.14822
CASCO: Cosmological and AStrophysical parameters from Cosmological simulations and Observations -- I. Constraining physical processes in local star-forming galaxies
We compare the structural properties and dark matter content of star-forming galaxies taken from the CAMELS cosmological simulations to the observed trends derived from the SPARC sample in the stellar mass range $[10^{9}, 10^{11}]\,\textrm{M}_{\odot}$, to provide constraints on the value of cosmological and astrophysical (SN- and AGN-related) parameters. We consider the size-, internal DM fraction-, internal DM mass- and total-stellar mass relations for all the 1065 simulations from the IllustrisTNG, SIMBA and ASTRID suites of CAMELS, and search for the parameters that minimize the $\chi^{2}$ with respect to the observations. For the IllustrisTNG suite, we find the following constraints for the cosmological parameters: $\Omega_{\textrm{m}} = 0.27_{-0.05}^{+0.01}$, $\sigma_{8} = 0.83_{-0.11}^{+0.08}$ and $S_{8} = 0.78_{-0.09}^{+0.03}$, which are consistent within $1\sigma$ with the results from the nine-year WMAP observations. SN feedback-related astrophysical parameters, which describe the departure of outflow wind energy per unit star formation rate and wind velocity from the reference IllustrisTNG simulations, assume the following values: $A_{\textrm{SN1}} = 0.48_{-0.16}^{+0.25}$ and $A_{\textrm{SN2}} = 1.21_{-0.34}^{+0.03}$, respectively. Therefore, simulations with a lower value of outflow wind energy per unit star formation rate with respect to the reference illustrisTNG simulation better reproduce the observations. Simulations based on SIMBA and ASTRID suites predict central dark matter masses substantially larger than those observed in real galaxies, which can be reconciled with observations only by requiring values of $\Omega_{\textrm{m}}$ inconsistent with cosmological constraints for SIMBA, or simulations characterized by unrealistic galaxy mass distributions for ASTRID.
Valerio Busillo, Crescenzo Tortora, Nicola R. Napolitano, Leon V. E. Koopmans, Giovanni Covone, Fabrizio Gentile, Leslie K. Hunt
2023-08-28T18:20:21Z
http://arxiv.org/abs/2308.14822v2
CASCO: Cosmological and AStrophysical parameters from Cosmological simulations and Observations - I. Constraining physical processes in local star-forming galaxies ###### Abstract We compare the structural properties and dark matter content of star-forming galaxies taken from the camels cosmological simulations to the observed trends derived from the SPARC sample in the stellar mass range [\(10^{9}\), \(10^{11}\)] M\({}_{\odot}\), to provide constraints on the value of cosmological and astrophysical (SN- and AGN-related) parameters. We consider the size-, internal DM fraction-, internal DM mass- and total-stellar mass relations for all the 1065 simulations, all having different cosmological and astrophysical parameters, from the IllustrisTNG, SIMBA and ASTRID suites of camels, and search for the parameters that minimize the \(\chi^{2}\) with respect to the observations. For the IllustrisTNG suite, we find the following constraints for the cosmological parameters: \(\Omega_{\rm m}=0.27^{+0.01}_{-0.05}\), \(\sigma_{8}=0.83^{+0.08}_{-0.11}\) and \(S_{8}=0.78^{+0.03}_{-0.09}\), which are consistent within \(1\sigma\) with the results from the nine-year WMAP observations. SN feedback-related astrophysical parameters, which describe the departure of outflow wind energy per unit star formation rate and wind velocity from the reference IllustrisTNG simulations, assume the following values: \(A_{\rm SN1}=0.48^{+0.25}_{-0.16}\) and \(A_{\rm SN2}=1.21^{+0.03}_{-0.34}\), respectively. Therefore, simulations with a lower value of outflow wind energy per unit star formation rate with respect to the reference illustrisTNG simulation better reproduce the observations. Variation of AGN feedback parameters, on the other hand, show negligible effects on the scaling relation trends in the mass range probed. Simulations based on SIMBA and ASTRID suites predict central dark matter masses substantially larger than those observed in real galaxies, which can be reconciled with observations only by requiring values of \(\Omega_{\rm m}\) inconsistent with cosmological constraints for SIMBA, or simulations characterized by unrealistic galaxy mass distributions for ASTRID. keywords: galaxies: formation - galaxies: evolution - dark matter - methods: numerical ## 1 Introduction In the \(\Lambda\)CDM paradigm of structure formation, the large scale structure of the Universe (LSS) originates from tiny random fluctuations of the primordial dark matter density field, which are suppressed or grow according to various properties of these primordial overdensities, such as their scale. These growing fluctuations of dark matter then collapse from the ambient background, and start accreting primordial gas within them, sparking the formation of the primordial galaxies. In a 'bottom-up' scenario of galactic formation, these primordial objects then start to merge with one another under the influence of gravity, gradually forming the most massive structures of the universe, such as galaxies and cluster of galaxies (Springel et al., 2001). 
Scaling relations are a result of the physics behind galaxy formation and evolution: if gravity is the predominant process, then theoretical models predict simple scaling relations between various basilar halo properties, such as the Tully-Fisher relation (Tully & Fisher, 1977), which relates the rotational velocity of spiral galaxies, \(V_{\rm max}\), to their intrinsic luminosity, \(L\), which is itself proportional to mass (baryonic Tully-Fisher relation, McGaugh et al., 2000); the Faber-Jackson relation (Faber & Jackson, 1976), which relates the central velocity dispersion of passive galaxies with their intrinsic luminosity; and the Fundamental Plane (Djorgovski & Davis, 1987), a three-dimensional manifold which relates effective radius \(R_{\rm e}\), mean surface brightness at \(r=R_{\rm e}\) and central velocity dispersion, \(\sigma_{\rm c}\), for passive galaxies. There is general consensus (McNamara and Nulsen, 2007; Dutton and van den Bosch, 2009) that secondary, baryonic processes, such as active galactic nuclei (AGN) and supernovae (SN) feedback, need to be included in order to correctly reproduce the observed relations between galaxy parameters. Outflows driven by stellar winds and SN explosions are expected to dominate in the lower-mass regime, while at higher masses, outflows tend to be powered by feedback from active galactic nuclei (Tremonti et al., 2004; Zahid et al., 2014; Tortora et al., 2019; Lara-Lopez et al., 2019). Galactic winds generated by stars and SN, for example, are regulating the baryon cycle, e.g. the star formation and the metallicity in the interstellar medium (Tortora et al., 2022), shaping the "main sequence" correlation between \(M_{*}\) and the star-formation rate (SFR, Brinchmann et al., 2004) and the mass-metallicity relation (MZR, Tremonti et al., 2004). AGN feedback is instead required in massive galaxies to efficiently quench the star formation and make these galaxies passive (Lagos et al., 2008). Various studies have reported deviations for low-mass systems from the trends expected from simple models in which gravity is the only dominant process: these deviations could indicate that non-gravitational processes may significantly impact the evolution of these systems (Gastaldello et al., 2007; Sun et al., 2009; Eckmiller et al., 2011). What is the relative contribution of these processes, however, is still debated. These findings have triggered a renewed interest in turning to cosmological simulations, to try and take such processes into account, for example succeeding in simulating the feedback between the central super-massive black hole (SMBH) of a galaxy and its global properties (Puchwein et al., 2008). In this context, simulations prove to be very useful tools. They can be used for instance in conjunction to machine learning algorithms to predict galaxy properties via the use of scaling relations (Shao et al., 2022), and the best cosmological parameter combinations given the physical properties of a sample of galaxy clusters (Qiu et al., 2023) or for single galaxies as in Villaescusa-Navarro et al. (2022) and Echeverri et al. (2023), or to determine the effects of feedback mechanisms on the morphology of galaxies (Okamoto et al., 2005), on the relation between total mass density profile and dark matter fraction within the half-mass radius of galaxies (Remus et al., 2017), on the generation of galactic winds (Hopkins et al., 2012) and on structural and dynamical properties of galaxies (Irodotou et al., 2022). 
The variation of scaling relation trends with the underlying cosmology and astrophysical recipes proves to be a promising tool for cosmological tests based on simulations, in that cosmological parameters of simulations can be easily modified. Work on comparing observations to simulated data has been performed on the concentration-mass relation (Shao et al., 2022), the baryonic Tully-Fisher relation (Goddy et al., 2023) and dark matter fraction and mass density slope in massive galaxies (Mukherjee et al., 2018, 2021, 2022). But, to our knowledge, there is still no attempt to use these comparisons as a tool to constrain cosmology or astrophysics in the required fine details. This is because simulations such as IllustrisTNG often assume a fixed cosmology and sub-grid parameters, and are also calibrated on some observed relations. The "Cosmology and Astrophysics with Machine Learning Simulations" (camels, Villaescusa-Navarro et al., 2021) cosmological simulations provide for the first time the chance to investigate the impact of a wide range of cosmologies and physical processes on observed scaling relations. the camels simulations do not fix the values of the cosmological and astrophysical parameters, but vary them very finely without requiring any calibration with the observations (except for one of them, the fiducial cosmology). This is done because camels is mainly used to train machine learning algorithms for predicting a certain set of cosmological and astrophysical parameter set from the observations, and as such it is perfectly suited also for standard statistical analysis such as the one proposed in this work. In this paper, we present the project _CASCO: Cosmological and AStrophysical parameters from Cosmological simulations and Observations_. We start testing the predictive power encoded in various scaling relations of star-forming galaxies, by comparing camels simulations to observed trends inferred from the Spitzer Photometry & Accurate Rotation Curves (SPARC, Lelli et al., 2016) sample, to constrain cosmological and astrophysical parameters (SN- and AGN-related ones). We demonstrate the potentiality of this method using half-mass radius and dark-matter-related quantities in local star-forming galaxies, planning to extend the analysis to other galaxy types, redshifts and galaxy parameters in future papers. The paper is organized as follows: in Section 2, we present an overview of the camels simulations, the selection criteria of the simulated galaxies that we will consider in the analysis and the sample of observed SPARC galaxies that we considered for comparison with the simulated data. In Section 3, we compare the scaling relations observed in the simulated data to the respective observed trends, and give constraints for the cosmological and astrophysical parameters. We provide a physical interpretation of the results in Section 4, and give our conclusions in Section 5. ## 2 Observations and Camels Simulations In this section, we describe the data samples used in this paper. in Section 2.1 we introduce the SPARC sample, a catalog of local star-forming galaxies, while in Section 2.2 we introduce the camels cosmological simulations. ### SPARC data The observational data used in the analysis come from the sample of 175 disc galaxies with near-infrared photometry and H i rotation curves (SPARC, Lelli et al., 2016). This sample is neither statistically complete, nor volume-limited, but it is nevertheless representative of the population of disc galaxies in the local Universe. 
The SPARC sample's morphological types range from irregular (Im/BCD) to lenticular (S0), and cover a large range of effective radii (\(\sim 0.3\) to \(\sim 15\) kpc), rotation velocities (\(\sim 20\) to \(\sim 300\) km s\({}^{-1}\)) and gas contents (\(\sim 0.01\) to \(\sim 10\)\(M_{\rm HI}/L_{[3.6\,\mu{\rm m}]}/({\rm M}_{\odot}/{\rm L}_{\odot})\)). The radial velocity curves \(v(r)\) for the galaxies have been obtained based primarily on HI measurements. Total mass enclosed within a sphere of radius \(r\) is determined via the radial velocity curves, using the formula \(M(r)=v^{2}r/G\). Total stellar mass is obtained from the total luminosity assuming a constant stellar mass-to-light ratio at 3.6 \(\mu\)m equal to \(\Upsilon_{*}=0.6\)\(\Upsilon_{\odot}\) (for details, see Tortora et al., 2019). Given that we are comparing these data with simulations, for which only tridimensional structural quantities are available, we cannot perform a comparison by using directly the effective radius, which is a projected quantity. We thus converted the SPARC galaxies' effective radii into stellar half-mass radii, \(R_{*,1/2}\), by multiplying the respective effective radii by a constant factor of \(\sim 1.35\) (Wolf et al., 2010). For a discussion on the impact of fixing \(\Upsilon_{*}\) to \(0.6\,\Upsilon_{\odot}\) and of converting between projected and 3D radii, see Appendix A1. Following Tortora et al. (2019), of the 175 galaxies in the SPARC sample we consider only those with inclinations larger than 30\({}^{\circ}\), because rotation velocities for face-on systems are highly uncertain. This procedure does not introduce a selection bias, because the galaxies' orientation in the sky is random. We also omit from the final sample those galaxies for which the effective radius \(R_{\rm g}\) is not covered by the rotation curve, in order to avoid extrapolations. The final sample thus consists of 152 galaxies, for which we consider the total stellar mass (\(M_{\rm*}\)), the stellar half-mass radius (\(R_{\rm*,1/2}\)) and the total, stellar and gas mass within the stellar half-mass radius (\(M_{\rm 1/2}\), \(M_{\rm*,1/2}\) and \(M_{\rm g,1/2}\), respectively). The dark matter mass within the stellar half-mass radius, \(M_{\rm DM,1/2}\), is obtained by subtracting the stellar and gas mass contributes from \(M_{\rm 1/2}\). Total (virial) masses are taken from Posti et al. (2019), obtained by modelling the rotation curves with a baryonic component plus a Navarro-Frenk-White (Navarro et al., 1996) model for DM. None of the observables depend directly on the cosmological parameters because SPARC is a catalog of local galaxies, for which distances are measured with direct methods. ### camels simulations The simulated galaxy data come from camels, a suite of 6325 cosmological simulations of an Universe volume equal to 25 \(h^{-1}\) Mpc (Villaescusa-Navarro et al., 2021). Approximately half of these are gravity-only N-body simulations, while the other half are hydrodynamical simulations, which are obtained by implementing three different hydrodynamical sub-grid models: IllustrisTNG (Pillepich et al., 2018), SIMBA (Dave et al., 2019) and ASTRID (Bird et al., 2022; Ni et al., 2022). 
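Before detailing the three suites, the following minimal sketch (ours; all input numbers are invented, purely illustrative, and do not correspond to any actual SPARC galaxy) shows how the Section 2.1 quantities are assembled: the enclosed mass from the rotation curve via \(M(r)=v^{2}r/G\), the 3D stellar half-mass radius from the effective radius via the factor \(\sim 1.35\), and the central DM mass by subtraction of the stellar and gas components.

```python
# Illustrative-only reconstruction of the SPARC-derived quantities of Section 2.1.
G = 4.301e-6                 # gravitational constant [kpc (km/s)^2 / M_sun]

R_e_kpc       = 3.0          # effective radius at 3.6 micron (invented value)
v_at_Rhalf    = 150.0        # rotation velocity interpolated at R_{*,1/2} [km/s]
M_star_total  = 2.0e10       # total stellar mass from L_[3.6um] with Upsilon_* = 0.6
M_gas_inner   = 1.5e9        # gas mass within R_{*,1/2} [M_sun]

R_half       = 1.35 * R_e_kpc                     # projected -> 3D radius (Wolf et al. 2010)
M_star_inner = 0.5 * M_star_total                 # by definition of the half-mass radius
M_half       = v_at_Rhalf**2 * R_half / G         # total mass within R_{*,1/2}
M_dm_inner   = M_half - M_star_inner - M_gas_inner
f_dm         = M_dm_inner / M_half

print(f"R_*,1/2  = {R_half:.2f} kpc")
print(f"M_1/2    = {M_half:.3e} M_sun")
print(f"M_DM,1/2 = {M_dm_inner:.3e} M_sun")
print(f"f_DM     = {f_dm:.2f}")
```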
The mass resolution for the dark matter (DM) particles is \(M_{\rm DM,\,min}=6.49\times 10^{7}(\Omega_{\rm m}-\Omega_{\rm b})/0.251\ h^{-1}\,\rm M_{\odot}\) (which, for a simulation having \(\Omega_{\rm m}=0.3\), is equal to \(9.67\times 10^{7}\ \rm M_{\odot}\)), while for the gas particles it is \(M_{\rm g,\,min}=1.89\times 10^{7}\ \rm M_{\odot}\). These values are the same for all camels suites. Galaxies/subhalos are identified using the subfind subhalo finder algorithm (Springel et al., 2001). In camels, the following cosmological parameters are fixed: \(\Omega_{\rm b}=0.049\), \(\Omega_{\rm k}=0\), \(n_{\rm s}=0.9624\), \(h=0.6711\) and \(M_{\nu}=0.0\) eV, where \(h=H_{0}/100\ \rm km\,s^{-1}\,Mpc^{-1}\). The assumed equation of state of dark energy is \(P(\rho)=w\rho\), with \(w=-1\). The values of the matter density parameter, \(\Omega_{\rm m}\), and of the amplitude of the linear matter density fluctuations, \(\sigma_{8}\), are instead free parameters that depend on the particular simulation considered. For all camels simulation suites, a Chabrier (2003) initial mass function (IMF) is assumed. As detailed in Villaescusa-Navarro et al. (2021), for each simulation six parameters are varied: two cosmological parameters (\(\Omega_{\rm m}\), \(\sigma_{8}\)) and four astrophysical parameters (\(A_{\rm SN1}\), \(A_{\rm AGN1}\), \(A_{\rm SN2}\) and \(A_{\rm AGN2}\)), each related to a different astrophysical process. In particular, \(A_{\rm SN1}\) and \(A_{\rm SN2}\) are related to the supernovae feedback mechanisms, while \(A_{\rm AGN1}\) and \(A_{\rm AGN2}\) are related to AGN feedback. It should be noted that the astrophysical parameters have different physical meanings for each of the three suites, and should be considered as completely different parameters. As such, from here on, we will refer to the four astrophysical parameters associated to the SIMBA simulations with a tilde (e.g. \(\tilde{A}_{\rm SN1}\)) and to those of ASTRID with a hat (e.g. \(\hat{A}_{\rm SN1}\)), in order to avoid confusion. For our analysis, we used all three of the hydrodynamical simulation suites, whose specific properties will be discussed in more detail in the following sections.

#### 2.2.1 IllustrisTNG suite

IllustrisTNG utilizes the arepo code (Springel, 2010) to solve the coupled gravity and magneto-hydrodynamics equations for each particle, in addition to sub-grid physics models for astrophysical processes such as star formation, supernovae feedback, growth of supermassive black holes and AGN feedback. The gravitational softening length for dark matter is equal to \(\epsilon_{\rm min}=2\ \rm kpc\) comoving. In the IllustrisTNG suite, the \(A_{\rm SN1}\) and \(A_{\rm SN2}\) parameters both contribute to the wind mass loading factor at injection, \(\eta_{\rm w}:=\dot{M}_{\rm g}/\dot{M}_{\rm SFR}\), where \(\dot{M}_{\rm g}\) is the rate of gas mass inside a galaxy converted into ejected wind mass, and \(\dot{M}_{\rm SFR}\) is the local instantaneous star formation rate. This is an important parameter for describing the effects of galactic winds on the chemical evolution of galaxies, because the wind mass loading characterizes the dominance of bulk outflows over gas accretion, and comes into play in the equilibrium condition between inflows and outflows for a galaxy (Tortora et al., 2022). Following Pillepich et al.
(2018), the wind mass loading in IllustrisTNG can be written as: \[\eta_{\rm w}=\frac{2}{v_{\rm w}^{2}}e_{\rm w}\left(1-\tau_{\rm w}\right), \tag{1}\] where \(\tau_{\rm w}=0.1\) is the thermal fraction and \(e_{\rm w}\) is the galactic wind energy per unit star formation rate, written as: \[e_{\rm w}=A_{\rm SN1}\times\overline{e}_{\rm w}\left[f_{\rm w,Z}+\frac{1-f_{\rm w,Z}}{1+(Z/Z_{\rm w,ref})^{\gamma_{\rm w,Z}}}\right]\times N_{\rm SNII}\,E_{\rm SNII,51}\times 10^{51}\ \rm erg\,M_{\odot}^{-1}, \tag{2}\] with \(Z\) the metallicity of the gas cells, \(\overline{e}_{\rm w}\) the wind energy factor, \(f_{\rm w,Z}\) the \(Z\)-dependence reduction factor, \(Z_{\rm w,\,ref}\) the \(Z\)-dependence reference metallicity, \(\gamma_{\rm w,Z}\) the \(Z\)-dependence reduction power, \(N_{\rm SNII}\) the number of SNII per formed stellar mass and \(E_{\rm SNII,51}\) the available energy per core-collapse SN in units of \(10^{51}\) erg, as reported in Pillepich et al. (2018), while \(v_{\rm w}\) is the galactic wind speed at injection, given by1: Footnote 1: Notice that we modified equation (3) with respect to the version reported in Pillepich et al. (2018), according to Ni et al. (2023) (Appendix A1, footnote 4). \[v_{\rm w}=\max\left[A_{\rm SN2}\,\kappa_{\rm w}\,\sigma_{\rm DM}\left(\frac{H_{0}}{H(z)}\right)^{1/3},\ v_{\rm w,\,min}\right], \tag{3}\] where \(\kappa_{\rm w}\) is the wind velocity factor, also reported in Pillepich et al. (2018), \(\sigma_{\rm DM}\) is the 1D local dark matter velocity dispersion and \(v_{\rm w,\,min}=350\ \rm km\,s^{-1}\) is the wind velocity floor at injection. The AGN feedback parameters, instead, modulate the low accretion rate kinetic SMBH feedback mode, with \(A_{\rm AGN1}\) influencing the power injected in the kinetic mode: \[\dot{E}_{\rm low}=A_{\rm AGN1}\times\min\left[\frac{\rho}{0.05\,\rho_{\rm SF,\,thresh}},\ 0.2\right]\times\dot{M}_{\rm BH}c^{2}, \tag{4}\] where \(\rho\) is the gas density around the SMBH, \(\rho_{\rm SF,\,thresh}\) is the density threshold for star formation and \(\dot{M}_{\rm BH}\) is the accretion rate of the central galactic supermassive black hole, while \(A_{\rm AGN2}\) influences the 'burstiness' of the central black hole, that is, the rate at which the supermassive black hole ejects energy, which happens every time the accreted energy equals the following threshold value: \[E_{\rm inj,\,min}=A_{\rm AGN2}\times f_{\rm re}\times\frac{1}{2}m_{\rm enc}\hat{\sigma}_{\rm DM}^{2}, \tag{5}\] where \(f_{\rm re}=20\) is a constant of the fiducial TNG model, \(\hat{\sigma}_{\rm DM}\) is the one-dimensional dark matter velocity dispersion around the central SMBH, and \(m_{\rm enc}\) is the gas mass inside the feedback sphere.

#### 2.2.2 SIMBA suite

SIMBA relies on the gizmo (Hopkins, 2015) code for solving the equations, in its 'Meshless Finite Mass' (MFM) mode. The gravitational softening length in SIMBA is an adaptive parameter: as a conservative choice, we decided to consider a fixed minimum gravitational softening length of \(\epsilon_{\rm min}=0.75\) kpc comoving (see Dave et al., 2019, Table 1 for details). In the SIMBA suite, the wind mass loading factor is directly regulated only by the parameter \(\tilde{A}_{\rm SN1}\), via a power law fit based on Angles-Alcazar et al.
(2017) FIRE 'zoom-in' simulations: \[\eta_{\rm w}=\tilde{A}_{\rm SN1}\times\begin{cases}9\left(\frac{M_{*}}{M_{0}}\right)^{-0.317}&\text{, if }M_{*}<M_{0},\\ 9\left(\frac{M_{*}}{M_{0}}\right)^{-0.761}&\text{, if }M_{*}>M_{0},\end{cases} \tag{6}\] where \(M_{0}=5.2\times 10^{9}\)\(\mathrm{M_{\odot}}\). It should be noted that the wind mass loading trend of Angles-Alcazar et al. (2017) differs from the one used in Muratov et al. (2015), which is also based on the FIRE simulations, in that the former tracks individual particles in order to quantify the mass outflow rates out of the star-forming region, while the latter computes outflow rates based on mass advection across a boundary at one quarter of the virial radius. The consequence of this is that the slope of the relation in Angles-Alcazar et al. (2017) is similar to the one in Muratov et al. (2015), but the former shows roughly double the amplitude of the latter, and is much steeper above \(M_{0}\). The parameter \(\tilde{A}_{\rm SN2}\), instead, regulates the outflow wind velocity as a function of the circular velocity (Muratov et al., 2015): \[v_{\rm w}=\tilde{A}_{\rm SN2}\times 1.6\left(\frac{v_{\rm circ}}{200\ \mathrm{km\,s}^{-1}}\right)^{0.12}v_{\rm circ}+\Delta v(0.25R_{\rm vir}), \tag{7}\] where \(\Delta v(0.25R_{\rm vir})\) is the velocity corresponding to the potential difference between the launch point and \(0.25R_{\rm vir}\). Finally, the AGN feedback parameters for the SIMBA suite regulate the total momentum flux of the ejected gas, in the form of relativistic jets, via the relation: \[\dot{P}_{\rm out}\equiv\dot{M}_{\rm out}v_{\rm out}=\tilde{A}_{\rm AGN1}\times 20\,L_{\rm bol}/c, \tag{8}\] where \(L_{\rm bol}=\epsilon_{\rm f}\dot{M}_{\rm BH}c^{2}\) is the bolometric luminosity and \(\epsilon_{\rm f}=0.1\) is the radiative efficiency, and the outflow velocity of the SMBH jet emissions: \[v_{\rm out}=\begin{cases}v_{\rm rad}+\tilde{A}_{\rm AGN2}\times v_{\rm jet}&\text{, if }\lambda_{\rm Edd}<0.2\text{ and }M_{\rm BH}>10^{7.5}\,\mathrm{M_{\odot}},\\ v_{\rm rad}&\text{, otherwise.}\end{cases} \tag{9}\]

#### 2.2.3 ASTRID suite

ASTRID uses a new version of the mp-gadget code, a modified version of gadget-3 (Springel, 2005), to solve gravity with an \(N\)-body tree-particle-mesh (TreePM) approach, hydrodynamics with the smoothed particle hydrodynamics (SPH) method and astrophysical processes with a series of subgrid models. The gravitational softening length in ASTRID is \(\epsilon_{\rm min}=2.2\) kpc comoving. In ASTRID, the parameters \(\hat{A}_{\rm SN1}\) and \(\hat{A}_{\rm SN2}\) have a similar role to the SIMBA parameters \(\tilde{A}_{\rm SN1}\) and \(\tilde{A}_{\rm SN2}\), but the formula for the wind mass loading is different: in the case of ASTRID, \(\hat{A}_{\rm SN1}\) directly controls the wind mass loading, but via the formula: \[\eta_{\rm w}=\hat{A}_{\rm SN1}\times\left(\frac{\sigma_{0,\rm fid}}{v_{\rm w}}\right)^{2}, \tag{10}\] where \(\sigma_{0,\rm fid}=353\) km/s (Bird et al., 2022). The parameter \(\hat{A}_{\rm SN2}\), similarly to \(A_{\rm SN2}\) in equation (3), instead regulates the wind velocity through the following formula: \[v_{\rm w}=\hat{A}_{\rm SN2}\times\kappa_{\rm w}\,\sigma_{\rm DM}, \tag{11}\] where \(\kappa_{\rm w}=3.7\) and \(\sigma_{\rm DM}\) is the same as in equation (3), following the IllustrisTNG model, but without the wind velocity floor at injection and the dependency on \(H(z)\).
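To make the three SN-wind parameterizations concrete, the sketch below evaluates the wind mass loading of equations (1)-(3), (6) and (10)-(11) for a single illustrative dwarf-like galaxy at \(z=0\). The galaxy properties (\(\sigma_{\rm DM}\), \(M_{*}\)), the constant `e_w_fid`, which lumps together the wind energy factor, the metallicity term and the SNII factors of equation (2), and the IllustrisTNG value of \(\kappa_{\rm w}\) are placeholder assumptions of our own, not numbers quoted in the text.

```python
M_SUN_G = 1.989e33            # solar mass in grams
ERG_TO_G_KM2_S2 = 1.0e-10     # 1 erg = 1e-10 g km^2 s^-2

# Illustrative dwarf-like galaxy (placeholder values):
sigma_dm = 40.0               # 1D local DM velocity dispersion [km/s]
m_star = 3.0e9                # stellar mass [M_sun]

def eta_tng(A_sn1, A_sn2, e_w_fid=4.0e49, kappa_w=7.4, tau_w=0.1, v_floor=350.0):
    """Equations (1)-(3) at z = 0; e_w_fid [erg / M_sun] and kappa_w are assumed values."""
    e_w = A_sn1 * e_w_fid * ERG_TO_G_KM2_S2 / M_SUN_G    # -> (km/s)^2 per unit mass formed
    v_w = max(A_sn2 * kappa_w * sigma_dm, v_floor)
    return 2.0 * e_w * (1.0 - tau_w) / v_w**2

def eta_simba(A_sn1, m0=5.2e9):
    """Equation (6): a broken power law in stellar mass."""
    slope = -0.317 if m_star < m0 else -0.761
    return A_sn1 * 9.0 * (m_star / m0) ** slope

def eta_astrid(A_sn1, A_sn2, sigma0=353.0, kappa_w=3.7):
    """Equations (10)-(11): loading set entirely by the wind velocity."""
    v_w = A_sn2 * kappa_w * sigma_dm
    return A_sn1 * (sigma0 / v_w) ** 2

for name, eta in (("IllustrisTNG", eta_tng(1.0, 1.0)),
                  ("SIMBA", eta_simba(1.0)),
                  ("ASTRID", eta_astrid(1.0, 1.0))):
    print(f"{name:12s}  eta_w = {eta:5.1f}   (A_SN1 = A_SN2 = 1)")
```

In this toy setup, doubling \(A_{\rm SN1}\) doubles the loading in all three suites, while increasing \(A_{\rm SN2}\) lowers it in IllustrisTNG (once the velocity floor is exceeded) and in ASTRID through the wind velocity, and leaves the SIMBA loading unchanged.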
For the AGN feedback in ASTRID, the parameters \(\hat{A}_{\rm AGN1}\) and \(\hat{A}_{\rm AGN2}\) regulate the kinetic and thermal feedback modes, respectively, via the following equations: \[\begin{cases}\Delta\dot{E}_{\rm low}=\hat{A}_{\rm AGN1}\times\epsilon_{\rm f,kin}\,\epsilon_{\rm r}\,\dot{M}_{\rm BH}c^{2}&\text{, }\lambda_{\rm Edd}<\chi_{\rm thr},\\ \Delta\dot{E}_{\rm high}=\hat{A}_{\rm AGN2}\times\epsilon_{\rm f,th}\,\epsilon_{\rm r}\,\dot{M}_{\rm BH}c^{2}&\text{, }\lambda_{\rm Edd}>\chi_{\rm thr},\end{cases} \tag{12}\] where \(\Delta\dot{E}_{\rm low}\) and \(\Delta\dot{E}_{\rm high}\) are respectively the power injected in the kinetic and thermal mode, \(\chi_{\rm thr}\) is the Eddington threshold, \(\epsilon_{\rm r}\) is the mass-to-light conversion efficiency and \(\epsilon_{\rm f,kin}\) and \(\epsilon_{\rm f,th}\) are the fractions of the radiation energy kinetically and thermally injected into the surrounding gas, respectively (for more details on the values assumed by these parameters, see Ni et al., 2023).

#### 2.2.4 General considerations about the camels fiducial simulations

It should be noted that, in equations (2)-(12), a unitary value of the astrophysical parameters implies that the equations reduce exactly to the relations reported in Pillepich et al. (2018), Weinberger et al. (2017), Angles-Alcazar et al. (2017), Muratov et al. (2015), Bird et al. (2022) and Ni et al. (2022). These relations are the ones that have been implemented in the 'original' IllustrisTNG, SIMBA and ASTRID simulation runs, as detailed in Nelson et al. (2019), Dave et al. (2019), Bird et al. (2022) and Ni et al. (2022). We will thus refer to simulations with unit values of the astrophysical parameters and cosmological parameters equal to \(\Omega_{\rm m}=0.30\) and \(\sigma_{8}=0.80\) as fiducial simulations. It is important to note that, in camels, only the fiducial simulations have been calibrated to reproduce several galaxy properties. For the IllustrisTNG suite, the calibrations have been performed by using the galaxy stellar mass function, the stellar-to-halo mass relation, the total gas mass content within the virial radius \(r_{500}\) of massive groups, the stellar mass-stellar size and the black hole mass-galaxy mass relations, all at \(z=0\), and, finally, the functional shape of the cosmic star formation rate density for \(z\leq 10\) (Pillepich et al., 2018). For the SIMBA suite, the calibrations are based only on the stellar mass function, the cosmic SFR density and the black hole mass-galaxy mass relation (Dave et al., 2019). For the ASTRID suite, the free parameters of the UV-band dust optical depth have been calibrated against the observed galaxy UV luminosity function at redshift \(z=4\), and applied to all redshifts (Bird et al., 2022). In all the other simulations, the subgrid parameters and the cosmological parameter values are varied without requiring the simulations to reproduce any kind of observation.

#### 2.2.5 Simulation types and physical quantities used

The three suites contain four different varieties of simulations. These include: * 27 fiducial simulations for which only the seed for generating the initial conditions is varied (cosmic variance set, CV); * 61 simulations in which the value of the cosmological and astrophysical parameters is varied one at a time, with a fixed seed value for all the simulations (1-parameter set, 1P).
In particular, the simulation '1P_1_0' is a fiducial simulation, used for reference; * 1000 simulations in which the value of the cosmological and astrophysical parameters, as well as the seed value, are varied randomly, by using Latin-hypercube sampling (Latin-hypercube set, LH); * 4 simulations in which the cosmological parameters and the seed value are fixed, and the astrophysical values are set to extreme values (extreme set, EX), such as very efficient supernova feedback (\(A_{\rm{SN1}}=\bar{A}_{\rm{SN1}}=100.00\)), very efficient AGN feedback (\(A_{\rm{AGN1}}=\bar{A}_{\rm{AGN1}}=100.00\)) and no feedback (all astrophysical parameters equal to zero). In particular, the EX_0 is a fiducial simulation, used for reference. We made use of all these simulations both from the IllustrisTNG, SIMBA and ASTRID suites in our analysis. For the comparison with observations, we consider the following physical quantities: * Stellar half-mass radius, \(R_{*,1/2}\), defined as the radius containing half of the total stellar mass of the galaxy; * Total stellar mass, \(M_{*}\), defined as the sum of the masses of all star particles bound to a certain subhalo, as detected by subfind; * Total mass, \(M_{\rm{tot}}\), defined as the sum of the masses of all particles/cells of every type (stellar, dark matter, gas, black hole) bound to a certain subhalo; * Stellar/DM/gas/total mass within the half-mass radius, \(M_{*,1/2}\), \(M_{\rm{DM},1/2}\), \(M_{\rm{g},1/2}\) and \(M_{1/2}\), respectively, defined as the sum of the masses of particles of the respective types which are within a sphere with radius equal to the stellar half-mass radius of a certain subhalo; * DM fraction within the stellar half-mass radius, \(f_{\rm{DM}}(<R_{*,1/2})\), defined as the ratio \(M_{\rm{DM},1/2}/M_{1/2}\); * Number of star particles within the stellar half-mass radius, \(N_{*,1/2}\), defined as the ratio \(M_{*,1/2}/M_{\rm{g},\,min}\); * Star formation rate, SFR, defined as the sum of the star formation rates of all star-forming gas cells of a certain subhalo; * Maximum rotational velocity of the spherically-averaged rotation curve, \(V_{\rm{max}}\), where all particle types (gas, stars, DM and SMBHs) are considered for its determination; * one-dimensional total velocity dispersion, \(\sigma\), defined as the 3D velocity dispersion of all the member particles/cells bound to a certain subhalo, divided by \(\sqrt{3}\); * one-dimensional local dark matter velocity dispersion around a star particle, \(\sigma_{\rm{DM}}\), defined as the 1D velocity dispersion of all dark matter particles within the comoving radius of a sphere centered on a certain star particle, enclosing the nearest 64\(\pm\)1 dark matter particles; * mean one-dimensional dark matter velocity dispersion, \(\overline{\sigma}_{\rm{DM}}\), defined as the mean of the distribution formed by all the 1D local dark matter velocity dispersions, \(\sigma_{\rm{DM}}\), around each star particle of the subhalo; * Total gas metallicity, \(Z\), defined in the IllustrisTNG suite as the mass-weighted average metallicity of the gas cells bound to a certain subhalo for all gas cells within a sphere with radius associated to the maximum rotational velocity of the velocity curve, \(V_{\rm{max}}\). The quantities \(V_{\rm{max}}\), \(\sigma\), \(\sigma_{\rm{DM}}\), \(\overline{\sigma}_{\rm{DM}}\) and \(Z\) in particular have only been used for the evaluation of the wind mass loading at injection, in the analysis detailed in Section 3.6. 
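A minimal sketch (ours; the catalogue values are invented) of how the derived quantities listed above follow from the tabulated masses of a toy subhalo catalogue, together with the resolution and star-formation cuts described below in Section 2.2.7.

```python
import numpy as np

M_GAS_MIN = 1.89e7          # gas mass resolution [M_sun], quoted in Section 2.2
EPS_MIN = 2.0               # IllustrisTNG comoving softening length [kpc]

# Toy subhalo catalogue (three invented objects); masses in M_sun, radii in kpc, SFR in M_sun/yr.
cat = {
    "R_star_half":  np.array([1.8,    3.2,    0.9  ]),
    "M_star_half":  np.array([6.0e9,  1.5e10, 4.0e8]),
    "M_gas_half":   np.array([1.0e9,  2.5e9,  2.0e8]),
    "M_dm_half":    np.array([5.0e9,  2.0e10, 1.0e9]),
    "M_star_total": np.array([1.2e10, 3.0e10, 8.0e8]),
    "SFR":          np.array([1.1,    2.4,    0.01 ]),
}

M_half = cat["M_star_half"] + cat["M_gas_half"] + cat["M_dm_half"]   # mass within R_*,1/2
f_dm = cat["M_dm_half"] / M_half                                     # DM fraction in R_*,1/2
N_star = cat["M_star_half"] / M_GAS_MIN                              # star particles in R_*,1/2
log_ssfr = np.log10(cat["SFR"] / cat["M_star_total"])                # specific SFR [1/yr]

keep = (cat["R_star_half"] > EPS_MIN) & (N_star > 50) & (f_dm > 0) & (log_ssfr > -10.5)
print("f_DM(<R_*,1/2):", np.round(f_dm, 2))
print("log10 sSFR    :", np.round(log_ssfr, 2))
print("passes cuts   :", keep)
```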
The quantities that we are using are all tabulated de-projected values, obtained via the files 'fof_subhalo_tab_033.hdfs' (for IllustrisTNG and SIMBA) and 'fof_subhalo_tab_090.hdfs' (for ASTRID), available on the CAMELS website, relative to the \(z=0\) snapshot. In future articles, we will consider the single particles associated to each subhalo also to evaluate numerically the corresponding projected quantities. Figure 1: _Left panel_. Distribution of all galaxies from the IllustrisTNG fiducial simulation in the \(M_{\rm{DM}}\)-\(M_{*}\) plane. Opaque circles and triangles are star-forming and passive galaxies, respectively, which satisfy the filtering conditions detailed in Section 2.2. Transparent circles and triangles are all the star-forming and passive objects which are under the filtering threshold. _Right panel_. Same as the left panel, but showing galaxies with all the parameters fixed to those of the reference simulations except for \(A_{\rm{SN1}}\), set to 0.25 (‘1P_3\(3\)5’ simulation, dark green points) and 2.3 (‘1P_3_3’ simulation, red points). #### 2.2.6 Observational realism and cosmic variance It has to be noted that galaxy quantities are determined in a different way in simulation and real data. For example, effective radius is measured as the radius encompassing half of the total [3.6 \(\mu\)m] luminosity in SPARC (these wavelengths probe quite well the mass of the galaxies) and deprojected using a constant multiplicative factor, while in the simulations, it is defined as the radius containing half of the total stellar mass. Observed total masses are derived in Posti et al. (2019), fitting an analytical galaxy model to rotation curves, while in simulations bounded star/gas/DM particles/cells are considered. Stellar mass is calculated in SPARC by using [3.6 \(\mu\)m] luminosity and a constant mass-to-light ratio, while the treatment is obviously more complex in the simulations. The inclusion of more observational realism in the simulated quantities is difficult to treat and is beyond the scope of this paper. However, we believe that possible differences arising from homogenizing the definition of galaxy quantities will induce secondary contributions, which will not strongly affect the results presented in this work. Both cosmological simulations and observations are sampling only a limited volume of the Universe and therefore are affected by cosmic variance. Cosmic variance can potentially impact the physical properties and scaling relations resulting from both simulations and observations. A discussion on how cosmic variance affects the fiducial simulations is described in detail in Appendix A3, using the Cosmic Variance set. Results show that the effects of cosmic variance on the properties considered in this paper is of the order of \(10^{-2}\) dex for all simulation suites, and thus negligible. #### 2.2.7 Filtering procedure In this paper, for each simulation we consider a filtered subset of all the subhalos detected by the subfind algorithm. This is done because some of the objects detected by the algorithm are not actual galaxies, but disk fragments or other artifacts, while other objects are not well-resolved, having smaller dimensions than the gravitational softening length, or with too few star particles inside the half-mass radius. 
The parameters on which we base this filtering are the half mass radius, \(R_{*,1/2}\), the number of star particles within the stellar half-mass radius, \(N_{*,1/2}\), and the DM fraction within the stellar half-mass radius, \(f_{\rm DM}(<R_{*,1/2})\). We consider for the analysis only subhalos which have \(R_{*,1/2}>\epsilon_{\rm min}\), \(N_{*,1/2}>50\) and \(f_{\rm DM}(<R_{*,1/2})>0\). Because SPARC is a sample of star-forming galaxies, we also performed a selection with respect to the specific star formation rate (\({\rm sSFR}:={\rm SFR}/M_{*}\), where SFR is the galaxy's star formation rate). Following Bisigello et al. (2020), we considered as star-forming galaxies only those subhalos which possess \(\log_{10}({\rm sSFR}/{\rm yr}^{-1})>-10.5\). The effects of fixing a specific sSFR threshold on the scaling relations are negligible and are discussed in Appendix A4. The effects of the selections are shown in Fig. 1 for both the fiducial IllustrisTNG simulation and for two IllustrisTNG simulations from the \({}^{1}\)P set. The overall effect of the selection criteria is approximately a vertical cut in stellar mass, with the threshold located at \(\log_{10}(M_{*}/{\rm M}_{\odot})\approx 9.25\), consistent with cuts performed during the analysis of the stellar mass function in Villaescusa-Navarro et al. (2021). Our selection is very conservative when compared to other works with camels, e.g. in Villaescusa-Navarro et al. (2022), where the selection is based on considering only subhalos with total number of star particles greater than 20. In their case, the selection criteria effect is a cut in \(M_{*}\), with threshold approximately equal to \(\log_{10}(M_{*}/M_{\odot})\approx 8.35\). Fig. 1 also shows an important selection effect that could result from analyzing subsequent comparisons between various simulations: as one can see from the right panel, the high-mass threshold between star-forming galaxies (points) and passive galaxies (triangles) is lower for simulations with lower SN feedback values. The difference between the two simulations can initially be accounted for by a straightforward shift along the \(M_{*}\) axis, taking into account both star-forming and passive galaxies. However, after applying a cut in specific star-formation rate (points only), this discrepancy can only be explained by an additional shift along the \(M_{\rm DM}\) axis. This would erroneously suggest a significant influence of baryonic processes on the dark matter halo properties of the galaxies. It is thus important to note that any apparent effect of the SN feedback processes on DM scaling relations is mainly a combination of effects on the total stellar mass of the galaxies, plus a selection effect due to not considering also the passive galaxies in the trends. ## 3 Comparison between observations and camels simulations The aim of this paper is to compare the scaling relation trends obtained from the SPARC star-forming galaxy sample and the corresponding trends from the camels simulations, in order to determine constraints on cosmological and astrophysical parameters. We start in Section 3.1 by comparing the observed trends with the fiducial simulations of all three camels simulation suites. In Section 3.2, we analyze the effect of varying the cosmological and astrophysical parameters in IllustrisTNG one by one, to check the relative contribution of each parameter independently to the scaling relations. 
In Sections 3.3-3.5, to find the combination of cosmological and astrophysical parameters that best fits the observed data, we consider all the 1065 simulations of the IllustrisTNG, SIMBA and ASTRID suites, and perform a chi-squared best-fit analysis. We then provide constraints for both cosmological and astrophysical parameters by means of a bootstrapping procedure on both simulations and observations. Finally, in Section 3.6, we compare the inferred wind mass loading at injection from the IllustrisTNG suite with mass loading trends presented in the literature, and compare the trends from the three simulation suites.

### CAMELS fiducial simulations comparison with SPARC observations

As a preliminary analysis, it is important to check if the fiducial simulations from IllustrisTNG, SIMBA and ASTRID reproduce accurately the observed trends. We thus compared the simulations labeled '1P_1_0' from the IllustrisTNG, SIMBA and ASTRID suites of camels, which are the fiducial simulations, to the observed SPARC scaling relation trends. To obtain the observed trends, we binned the SPARC data in fixed bins of stellar mass, and for each bin we evaluated the 16th, 50th (median) and 84th percentiles. We then linearly interpolated between these points to quantify the observed trends. Discussion on how the binning procedure affects the results is detailed in Appendix A2.

Fig. 2 shows the stellar half-mass radius, \(R_{*,1/2}\), the DM fraction within the stellar half-mass radius, \(f_{\rm DM}(<R_{*,1/2})\), the DM mass within the stellar half-mass radius, \(M_{\rm DM,1/2}\), and the total mass, \(M_{\rm tot}\), as a function of the total stellar mass, \(M_{*}\), for star-forming simulated galaxies taken from the IllustrisTNG (blue points), SIMBA (green points) and ASTRID (orange points) suites. The observed SPARC trends for the 16th, median and 84th percentile are shown with black lines, with a shaded grey region representing the scatter of the observed data points. It should be noted that the scatter of the SPARC relation is of the same order of magnitude as the typical observational uncertainties associated to the quantities considered for the scaling relations. The errors on the median trends for both the SPARC and the simulated galaxies' trends are also shown, with a dark grey region for SPARC, and a blue/green/orange region for the IllustrisTNG/SIMBA/ASTRID trends, respectively.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
relation & \(\chi^{2}\) & \(\hat{\chi}^{2}\) & \(\chi^{2}\) & \(\hat{\chi}^{2}\) & \(\chi^{2}\) & \(\hat{\chi}^{2}\) \\
 & (IllustrisTNG) & (IllustrisTNG) & (SIMBA) & (SIMBA) & (ASTRID) & (ASTRID) \\
\hline
\(R_{*,1/2}\) vs \(M_{*}\) & 79.45 & 0.33 & 127.63 & 0.41 & 581.71 & 3.11 \\
\(f_{\rm DM}(<R_{*,1/2})\) vs \(M_{*}\) & 57.16 & 0.24 & 353.49 & 1.14 & 158.15 & 0.85 \\
\(M_{\rm DM,1/2}\) vs \(M_{*}\) & 219.61 & 0.90 & 1270.60 & 4.10 & 558.37 & 2.99 \\
\(M_{\rm tot}\) vs \(M_{*}\) & 212.16 & 0.87 & 170.72 & 0.55 & 290.28 & 1.55 \\
cumulative & 568.38 & 2.33 & 1922.45 & 6.20 & 1588.51 & 8.49 \\
\hline
\end{tabular}
\end{table}
Table 1: Chi-squared (\(\chi^{2}\)) and normalized chi-squared (\(\hat{\chi}^{2}\)) values for the scaling relations considered in Section 3.1, associated to the IllustrisTNG, SIMBA and ASTRID fiducial simulations. The first and second columns show the values associated to IllustrisTNG, the third and fourth columns show those associated to SIMBA, and the last two show those associated to ASTRID. The last row presents the cumulative results, which represent the sum of the chi-squared values relative to the four scaling relations.

Figure 2: From top to bottom: stellar half-mass radius, \(R_{*,1/2}\), DM fraction within \(R_{*,1/2}\), \(f_{\rm DM}\), DM mass within \(R_{*,1/2}\), \(M_{\rm DM,1/2}\), and total mass, \(M_{\rm tot}\), from camels' fiducial simulations (first column: IllustrisTNG suite, blue points; second column: SIMBA suite, green points; third column: ASTRID suite, orange points), compared with the corresponding SPARC trends, shown as black curves. The shaded grey area represents the scatter of the observed relations, given by the difference between the 16th and the 84th percentiles and the median. The dark grey regions represent the error on the median for the SPARC trends, while the blue, green and orange colored regions represent the error on the median for the IllustrisTNG, SIMBA and ASTRID simulation points, respectively.

In the IllustrisTNG suite, two of these relations (the size-mass relation and the \(M_{\rm tot}\)-\(M_{*}\) relation) have been used to calibrate the fiducial simulation. Considering the fact that the observed trends used to calibrate these scaling relations in camels do not match exactly the ones we are using in this paper (the SPARC trends), and that the correlations involving the central DM fraction and mass are not used in the calibration of the reference simulations, our results should not be affected by any circularity issue.

All simulations follow the direction of all the trends observed in SPARC, but with an offset with respect to the SPARC trends. This offset qualitatively seems to be stronger for the SIMBA and ASTRID simulations than for the IllustrisTNG simulation.

To quantitatively compare the simulations' data to the observed data, we evaluated for the IllustrisTNG, SIMBA and ASTRID simulations and for each of the scaling relations the \(\chi^{2}\) between the simulation points and the interpolated SPARC trends, via the following formula: \[\chi^{2}=\sum_{i=1}^{N_{\rm sim}}\frac{[y_{{\rm sim},\,i}-f_{\rm rel}(x_{{\rm sim},\,i})]^{2}}{\sigma_{{\rm rel},\,i}^{2}}, \tag{13}\] where \(\chi^{2}\) is the chi-squared evaluated for the scaling relation considered, (\(x_{{\rm sim},\,i}\), \(y_{{\rm sim},\,i}\)) are the \(N_{\rm sim}\) points from the simulation in the considered scaling relation parameter space, \(f_{\rm rel}\) is the linear interpolation of the observed scaling relation's median trend, and \(\sigma_{{\rm rel},\,i}\) is given by the mean of \(\sigma_{-}\) and \(\sigma_{+}\), which are the differences, in absolute value, between the linear interpolated functions of the 16th and the 84th percentile trends associated to the observed scaling relation, respectively, and the interpolated median trend, each evaluated at \(x_{{\rm sim},\,i}\). Given that the various simulations have a different galaxy count \(N_{\rm sim}\), to compare different simulations we also considered a normalized chi-squared, defined as \(\hat{\chi}^{2}:=\chi^{2}/(N_{\rm sim}-1)\). The chi-squared values for the IllustrisTNG, SIMBA and ASTRID fiducial simulations are shown in Table 1.
2, comparing the errors on the medians and the \(\hat{\chi}^{2}\) values, only the IllustrisTNG (and SIMBA for log-masses lower than \(\sim 5.0\times 10^{9}\,M_{\odot}\)) fiducial simulations provide a \(R_{*,1/2}\)-\(M_{*}\) which on average is in agreement with the observations, while the ASTRID simulation is in disagreement with the SPARC trend over the full mass range. All the simulations also produce comparable total masses which are, at fixed stellar mass, slightly larger than the observed values found by Posti et al. (2019). This discrepancy is stronger for the ASTRID simulation, especially at higher mass values. The sizes-mass relation results for IllustrisTNG and SIMBA are compatible with the results shown in Villaescusa-Navarro et al. (2021), _IX_ panel of Fig. 4. The IllustrisTNG fiducial simulation replicates better the scaling relations involving quantities evaluated within the stellar half-mass radius, such as \(f_{\rm{DM}}(<R_{*,1/2})\)-\(M_{*}\) and \(M_{\rm{DM},1/2}\)-\(M_{*}\), with a moderate shift towards higher values at fixed stellar mass. SIMBA and ASTRID fiducial simulations, instead, produce unrealistically high dark matter masses. ### Effects of the variation of cosmological and astrophysical parameters We proceeded by comparing the IllustrisTNG fiducial simulation and the observed SPARC trends with IllustrisTNG simulations from the '1P' simulation set, which assume, for each cosmological and astrophysical parameter, the minimum and maximum value available2. Footnote 2: An exception for this has been made for the upper limit of the \(A_{\rm{SNI}}\) parameter. We choose \(A_{\rm{SNI}}=2.30\) as the upper limit, given the fact that a supernova feedback which is too much energetic suppresses the star formation in almost all galaxies, producing a sample of star-forming galaxies too small to be statistically significant. Fig. 3 shows the same scaling relations presented in Fig. 2 for the illustrisTNG simulation, but with each of the columns showing the effects of varying one of the two cosmological parameters on the simulations' trends. Fig. 4 is the same as Fig. 3, but with each column showing the effects of varying one of the four astrophysical parameters on the simulations' trends, instead. The same figures for SIMBA and ASTRID are shown in Fig. 10 and 11, and are discussed in detail in Appendix B. Starting with the cosmological parameters, we can see that there is a monotonic trend between increasing values of \(\Omega_{\rm{m}}\) and the normalization of the scaling relations. Moreover, the effects of varying the density parameter, \(\Omega_{\rm{m}}\), on the scaling relation trends are more Figure 3: Comparison between SPARC observations and IllustrisTNG simulations with differing cosmological parameters. Each row in the plot corresponds to a different scaling relation. Each column shows the effect of varying one of the two cosmological parameters. intense than variations concerning the amplitude of the linear matter density fluctuations, \(\sigma_{\rm S}\), for which almost no variation of the scaling relation trends can be appreciated. Regarding the astrophysical parameters instead, as expected we see that in the range of mass and for the galaxy-type considered, the impact on our scaling relations of the parameters related to the SN feedback is stronger than that of the AGN-related parameters. 
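All of these comparisons, here and in the following subsections, are made against the binned SPARC trends introduced in Section 3.1, i.e. the 16th, 50th and 84th percentile trends evaluated in fixed stellar-mass bins and linearly interpolated. The following minimal sketch illustrates that construction, assuming the observed points are available as NumPy arrays; the number of bins, the occupancy threshold and the array names are illustrative choices, not those adopted in the paper.

```python
import numpy as np
from scipy.interpolate import interp1d

def sparc_trends(x_obs, y_obs, n_bins=10):
    """Bin an observed SPARC quantity y_obs in fixed bins of the x coordinate
    (here assumed to be log10 stellar mass) and return linear interpolators of
    the 16th, 50th (median) and 84th percentile trends."""
    edges = np.linspace(np.min(x_obs), np.max(x_obs), n_bins + 1)
    centres, p16, p50, p84 = [], [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x_obs >= lo) & (x_obs <= hi)
        if np.sum(sel) < 3:          # skip sparsely populated bins (illustrative threshold)
            continue
        q16, q50, q84 = np.percentile(y_obs[sel], [16, 50, 84])
        centres.append(0.5 * (lo + hi))
        p16.append(q16); p50.append(q50); p84.append(q84)
    make = lambda vals: interp1d(centres, vals, bounds_error=False, fill_value="extrapolate")
    return make(p16), make(p50), make(p84)
```

The three returned callables provide the median trend and the scatter that enter the grey bands of Figs. 2-4 and the weights of Eq. (13).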
For modifying the wind energy per unit star-formation rate, an increase of \(A_{\rm SN1}\) corresponds, at fixed stellar mass, to an increase in half-mass radius, dark matter fraction, dark matter mass within the half-mass and total mass. As we have already shown in Fig. 1, \(A_{\rm SN1}\) impacts strongly the stellar mass accretion, which would explain most of the changes observed 3. Of course, more energetic winds are expected to push the gas to larger distances, altering the gravitational potential, the half-mass radii, and thus making DM mass and DM fraction larger. Therefore, with all the other parameters fixed to the reference values, less energetic models better reproduce the observations. Footnote 3: This is confirmed by the dependence of the star formation density as a function of redshift and astrophysical parameters in Figure 9 of Villaescususus-Navarro et al. 2021. The effects of increasing the wind speed at injection (\(A_{\rm SN2}\)) are instead more subtle. While the internal DM fractions and total mass are practically unchanged, with only a slight slope change at \(M_{\rm s}\sim 1.6\times 10^{10}\,{\rm M}_{\odot}\) for the \(M_{\rm tot}\)-\(M_{\rm s}\) relation, there is an increase in stellar half-mass radius for the simulation with lower wind speeds at injection. This increase seems to be stronger for star-forming galaxies of intermediate-high mass. The increase in stellar half-mass radius also implies an increased DM mass within the stellar half-mass radius. For low values of \(A_{\rm SN2}\), the scaling relations extend to much higher values of stellar mass, since winds with less momentum allow the formation of very massive star-forming galaxies. On the other hand, higher values of \(A_{\rm SN2}\) quench more efficiently star formation, preventing the formation of more massive galaxies, which results in scaling relation trends that stop at lower stellar mass. These trends are only mildly seen varying \(A_{\rm SN1}\). The effects of changing \(A_{\rm AGN1}\) and \(A_{\rm AGN2}\) instead seem to be negligible: we cannot notice any apparent change of normalization, slope or scatter in the scaling relation trends among the two extreme values adopted for the two parameters. In regards to SIMBA and ASTRID results, for the cosmological parameters we find that in both cases an increase in \(\Omega_{\rm m}\) corresponds to an increase in the normalization of the scaling relations. In both cases, there is better concordance with the observations for low values of \(\Omega_{\rm m}\), but also a reduction in the number of late-type galaxies (LTGs) present in the simulations. Neither suite instead shows sensitivity to variations of \(\sigma_{\rm S}\). For the astrophysical parameters, the analysis done in Appendix B shows that in both cases no simulation which is only subject to the variation of one astrophysical parameter can reconcile the simulated galaxies' trends with the observed SPARC trends, especially for scaling relations relative to central DM masses and DM fractions. In the SIMBA case, one necessarily needs to lower \(\Omega_{\rm m}\), while in ASTRID low values of \(\dot{A}_{\rm SN2}\) solves the discrepancy, but at the cost of having all galaxies clustered at low stellar mass values. Figure 4: Comparison between SPARC observations and IllustrisTNG simulations with differing astrophysical feedback parameters. Each row in the plot corresponds to a different scaling relation. Each column shows the effect of varying one of the four astrophysical parameters. 
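For concreteness, the statistic of Eq. (13) and its normalized version \(\hat{\chi}^{2}=\chi^{2}/(N_{\rm sim}-1)\), which are reused in the best-fit searches of the next subsections, can be sketched as follows, assuming the percentile interpolators returned by the sketch above; function and variable names are illustrative.

```python
import numpy as np

def chi2_relation(x_sim, y_sim, f16, f50, f84):
    """Chi-squared of Eq. (13): simulated points against the interpolated SPARC
    median trend, with sigma_i taken as the mean of the upper and lower scatter
    of the observed relation evaluated at each x_sim."""
    x_sim, y_sim = np.asarray(x_sim), np.asarray(y_sim)
    sigma = 0.5 * (np.abs(f84(x_sim) - f50(x_sim)) + np.abs(f50(x_sim) - f16(x_sim)))
    return np.sum(((y_sim - f50(x_sim)) / sigma) ** 2)

def chi2_hat(chi2, n_sim):
    """Normalized chi-squared used to compare suites with different galaxy counts."""
    return chi2 / (n_sim - 1)
```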
### IllustrisTNG simulations' best-fit to the observations From the analyses performed in the previous sections, it can be seen that the fiducial simulations do not exactly reproduce the observed trends, and that varying the astrophysical and cosmological parameters could improve the agreement. Therefore, we searched within the IllustrisTNG suite simulations for the set of cosmological and astrophysical parameters that provide a best-fit to the observed SPARC trends. We considered all the 1065 simulations from the 'LH', '1P' and 'EX' sets, and for each of the simulations we followed the same procedure detailed in Section 3.1. We then ordered the simulations according to the value of the respective cumulative \(\tilde{\chi}^{2}\) result. We find that the simulation that better fits all the observed SPARC data is the simulation 'LH-698', having the following cosmological and astrophysical parameters: \(\Omega_{\rm m}=0.27\), \(\sigma_{8}=0.83\), \(S_{8}=0.78\), \(A_{\rm SNI}=0.48\), \(A_{\rm SN2}=1.24\), \(A_{\rm AGN1}=2.53\) and \(A_{\rm AGN2}=1.79\), where the value of \(S_{8}\) has been inferred from \(\Omega_{\rm m}\) and \(\sigma_{8}\) via the definition, \(S_{8}:=\sigma_{8}\sqrt{\Omega_{\rm m}/0.3}\). The normalized chi-squared associated to this simulation is \(\tilde{\chi}^{2}=1.17\). The first column of Fig. 5 shows the comparison between this simulation and the observed SPARC trends. The fact that the best-fit simulation obtained is not one of the fiducial simulations seems to reassure against eventual circularity problems in this procedure. It has to be noted that there is the chance that other simulations in the IllustrisTNG suite, with different parameter combinations, show a similar chi-squared as the one of the 'best-fit' simulation considered above. This is because different parameter combinations, by compensation with each other due to degeneracies, could give rise to similar physical conditions for the galaxies, and thus produce similar scaling relations with respect to the ones observed in our Universe. Indeed, by employing a method of Bayesian inference based on implicit likelihood inference (ILI), by using the observed star formation rate density (SFRD) and, separately, the stellar mass functions (SMFs), at different redshifts, Jo et al. (2023) confirm the existence of degeneracies between cosmological and astrophysical parameters in camels. To check that the choice of parameters associated with the best-fit simulation is not just the result of a statistical fluctuation, and to assign statistical uncertainties to the parameters, we decided to perform a bootstrap analysis of the best-fit sample, which enabled us to take into account the uncertainty induced by the degeneracies among fitted parameters. To verify that the procedure recovers the ground-truth correctly within a certain confidence limit, we have first tested it by using mock observational data taken from the '1P_1_0' fiducial simulation and various LH simulations, instead of real data. The results confirm that this procedure performs well, recovering the ground truth in all of the cases tested. Test results show that the parameters that are better constrained by this approach are \(\Omega_{\rm m}\) and \(A_{\rm SN1}\), while the constraining power for \(\sigma_{8}\), \(S_{8}\) and \(A_{\rm SN2}\) is milder. AGN-feedback related parameters are instead roughly constrained by this method. These results confirm the dependencies found in Section 3.2. More details are provided in Appendix C. 
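Schematically, the ranking of the 1065 runs by cumulative \(\hat{\chi}^{2}\) and the bootstrap procedure described above can be combined as in the sketch below, which reuses the `sparc_trends` and `chi2_relation` helpers sketched earlier. The data structures (a dictionary of points per scaling relation and a parameter dictionary per simulation) are illustrative assumptions, and, for simplicity, the resampling here is performed per relation rather than once per galaxy catalog.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, only for reproducibility of the sketch

def resample(x, y):
    """Bootstrap resampling with replacement of a set of (x, y) points."""
    idx = rng.integers(0, len(x), len(x))
    return x[idx], y[idx]

def cumulative_chi2_hat(sim_points, trends):
    """Sum over the four scaling relations of chi2 / (N_sim - 1)."""
    total = 0.0
    for rel, (x, y) in sim_points.items():
        f16, f50, f84 = trends[rel]
        total += chi2_relation(x, y, f16, f50, f84) / (len(x) - 1)
    return total

def bootstrap_constraints(simulations, sparc, n_boot=100):
    """Resample both the SPARC catalog and every simulation, re-rank all runs by
    cumulative chi2_hat, and quote the best-fit parameter distributions as
    16th / 50th / 84th percentiles."""
    best = []
    for _ in range(n_boot):
        trends = {rel: sparc_trends(*resample(x, y)) for rel, (x, y) in sparc.items()}
        scores = [cumulative_chi2_hat({rel: resample(x, y)
                                       for rel, (x, y) in s["points"].items()}, trends)
                  for s in simulations]
        best.append(simulations[int(np.argmin(scores))]["params"])
    return {p: np.percentile([b[p] for b in best], [16, 50, 84]) for p in best[0]}
```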
We subsequently applied this method with the SPARC catalog as the observational data, by bootstrapping both the simulations and the observed data, with the aim of obtaining constraints on both the cosmological and the astrophysical parameters from the sample of best-fit simulations, and not from just one simulation. We bootstrapped each of the 1065 simulations and the SPARC dataset 100 times4, and for each of the resamplings we performed the same analysis detailed in Section 3.1. We then order, for each resampling, the simulations according to the values of \(\tilde{\chi}^{2}\), and take the best-fit simulation. We thus obtained a list of 100 best-fit simulations. The constraints obtained, associated to each of the correlations, are summarized in Table 2. The constraints are given in terms of the 16th, 50th (median) and 84th percentiles. Footnote 4: The bootstrapping process is performed via the Mathematica resource function “Boostrapquistics”: [https://resources.wolframcloud.com/FunctionRepository/resources/BootstrapStatistics/](https://resources.wolframcloud.com/FunctionRepository/resources/BootstrapStatistics/). The mean fraction of substitutions with duplicate elements over the total number of objects in the bootstrapped array that this function performs is constant, and equal to \(\simeq 0.36\). We obtain \(\Omega_{\rm m}=0.27^{+0.01}_{-0.05}\), \(\sigma_{8}=0.83^{+0.08}_{-0.11}\), \(S_{8}=0.78^{+0.03}_{-0.09}\), \(A_{\rm SN1}=0.48^{+0.25}_{-0.16}\), \(A_{\rm SN2}=1.21^{+0.03}_{-0.34}\), \(A_{\rm AGN1}=2.53^{+0.89}_{-1.82}\) and \(A_{\rm AGN2}=1.31^{+0.49}_{-0.67}\), with an associated normalized chi-squared of \(\tilde{\chi}^{2}=1.23^{+0.29}_{-0.20}\). While we manage to constrain the cosmological and SN feedback parameters, we are unable to constrain the AGN feedback parameters. This latter result is expected and consistent with the trends discussed in Sec. 3.2. Regarding the cosmological parameters, \(\Omega_{\rm m}\) and \(S_{8}\) are better constrained than \(\sigma_{8}\), while in the case of the SN feedback parameters, \(A_{\rm SN1}\) is better constrained than \(A_{\rm SN2}\). To further analyze the impact of eventual circularity effects on our results, we also report in Table 2 the cumulative chi-squared results obtained by considering only the internal dark matter scaling relation, that is, the \(f_{\rm DM}\)-\(M_{*}\) and the \(M_{\rm DM,1/2}\)-\(M_{*}\) relations. In this case, the results are \(\Omega_{\rm m}=0.22^{+0.03}_{-0.02}\), \(\sigma_{8}=0.92^{+0.05}_{-0.10}\), \(S_{8}=0.79^{+0.01}_{-0.07}\), \(A_{\rm SN1}=0.37^{+0.11}_{-0.09}\), \(A_{\rm SN2}=0.80^{+0.60}_{-0.22}\), \(A_{\rm AGN1}=1.36^{+1.17}_{-0.90}\) and \(A_{\rm AGN2}=1.37^{+0.43}_{-0.71}\), which are compatible with the cumulative results within \(1\sigma\). ### SIMBA simulations' best-fit to the observations We considered all the 1065 'LH', '1P' and 'EX' simulations from the SIMBA suite, to check if there is a simulation with a set of reasonable cosmological and astrophysical parameters that fits the observations. By repeating the same procedure detailed in Section 3.3, we found that the best-fit SIMBA simulation for all the SPARC observed trends is the simulation 'LH-360', having the following parameters: \(\Omega_{\rm m}=0.13\), \(\sigma_{8}=1.00\), \(S_{8}=0.65\), \(\tilde{A}_{\rm SN1}=0.35\), \(\tilde{A}_{\rm SN2}=0.50\), \(\tilde{A}_{\rm AGN1}=0.68\) and \(\tilde{A}_{\rm AGN2}=1.16\), with a normalized chi-squared of \(\tilde{\chi}^{2}=1.98\). The second column of Fig. 
5 shows the comparison between this simulation and the observed SPARC trends. We performed again the bootstrap analysis detailed in Section 3.3, this time on both the SIMBA simulations and the SPARC dataset. Results are summarized in Table 3. We obtain \(\Omega_{\rm m}=0.14^{+0.02}_{-0.01}\), \(\sigma_{8}=0.96^{+0.03}_{-0.25}\), \(S_{8}=0.649^{+0.004}_{-0.135}\), \(\tilde{A}_{\rm SN1}=0.45^{+0.06}_{-0.10}\), \(\tilde{A}_{\rm SN2}=0.81^{+0.03}_{-0.31}\). \(\tilde{A}_{\rm AGN1}=0.56^{+0.12}_{-0.02}\) and \(\tilde{A}_{\rm AGN2}=1.13^{+0.03}_{-0.35}\), with an associated normalized chi-squared of \(\tilde{\chi}^{2}=2.01^{+0.52}_{-0.43}\). In the case of the SIMBA suite, we are unable to give meaningful constraints on the SN feedback parameter \(\tilde{A}_{\rm SN2}\), but we manage to constrain the two AGN feedback parameters \(\tilde{A}_{\rm AGN1}\) and \(\tilde{A}_{\rm AGN2}\), which in this case are associated to the physical properties of the SMBH jets. Once again, the cosmological parameters that are better constrained are \(\Omega_{\rm m}\) and \(S_{8}\), while \(\sigma_{8}\) has a higher associated uncertainty. However, these results are obtained at the cost of considering values of \(\Omega_{\rm m}\) that are near 0.10 and of \(\sigma_{8}\) near 1.00. We have verified that, by lowering the value of \(\Omega_{\rm m}\), the dependence of the scaling relations from the SN and AGN feedback parameters are different \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Parameter & \(R_{*,1/2}\)-\(M_{*}\) & \(f_{\rm DM}(<R_{*,1/2})\)-\(M_{*}\) & \(M_{\rm DM,1/2}\)-\(M_{*}\) & \(M_{\rm tot}\)-\(M_{*}\) & cumulative & cumulative alt. \\ \hline \(\Omega_{\rm m}\) & \(0.29^{+0.05}_{-0.04}\) & \(0.24^{+0.03}_{-0.04}\) & \(0.21^{+0.02}_{-0.02}\) & \(0.36^{+0.10}_{-0.05}\) & \(0.27^{+0.01}_{-0.05}\) & \(0.22^{+0.03}_{-0.02}\) \\ \(\sigma_{8}\) & \(0.85^{+0.11}_{-0.16}\) & \(0.83^{+0.11}_{-0.03}\) & \(0.87^{+0.10}_{-0.07}\) & \(0.95^{+0.03}_{-0.04}\) & \(0.83^{+0.08}_{-0.11}\) & \(0.92^{+0.05}_{-0.10}\) \\ \(S_{8}\) & \(0.82^{+0.14}_{-0.16}\) & \(0.78^{+0.05}_{-0.07}\) & \(0.74^{+0.06}_{-0.06}\) & \(1.05^{+0.16}_{-0.20}\) & \(0.78^{+0.03}_{-0.09}\) & \(0.79^{+0.01}_{-0.07}\) \\ \(A_{\rm SN1}\) & \(0.70^{+0.44}_{-0.32}\) & \(0.48^{+0.73}_{-0.12}\) & \(0.32^{+0.01}_{-0.01}\) & \(0.35^{+0.07}_{-0.07}\) & \(0.48^{+0.25}_{-0.16}\) & \(0.37^{+0.11}_{-0.09}\) \\ \(A_{\rm SN2}\) & \(1.17^{+0.27}_{-0.23}\) & \(1.24^{+0.44}_{-0.38}\) & \(0.76^{+0.47}_{-0.25}\) & \(1.00^{+0.10}_{-0.32}\) & \(1.21^{+0.03}_{-0.34}\) & \(0.80^{+0.60}_{-0.22}\) \\ \(A_{\rm AGN1}\) & \(2.09^{+1.46}_{-1.09}\) & \(1.41^{+1.12}_{-0.93}\) & \(1.35^{+0.62}_{-0.64}\) & \(1.26^{+2.30}_{-0.95}\) & \(2.53^{+0.89}_{-1.82}\) & \(1.36^{+1.17}_{-0.90}\) \\ \(A_{\rm AGN2}\) & \(1.39^{+0.60}_{-0.63}\) & \(1.46^{+0.33}_{-0.64}\) & \(1.25^{+0.38}_{-0.68}\) & \(0.83^{+0.47}_{-0.07}\) & \(1.31^{+0.49}_{-0.67}\) & \(1.37^{+0.43}_{-0.71}\) \\ \(\chi^{2}\) & \(60^{+41}_{-26}\) & \(22^{+17}_{-0.09}\) & \(45^{+16}_{-16}\) & \(89^{+46}_{-40}\) & \(344^{+185}_{-12}\) & \(99^{+33}_{-28}\) \\ \(\tilde{\chi}^{2}\) & \(0.25^{+0.06}_{-0.06}\) & \(0.10^{+0.03}_{-0.03}\) & \(0.14^{+0.03}_{-0.03}\) & \(0.28^{+0.12}_{-0.10}\) & \(1.23^{+0.29}_{-0.20}\) & \(0.32^{+0.08}_{-0.07}\) \\ \hline \end{tabular} \end{table} Table 2: Constraints on cosmological and astrophyiscal parameters, based on the methods detailed in Section 3.3, for the IllustrisTNG suite. The constraints are given in terms of 16th, 50th and 84th percentiles. 
The ’cumulative alt.’ column values are the cumulative chi-squared results obtained by considering only the internal dark matter scaling relations. Figure 5: Same as Fig. 2, but comparing the camels best-fit simulations. with respect to what was shown in Figure 2, which was evaluated for the reference cosmology. In particular, the dependence on the AGN feedback parameters is stronger. A possible motivation could be that, for a lower value of \(\Omega_{\rm m}\), the halos are less massive and thus the momentum transfer between the SMBH jets and the host galaxy particles is more effective in SIMBA compared to IllustrsTNG, in which the isotropic kinetic feedback results in isotropic winds with lower velocities. Similarly to the results from Table 1, we again find that SIMBA shows, on average, a worse agreement with the observed values than IllustrsTNG, having a higher cumulative \(\hat{\chi}^{2}\) value. ### ASTRID simulations' best-fit to the observations Finally, we considered all the 1061 'LH' and '1P' simulations of the ASTRID suite, and performed the same analysis as in Sections 3.3 and 3.4. We found that the best-fit ASTRID simulation for all the SPARC observed trends is the simulation 'LH-474', having the following parameters: \(\Omega_{\rm m}=0.46\), \(\sigma_{8}=0.97\), \(S_{8}=1.20\), \(\hat{A}_{\rm SN1}=0.42\), \(\hat{A}_{\rm SN2}=0.59\), \(\hat{A}_{\rm AGN1}=1.58\) and \(\hat{A}_{\rm AGN2}=0.64\), with a normalized chi-squared of \(\hat{\chi}^{2}=1.52\). The third column of Fig. 5 shows the comparison between this simulation and the observed SPARC trends. We also performed the bootstrap analysis detailed in Section 3.3, on both ASTRID simulations and the SPARC dataset. Results are summarized in Table 4. We obtain \(\Omega_{\rm m}=0.44^{+0.02}_{-0.15}\), \(\sigma_{8}=0.81^{+0.15}_{-0.17}\), \(S_{8}=0.85^{+0.36}_{-0.06}\), \(\hat{A}_{\rm SN1}=0.41^{+0.34}_{-0.17}\), \(\hat{A}_{\rm SN2}=0.61^{+0.12}_{-0.04}\), \(\hat{A}_{\rm AGN1}=2.49^{+0.56}_{-0.90}\) and \(\hat{A}_{\rm AGN2}=0.62^{+0.15}_{-0.09}\), with a normalized chi-squared of \(\hat{\chi}^{2}=1.65^{+0.58}_{-0.40}\). In the case of the ASTRID suite, we have that both \(\Omega_{\rm m}\) and \(\sigma_{8}\) have large uncertainties, with the upper uncertainty on \(\Omega_{\rm m}\) lower than the one of \(\sigma_{8}\). The value of \(\Omega_{\rm m}\) is very high with respect to the values found in both IllustrsTNG and SIMBA results, while the value of \(\sigma_{8}\) is compatible with both Planck Collaboration et al. (2020) and Hinshaw et al. (2013) results. As far as the astrophysical parameters are concerned, both SN-feedback parameters are significantly lower than the fiducial value, with \(\hat{A}_{\rm SN2}\) showing lower uncertainty than \(\hat{A}_{\rm SN1}\). We also find that the parameter \(\hat{A}_{\rm AGN1}\) is poorly constrained, while we cannot constrain the parameter \(\hat{A}_{\rm AGN2}\). As shown in Fig. 5, these results are obtained at the cost of having a value of \(\Omega_{\rm m}\) close to 0.40 and all galaxies confined in a small region around \(M_{*}\sim 2\times 10^{9}\)\(M_{\odot}\). 
The latter is a very similar behavior to the simulation with \(\hat{A}_{\rm SN2}=0.50\), shown in Appendix B, which could imply that it is an effect associated to the fact that, differently from equation (3), the wind velocity in ASTRID does not have a wind velocity floor, thus allowing very low values of \(v_{\rm w}\), or could be an effect that depends on other parameters, for example a low value of \(\hat{A}_{\rm AGN2}\) (or a mix of these causes). A speculative mechanism that tries to explain why only low-mass LTGs remain in these simulations is presented in Section 4.2. ### Wind mass loading analysis Given that the wind mass loading factor is one of the principal quantities that is influenced by the SN feedback parameters and enters in any chemical evolution model (e.g., Peeples and Shankar, 2011; Tortora et al., 2022), it is important to check its trends and compare the results from both suites and with literature results. In the left panel of Fig. 6 we show the mass loading factor at injection from the IllustrsTNG suite, taken by evaluating Eqs. (2) and (3) numerically for each galaxy by using \(\overline{\sigma}_{\rm DM}\) and \(Z\), as a function of the maximum velocity of the rotation curve, \(V_{\rm max}\). These trends are obtained by considering both the fiducial simulation and the best-fit simulation as determined in Section 3.3. It emerges that the IllustrsTNG best fit trend is, on average, 0.60 dex lower than the fiducial counterpart, with respect to \(\eta_{\rm w}\) values. As discussed in Sec. 3.2, this discrepancy can be explained by the fact that, in the best-fit simulations, galactic wind outflows are overall less energetic. We find that the fiducial simulation is compatible with trends measured in hydrodynamical simulations described in Dave et al. (2011) and Muratov et al. (2015) at high values of \(V_{\rm max}\), but is totally incompatible with the empirical determinations of mass loading factor, inferred from measurements of the mass-metallicity relation, presented in Peeples and Shankar (2011), Lilly et al. (2013) or Zahid et al. (2014), while the best-fit 'LH-698' simulation is placed in between the trends of Muratov et al. (2015) and Zahid et al. (2014), and is compatible with the former at low values of \(V_{\rm max}\). The right panel of Fig. 6 shows instead the comparison between the three simulation suites' wind mass loading trends as a function of maximum rotational velocity. The trends from SIMBA have been obtained by plotting the \(\eta_{\rm w}\), evaluated for each galaxy by considering the respective stellar mass values, against the associated \(V_{\rm max}\) values, while the trends from ASTRID have been obtained in a manner similar to IllustrsTNG, but using equations (10) and (11) instead. 
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Parameter & \(R_{*,1/2}\)-\(M_{*}\) & \(f_{\rm DM}(<R_{*,1/2})\)-\(M_{*}\) & \(M_{\rm DM,1/2}\)-\(M_{*}\) & \(M_{\rm max}\)-\(M_{*}\) & cumulative \\ \hline \(\Omega_{\rm m}\) & \(0.45^{+0.01}_{-0.25}\) & \(0.14^{+0.01}_{-0.01}\) & \(0.11^{+0.01}_{-0.01}\) & \(0.46^{+0.03}_{-0.05}\) & \(0.14^{+0.02}_{-0.01}\) \\ \(\sigma_{8}\) & \(0.961^{+0.002}_{-0.02}\) & \(0.98^{+0.02}_{-0.23}\) & \(0.76^{+0.24}_{-0.01}\) & \(0.96^{+0.02}_{-0.06}\) & \(0.96^{+0.03}_{-0.25}\) \\ \(S_{8}\) & \(1.18^{+0.004}_{-0.045}\) & \(0.65^{+0.00}_{-0.14}\) & \(0.49^{+0.17}_{-0.03}\) & \(1.18^{+0.06}_{-0.06}\) & \(0.649^{+0.004}_{-0.135}\) \\ \(\tilde{A}_{\rm SN1}\) & \(2.47^{+0.04}_{-2.12}\) & \(0.40^{+0.18}_{-0.05}\) & \(0.44^{+0.20}_{-0.09}\) & \(1.49^{+0.02}_{-0.98}\) & \(0.45^{+0.06}_{-0.10}\) \\ \(\tilde{A}_{\rm SN2}\) & \(1.63^{+0.21}_{-0.95}\) & \(0.78^{+1.06}_{-0.28}\) & \(0.71^{+0.83}_{-0.21}\) & \(0.69^{+0.43}_{-0.17}\) & \(0.81^{+1.03}_{-0.31}\) \\ \(\tilde{A}_{\rm AGN1}\) & \(0.77^{+0.16}_{-0.50}\) & \(0.68^{+0.00}_{-0.14}\) & \(0.68^{+0.55}_{-0.20}\) & \(0.65^{+0.87}_{-0.26}\) & \(0.56^{+0.12}_{-0.02}\) \\ \(\tilde{A}_{\rm AGN2}\) & \(1.28^{+0.32}_{-0.71}\) & \(1.16^{+0.01}_{-0.04}\) & \(1.16^{+0.10}_{-0.20}\) & \(1.30^{+0.35}_{-0.59}\) & \(1.13^{+0.03}_{-0.35}\) \\ \(\chi^{2}\) & \(34^{+20}_{-13}\) & \(27^{+13}_{-12}\) & \(22^{+12}_{-06}\) & \(63^{+33}_{-21}\) & \(261^{+109}_{-083}\) \\ \(\hat{\chi}^{2}\) & \(0.14^{+0.06}_{-0.04}\) & \(0.21^{+0.10}_{-0.07}\) & \(0.23^{+0.06}_{-0.08}\) & \(0.19^{+0.07}_{-0.06}\) & \(2.01^{+0.52}_{-0.43}\) \\ \hline \end{tabular} \end{table} Table 3: Same as Table 2, but for the SIMBA suite. The constraints are given in terms of 16th, 50th and 84th percentiles. For evaluating the ASTRID points, we also had to use the cumulative velocity dispersion, \(\sigma\), instead of \(\overline{\sigma}_{\rm DM}\), because the values of the one-dimensional local dark matter velocity dispersion for each star particle are not provided for ASTRID in the camels suite. We have checked with direct comparisons in IllustrisTNG that the difference between using \(\sigma\) or \(\overline{\sigma}_{\rm DM}\) on the mass loading values amounts to an overestimate of the mass loading values of no more than 0.2 dex when using \(\sigma\), compared to using \(\overline{\sigma}_{\rm DM}\). As one can see, the fiducial SIMBA mass loading values are, on average, 0.51 dex higher and shifted towards higher velocities than the IllustrisTNG fiducial trend. The best-fit trend tends to agree better with the IllustrisTNG simulations, but (as we saw in Section 3.4) this is achieved by using unreasonable values of the cosmological parameters, along with a lower wind mass loading factor parameter, \(\tilde{A}_{\rm SN1}\). The very low value of \(\Omega_{\rm m}\) in the best-fit simulation is strongly impacting the formation of very massive halos, preventing their formation, contrary to what happens in the reference SIMBA simulation. It should be noted that, as discussed in Section 2.2, the discrepancy between Muratov et al. (2015)'s mass loading trend (orange curve in the left panel of Fig. 6) and the one from SIMBA's fiducial simulation (green regions in the right panel of Fig. 6) is due to the fact that SIMBA uses Angles-Alcazar et al. (2017)'s mass loading trend, which has double the amplitude of the mass loading in Muratov et al. (2015) due to how the two mass loadings are evaluated. 
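The average offsets quoted above (e.g. the \(\sim 0.6\) dex shift between the best-fit and fiducial IllustrisTNG loadings, or the \(\sim 0.5\) dex excess of the fiducial SIMBA values) can be quantified, for instance, by comparing binned median trends over the overlapping range of \(\log_{10}V_{\rm max}\). The sketch below is only one possible implementation, with illustrative binning choices and array names.

```python
import numpy as np

def binned_median(logv, logeta, edges):
    """Median log10(eta_w) in bins of log10(V_max); sparse bins are skipped."""
    centres, med = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (logv >= lo) & (logv <= hi)
        if np.sum(sel) >= 3:                 # illustrative occupancy threshold
            centres.append(0.5 * (lo + hi))
            med.append(np.median(logeta[sel]))
    return np.array(centres), np.array(med)

def mean_dex_offset(vmax_a, eta_a, vmax_b, eta_b, n_bins=10):
    """Average vertical offset, in dex, of trend A relative to trend B, evaluated
    on the overlapping log10(V_max) range of the two binned median trends."""
    la, ea = np.log10(vmax_a), np.log10(eta_a)
    lb, eb = np.log10(vmax_b), np.log10(eta_b)
    edges = np.linspace(max(la.min(), lb.min()), min(la.max(), lb.max()), n_bins + 1)
    ca, ma = binned_median(la, ea, edges)
    cb, mb = binned_median(lb, eb, edges)
    grid = np.linspace(edges[0], edges[-1], 50)
    return np.mean(np.interp(grid, ca, ma) - np.interp(grid, cb, mb))
```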
The fiducial ASTRID mass loading values are instead compatible with the IllustrisTNG best-fit simulation values, while the mass loading trend associated to the best-fit analysis is positioned at much lower rotational velocities, and much higher values of the mass loading. This inverted behavior with respect to SIMBA and IllustrisTNG seems to point to some kind of issue with the best-fit simulation detected by our methods, perhaps concerning the lack of a wind velocity floor in equation (11), which produces very low values at the denominator in equation (10). In fact, the median of the wind velocity distribution for the ASTRID best-fit simulation is 40 km/s, while for IllustrisTNG we obtain 350 km/s. We also performed a linear regression (in log-space) of the wind mass loading factor at injection trends, both for the fiducial and the best-fit simulation. Outflows powered by stellar feedback are thought to be driven a) either by momentum, injected into the ISM by massive stellar winds and SNe through radiation pressure, with a power-law scaling \(\eta_{\rm w}\propto V_{\rm max}^{-1}\), or b) by energy, injected into the ISM by massive stars and core-collapse SNe, in which case the scaling is \(\eta_{\rm w}\propto V_{\rm max}^{-2}\)(see Dekel & Silk, 1986; Murray et al., 2005; Hopkins et al., 2012). We obtain for the fiducial simulation: \[\log_{10}(\eta_{\rm w})=(-2.36\pm 0.08)\log_{10}(V_{\rm max})+(5.5\pm 0.2), \tag{14}\] while for the best-fit simulation we obtain: \[\log_{10}(\eta_{\rm w})=(-1.99\pm 0.07)\log_{10}(V_{\rm max})+(4.2\pm 0.1), \tag{15}\] which has a slightly shallower slope than the fiducial trend, closer to the theoretical \(V_{\rm max}^{-2}\) trend. These trends are shown as black dashed (fiducial) and dotted (best-fit) curves in Fig. 6. In literature (Muratov et al., 2015), a double power-law trend is used to describe analytically the wind mass loading trend as a function of \(V_{\rm max}\). For simplicity, we used a simple power-law, since we do not have many low-velocity galaxies due to our selection criteria. We performed a linear regression also of the SIMBA and ASTRID wind mass loading factor trends. For SIMBA, we obtain for the fiducial simulation: \[\log_{10}(\eta_{\rm w})=(-2.90\pm 0.11)\log_{10}(V_{\rm max})+(7.6\pm 0.3), \tag{16}\] while for the best-fit simulation we get: \[\log_{10}(\eta_{\rm w})=(-1.27\pm 0.15)\log_{10}(V_{\rm max})+(3.1\pm 0.3). \tag{17}\] For ASTRID instead, we obtain for the fiducial simulation: \[\log_{10}(\eta_{\rm w})=(-2.11\pm 0.02)\log_{10}(V_{\rm max})+(4.75\pm 0.04), \tag{18}\] while for the best-fit simulation we get: \[\log_{10}(\eta_{\rm w})=(-2.43\pm 0.01)\log_{10}(V_{\rm max})+(6.25\pm 0.03) \tag{19}\] We caution on the slope values of the SIMBA and ASTRID best fit-simulations, since they are obtained only from galaxies within a tight range of velocities at \(V_{\rm max}\lesssim 150\) km/s. This is especially true for ASTRID, given the peculiar behavior described above. 
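The fits of Eqs. (14)-(19) amount to an ordinary least-squares regression of \(\log_{10}\eta_{\rm w}\) on \(\log_{10}V_{\rm max}\); a minimal sketch is given below, where the quoted uncertainties are the naive covariance-based ones and need not coincide with the estimator actually used for the values above.

```python
import numpy as np

def fit_mass_loading(vmax, eta_w):
    """Least-squares fit of log10(eta_w) = a * log10(V_max) + b, i.e. a single
    power law as in Eqs. (14)-(19), with 1-sigma errors from the fit covariance."""
    x, y = np.log10(vmax), np.log10(eta_w)
    coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
    slope, intercept = coeffs
    slope_err, intercept_err = np.sqrt(np.diag(cov))
    return (slope, slope_err), (intercept, intercept_err)

# Usage: (a, da), (b, db) = fit_mass_loading(vmax_array, eta_w_array)
```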
\begin{table} \begin{tabular}{c c c c c c} \hline \hline Parameter & \(R_{*,1/2}\)-\(M_{*}\) & \(f_{\rm DM}(<R_{*,1/2})\)-\(M_{*}\) & \(M_{\rm DM,1/2}\)-\(M_{*}\) & \(M_{\rm tot}\)-\(M_{*}\) & cumulative \\ \hline \(\Omega_{\rm m}\) & \(0.41^{+0.03}_{-0.09}\) & \(0.27^{+0.22}_{-0.14}\) & \(0.44^{+0.00}_{-0.26}\) & \(0.13^{+0.24}_{-0.01}\) & \(0.44^{+0.02}_{-0.15}\) \\ \(\sigma_{8}\) & \(0.87^{+0.05}_{-0.22}\) & \(0.85^{+0.12}_{-0.18}\) & \(0.64^{+0.25}_{-0.00}\) & \(0.87^{+0.06}_{-0.30}\) & \(0.81^{+0.15}_{-0.17}\) \\ \(S_{8}\) & \(0.98^{+0.03}_{-0.20}\) & \(0.80^{+0.37}_{-0.30}\) & \(0.78^{+0.11}_{-0.23}\) & \(0.57^{+0.29}_{-0.13}\) & \(0.85^{+0.36}_{-0.06}\) \\ \(\tilde{A}_{\rm SN1}\) & \(0.38^{+0.07}_{-0.13}\) & \(0.89^{+2.39}_{-0.52}\) & \(0.25^{+0.66}_{-0.00}\) & \(1.59^{+0.39}_{-0.51}\) & \(0.41^{+0.34}_{-0.17}\) \\ \(\tilde{A}_{\rm SN2}\) & \(0.55^{+0.18}_{-0.04}\) & \(0.63^{+1.06}_{-0.06}\) & \(0.73^{+0.00}_{-0.19}\) & \(0.64^{+0.19}_{-0.03}\) & \(0.61^{+0.12}_{-0.04}\) \\ \(\tilde{A}_{\rm AGN1}\) & \(1.58^{+0.91}_{-0.79}\) & \(0.82^{+1.14}_{-0.47}\) & \(2.48^{+0.00}_{-1.61}\) & \(0.73^{+0.29}_{-0.09}\) & \(2.49^{+0.56}_{-0.90}\) \\ \(\tilde{A}_{\rm AGN2}\) & \(0.58^{+0.09}_{-0.05}\) & \(0.81^{+0.48}_{-0.25}\) & \(0.52^{+0.21}_{-0.00}\) & \(0.56^{+0.20}_{-0.0}\) & \(0.62^{+0.05}_{-0.09}\) \\ \(\chi^{2}\) & \(11^{+16}_{-10}\) & \(3^{+24}_{-4}\) & \(2^{+10}_{-1}\) & \(1^{+2}_{-1}\) & \(11^{+42}_{-1}\) \\ \(\tilde{\chi}^{2}\) & \(0.07^{+0.03}_{-0.03}\) & \(0.07^{+0.06}_{-0.04}\) & \(0.06^{+0.02}_{-0.02}\) & \(0.06^{+0.05}_{-0.03}\) & \(1.65^{+0.58}_{-0.40}\) \\ \hline \end{tabular} \end{table} Table 4: Same as Table 2, but for the ASTRID suite. The constraints are given in terms of 16th, 50th and 84th percentiles. ## 4 Discussion In this paper we have built a new method for constraining astrophysical and cosmological parameters, by comparing scaling relations predicted by simulations with those constructed with the data. Depending on very specific details of the simulations analyzed, as for example the cosmological parameters and the recipes adopted for the SN feedback, the selection criteria adopted, we generated samples of simulated galaxies more or less abundant and characterized by a wide range of physical properties. We have optimized the use of this wealth of information to constrain first the astrophysical processes, for instance, wind energy, wind velocity and wind mass loading and the AGN-related parameters, and also cosmological parameters, such as \(\Omega_{\rm m}\), \(\sigma_{8}\) and \(S_{8}\). ### Discrepancy between fiducial simulations and observations This analysis has firstly allowed us to highlight a strong discrepancy among the three suites implemented in the camels simulations. In the literature, it is already known that the IllustrisTNG simulation systematically underestimates the SPARC trends in regards to the stellar-to-halo mass relations \(M_{*}/M_{\rm tot}\)-\(M_{*}\) and \(M_{*}/M_{\rm tot}\)(Romeo et al., 2020), in agreement with our findings. The comparison between IllustrisTNG, SIMBA and ASTRID fiducial simulations shows that the former aligns more closely to the observed SPARC scaling relations than the latter two, especially for those relations that involve internal quantities. 
More specifically, SIMBA shows a systematically larger DM mass and DM fraction in the central regions, compared with IllustrisTNG, while ASTRID also shows systematically larger stellar half-mass radii for all values of \(M_{*}\), and higher total masses with respect to SPARC for high values of \(M_{*}\) (\(\sim 10^{10.5}\)\(M_{\odot}\)). Some discrepancies of SIMBA with observations have been noted in both Dave et al. (2019) and Glowacki et al. (2020). In the first, it is reported that SIMBA fails to reproduce correctly the stellar mass function at \(z=0\), the sizes of quenched low-mass galaxies and the production of stellar metallicity, as well as sSFR, in low-mass star-forming galaxies. In the second, it is reported that SIMBA produces galaxies which are overly bulge-dominated, due to the implementation of the feedback from star formation. Moreover, in Marasco et al. (2020), it is noted that there is a strong discrepancy in the stellar-to-dark matter ratio of simulated to observed systems, which extends into the innermost regions of galaxies. A possible explanation for this discrepancy could be the fact that galactic winds associated with SN feedback in SIMBA do not interact with gas particles from the ISM, due to hydrodynamic decoupling implemented in the simulation (see Angles-Alcazar et al., 2017). As reported in Glowacki et al. (2020), this could lead to overly bulge-dominated star-forming galaxies, with a corresponding overdensity of dark matter particles in the central regions. Another explanation for the discrepancy could be a higher wind mass loading contribution in SIMBA simulations, as shown in the right panel of Fig. 6. Strong baryonic mass ejections from the internal regions in SIMBA, along with effects from hydrodynamical decoupling, could skew the DM fraction evaluated within the stellar half-mass radius towards higher values. The systematic increase in the central DM mass instead seems to be a long-lasting issue of hydrodynamical simulations (Navarro and Steinmetz, 2000; Marasco et al., 2020), which seems to not be explicable without demanding substantial revisions of the simulated model of structure formations. There could also be an increased effect of adiabatic contraction in galaxies for SIMBA simulations: the fact that baryonic infall drags towards the center of the galaxy the DM particles more intensely in SIMBA simulations than in IllustrisTNG Figure 6: _Left panel_. Wind mass loading at injection from IllustrisTNG fiducial simulation (blue points) and from the best-fit ‘LH-698’ simulation (blue squares), compared with mass loading factors measured in the hydrodynamical simulations described in Dave et al. (2011) and Muratov et al. (2015) (red and orange lines) and extracted from Beifior et al. (2019). We also show the empirical determinations of the mass loading factor inferred from measurements of the local mass-metallicity relation by Peeples and Shankar (2011) (ocher line), Lilly et al. (2013) (dark violet line) and Zahid et al. (2014) (dark blue line). Dashed and dotted black lines are the best fit trends for the fiducial and best-fit IllustrisTNG simulations, respectively. _Right panel_. Comparison between the wind mass loading at injection trends obtained from the IllustrisTNG fiducial simulation (blue points), the IllustrisTNG best-fit simulation (blue squares), the SIMBA fiducial simulation (green points), the SIMBA best-fit simulation (green squares), the ASTRID fiducial simulation (orange points) and the ASTRID best-fit simulation (orange squares). 
simulations could be a possible explanation of this strong increase in central DM mass (Gnedin et al., 2004; Napolitano et al., 2010). Finally, concerning the ASTRID fiducial simulation, at fixed stellar masses, we find systematically high values for all the physical quantities investigated: galaxies are larger than observations and contain more dark matter. It is not clear yet what could be the reason behind the observed discrepancies with the observed SPARC trends. A more detailed analysis of the physical reasons behind this discrepancy is deferred to future CASCO papers. ### One-parameter variation results' interpretations To evaluate the impact of SN and AGN feedback on scaling relations, we have also investigated, for a fiducial cosmology, the impact of varying the astrophysical parameters. For the '1P' simulations analysis, the main result is that scaling relation normalization in IllustrisTNG seems to be mainly affected by the supernova feedback parameter \(A_{\rm SN1}\), with lower values of the wind energy per unit SFR being associated, at fixed stellar mass, to lower values of stellar half-mass radius, internal DM mass and fractions, total mass and higher stellar masses per galaxy, shifting the correlations to match the observations. These results are not surprising: lower SN feedback implies a higher number of stars formed per galaxy, which implies a higher stellar mass, which is one of the physical driver of the changes in the correlations (see Figs. 1 and 4). However, more energetic winds are expected to push the gas to larger distances if compared to a weaker feedback, altering the gravitational potential. More/less energetic winds, indeed, increase/reduce half-mass radii, and consequently DM mass and DM fraction. The impact of the SN feedback parameter \(A_{\rm SN2}\) on the correlation between the stellar half-mass radius and stellar mass is inverted, with lower values of \(A_{\rm SN2}\) associated with higher stellar half-mass radii, at fixed stellar mass, and a larger number of galaxies at the high-mass end. A possible answer for this could be the fact that, in the definition of wind mass loading, wind speed at injection is at the denominator, so that lower values of \(A_{\rm SN2}\) have an opposite effect on \(\eta_{\rm W}\) with respect to a decrease in \(A_{\rm SN1}\). This means that lower values of wind speeds at injection are associated with a higher amount of galactic outflows, implying higher stellar half-mass radii. The impact on the other correlations is mild. The analysis on the '1P' simulation set also shows that the effects of varying \(A_{\rm AGN1}\) and \(A_{\rm AGN2}\) on all the scaling relation trends are negligible. This could be because, in the \([10^{9},~{}10^{11}]\) M\({}_{\odot}\) stellar mass range, AGN feedback effects are weaker with respect to SN feedback. In Irodotou et al. (2022) for example, it is noted that AGN feedback mechanisms mainly influence the star distribution, star formation and gas outflows in the central kiloparsec regions, and only slightly affect the total-stellar mass scaling relation of barred, Milky Way-like galaxies, which lie at the higher end of the mass interval considered in this paper. In the case of SIMBA and ASTRID, the figures relative to the 1P analysis have been shown and discussed in detail in Appendix B. 
For the SIMBA suite, considering again the reference cosmological parameters, the dependence of the scaling relations on the wind mass loading is negligible, while some variations are seen with respect to the wind velocity. By combining the wind velocity and the wind mass loading, we also see that an increase of \(\tilde{A}_{\rm SN1}\) or \(\tilde{A}_{\rm SN2}\) correspond to larger wind energy. The wind velocity seems to have a larger impact than in IllustrisTNG, and inverted, with slower winds producing smaller DM and total mass. The large central DM mass, produced by SIMBA in this reference cosmology, could be the possible cause of such independence or small dependence of scaling relations by the SN feedback parameters. There could also be a saturation effect with respect to the scaling relation trends in the SIMBA simulations at high values of \(\Omega_{\rm m}\), preventing the wind mass loading to show its effect on the trends. Indeed, we have verified that lowering the value of \(\Omega_{\rm m}\) to \(\sim 0.10\) creates a greater variation in scaling relation trends than that which can be seen in Fig. 12. This can also be slightly seen in Fig. 12, where we need values of \(\Omega_{\rm m}\) close to \(0.10\) to have a reconciliation with the observations. This potential saturation effect could then explain the low response to the variations of \(\tilde{A}_{\rm SN1}\) that we see in Fig. 12 for the SIMBA simulations. For the ASTRID suite, one has to be very careful in tracing the actual effects on the scaling relations of each parameter. For \(\tilde{A}_{\rm SN1}\), the effects on the scaling relations are similar to the case of IllustrisTNG, but with the difference that lower values of \(\tilde{A}_{\rm SN1}\) correspond to a lower number of LTG galaxies having high stellar mass. This could be a selection effect: a lower value of \(\tilde{A}_{\rm SN1}\) could indirectly affect the sSFR in such a way as to convert most of the high stellar mass galaxies into passive galaxies. For \(\tilde{A}_{\rm SN2}\), the main effect of decreasing this parameter seems to be an increase in the formation of LTG galaxies with respect to simulations with higher values of the parameter, but also a reduction of the galaxies' stellar masses to around \(M_{*}\sim 2\times 10^{9}\,M_{\odot}\). A speculative explanation for this effect could come by reading the right panel in figure 4 of Ni et al. (2023). Here, we see that the star formation rate density (SFRD) in ASTRID is systematically higher at higher redshifts than the other two simulations. This implies that galaxies in ASTRID started to form stars much faster than in IllustrisTNG and SIMBA. For low values of \(\tilde{A}_{\rm SN2}\) then, gas remains trapped more easily in galaxies due to lower wind outflow velocities, and due to the very high SFRD in their past histories, these galaxies started to convert gas in stars and exhaust cold gas faster than in IllustrisTNG and SIMBA. In the end, at \(z=0\) in ASTRID only low stellar mass galaxies will remain with enough gas content left to form stars, which will have a higher \(M_{\rm g}/M_{*}\) ratio than the galaxies at high mass. This has been verified by plotting \(M_{\rm g}/M_{*}\) for all ASTRID galaxies. As far as the AGN feedback in ASTRID is concerned, the thermal mode seems to affect the scaling relations more than the kinetic mode. In particular, as detailed in Ni et al. 
(2023), a higher value of \(\tilde{A}_{\rm AGN2}\) is associated to an heightened star-formation, due to a positive feedback induced by the fact that larger values of \(\tilde{A}_{\rm AGN2}\) suppress the formation of massive black holes, which brings less baryonic suppression on the total matter power spectrum. This is seen in Fig. 12, in that the simulation with \(\tilde{A}_{\rm AGN2}=0.25\) has a very low number of LTG galaxies present, which do not present a very high extension in stellar mass, while the simulation with \(\tilde{A}_{\rm AGN2}=4.00\) shows LTG galaxies even at \(M_{*}\geq 10^{11}\,M_{\odot}\). We would like to comment that, as reported in Ni et al. (2023), due to the intricacy of how feedback processes effects are conflated numerically, one should try to view the astrophysical parameters not as the numerical amount of feedback that a simulation manifests with respect to the fiducial simulations, but as the modulation of various processes, that lead to variations in many different physical quantities. For example, in ASTRID the matter power spectrum is sensitive to both \(\tilde{A}_{\rm SN2}\) and \(\tilde{A}_{\rm AGN2}\), while the global galaxy properties are mainly driven, indirectly, by the \(\tilde{A}_{\rm SN2}\) parameter. ### Discussion on bootstrap procedure results and comparison with literature We have also developed a method to quantify the agreement between simulations and data, by performing a \(\chi^{2}\) minimization. We have shown that the constraining power of the analyzed scaling relations is stronger on \(\Omega_{\rm m}\), while the dependence on \(\sigma_{\rm S}\) is milder. However, it is vital for our approach to check the consistency with independent and more robust cosmological parameter probes. In fact, our best-fitted results for the IllustrisTNG suite are in good agreement with almost all the cosmological results presented in literature, as shown in Fig. 7. We constrain the cosmological parameters with an average precision of 10 per cent, and the quite good agreement with results based on cosmological probes gives credibility to our results and to the constraints on the astrophysical parameters. While the errors on \(\Omega_{\rm m}\) are very small (11 percent), in perfect agreement with the estimates obtained using IllustrisTNG, the uncertainty for \(\sigma_{\rm S}\) is of \(\sim 15\) per cent, with a predominant tail towards lower values. SIMBA shows an agreement within \(1\sigma\) with all literature measurements only for \(\sigma_{\rm S}\), while there is agreement only with the \(\Omega_{\rm m}\) result presented in Hikage et al. (2019). ASTRID, finally, is in agreement within 1\(\sigma\) with the Planck Collaboration et al. (2020) results for all three cosmological parameters, albeit with large error bars that are strongly skewed towards low values of \(\Omega_{\rm m}\) and very high values of \(S_{8}\). Regarding the astrophysical parameters, we find that IllustrisTNG shows a constraint of \(A_{\rm SN1}\) which is in tension with the fiducial unit value by more than \(2\sigma\), which directly indicates that the mass loading of the IllustrisTNG simulations must be lowered to allow compatibility with the observations. This is consistent with the findings of Jo et al. (2023), who find a bimodal posterior distribution for \(A_{\rm SN1}\) for which the highest peak is below unity, near \(0.50\). The value of \(A_{\rm SN2}\) is instead compatible with the fiducial value within \(1\sigma\). 
Due to the fact that AGN feedback processes in IllustrisTNG have a negligible effect on the scaling relations, we find that it is not possible to constrain the parameters \(A_{\rm AGN1}\) and \(A_{\rm AGN2}\). The results obtained are in agreement with those derived via machine learning approach applied on single galaxies in Villaescusa-Navarro et al. (2022) and Echeverri et al. (2023). In the case of the SIMBA suite instead, we have a match with observations only for unreasonably low \(\Omega_{\rm m}\) (= 0.14) and high \(\sigma_{\rm S}\) (= 0.96) values of the cosmological parameters. This tendency towards having extreme values of the cosmological parameters could be an effect of the potential saturation at high values of \(\Omega_{\rm m}\) in SIMBA simulations that we described before, in that we need low values of \(\Omega_{\rm m}\) first to be able to break the saturation, and then a variation in astrophysical parameters at fixed (low) values of \(\Omega_{\rm m}\) to better fit the observations. Regarding the astrophysical parameters, we find in the case of SIMBA that the AGN feedback parameters \(\tilde{A}_{\rm AGN1}\) and \(\tilde{A}_{\rm AGN2}\) are better constrained, while we cannot constrain the \(\tilde{A}_{\rm SN2}\) parameter. As discussed in Section 3.4, this result can be explained considering that, for cosmologies with lower values of \(\Omega_{\rm m}\), the scaling relations are more strongly affected by AGN feedback. Finally, in the case of the ASTRID suite, we have a match with observations only with unrealistic mass distributions, where all galaxies are concentrated around \(M_{*}\sim 2\times 10^{9}\,M_{\odot}\). This is because the \(\chi^{2}\) minimization finds the simulations with the best trends, and in ASTRID all simulations which have LTGs at high stellar mass do not reproduce the observed relations as well as the ones with clustered galaxies around the median. These simulations all have low values of \(\tilde{A}_{\rm SN2}\), \(\tilde{A}_{\rm AGN2}\) and high values of \(\Omega_{\rm m}\), giving an indication that these parameters are primarily responsible for this behavior in the simulations. Overall, the best constraints seem to come from the IllustrisTNG suite, which does not show the problems that SIMBA and ASTRID manifested during this analysis. ### Wind mass loading discussion Regarding the mass loading analysis, to explain the discrepancies between the simulations and the literature results shown in the left panel of Fig. 6, one must first distinguish the different approaches with which mass loading factors are considered in literature. As reported in Belfiore et al. (2019), the first approach considers the so-called'mass loading factor at injection', which means that the state of the outflowing gas is directly related to an ongoing star formation event. This is the approach that is also used in hydrodynamical simulations which use a sub-grid for launching winds, such as camels (see Pillepich et al., 2018). The second one considers a 'time-averaged cumulative mass loading factor', which is the ratio between the star formation rate and the amount of gas leaving the galaxy's halo over a defined time-scale (see Muratov et al., 2015). Usually, the cumulative mass loading factor is up to an order of magnitude lower than the instantaneous loading factor, which could explain why the results from Peeples & Shankar (2011), Lilly et al. (2013) and Zahid et al. 
(2014) are systematically lower than both our results and the hydrodynamical simulations' results from Dave et al. (2011) and Muratov et al. (2015). Given that the empirically determined values depend on the metallicity calibrations and oxygen nucleosynthetic yields, changing these two parameters in the observations could give higher loading factors than the ones shown in Fig. 6. As far as the mass loading trends from the simulations are concerned, in both IllustrisTNG and SIMBA suites there is a tendency for the best-fit simulations to decrease their mass loading values with respect to the corresponding fiducial simulations, which brings them closer to the hydrodynamical simulation results from Dave et al. (2011) and Muratov et al. (2015). Only in the case of the ASTRID simulation we observe the reverse, in that the best-fit simulation has unusually high mass loading values and low maximum rotation velocities compared to the fiducial simulation, which is instead closer to the literature results. Along with the unrealistic mass distribution discussed previously, this result for the mass loading in ASTRID reinforces the idea that the simulation that better reproduces the SPARC trends from ASTRID is physically unrealistic. ## 5 Conclusions In this work, we have introduced the project _CASCO: Cosmological and AStrophysical parameters from Cosmological simulations and Observations_, which aims at comparing simulations and observations for constraining cosmological parameters and astrophysical processes. In this first paper of the series, we compare various scaling relations for star-forming galaxies, taken from the IllustrisTNG, SIMBA and ASTRID subgrid-based suites of the camels simulations (Villaescusa-Navarro et al., 2021), with observed data from the star-forming galaxy catalog SPARC (Lelli et al., 2016). The simulated sample consists, for each simulation, of all those galaxies having \(R_{*,1/2}>\epsilon_{\rm min}\), \(N_{*,1/2}>50\) and \(f_{\rm DM}(<R_{*,1/2})>0\), while the observed SPARC sample is made up by 152 star-forming galaxies, binned with respect to the stellar mass. The scaling relations considered are the size-mass relation (\(R_{*,1/2}\)-\(M_{*}\)), the internal DM fraction against stellar mass (\(f_{\rm DM}(<R_{*,1/2})\)-\(M_{*}\)), the internal DM mass against stellar mass (\(M_{\rm DM,1/2}\)-\(M_{*}\)) and the total-stellar mass relation (\(M_{\rm tot}\)-\(M_{*}\)). * We started by comparing the fiducial simulations (\(\Omega_{\rm m}=0.30\), \(\sigma_{\rm S}=0.80\), \(A_{\rm SN1}=A_{\rm AGN1}=A_{\rm SN2}=A_{\rm AGN2}=1.00\)) of the three simulation suites. IllustrisTNG shows a better agreement with the observed scaling relation trends, especially in regards to trends involving internal quantities, e.g. \(M_{\rm DM,1/2}\), with a cumulative (i.e. sum of all the contributions from the single scaling relations) normalized chi-squared of \(\tilde{\chi}^{2}=2.33\) for IllustrisTNG, against \(\tilde{\chi}^{2}=6.20\) for the SIMBA fiducial simulation and \(\tilde{\chi}^{2}=8.49\) for the ASTRID fiducial simulation. * We then proceeded by varying the two cosmological parameters, \(\Omega_{\rm m}\) and \(\sigma_{\rm S}\), and the four astrophysical parameters, \(A_{\rm SN1}\), \(A_{\rm SN2}\), \(A_{\rm AGN1}\) and \(A_{\rm AGN2}\), which regulate the SN feedback and the AGN feedback processes, respectively, one by one. 
We varied each of the six parameters between the minimum and the maximum of the allowed range, and compared the resulting simulated trends to the observed trends from SPARC. Results show that simulations with a lower value of the astrophysical parameter \(A_{\rm SN1}\) better reproduce the observed trends in all three simulation suites, while strong variations of both AGN feedback parameters in the IllustrisTNG simulation suite show negligible effects on the scaling relations considered. This is not surprising, since the role of the AGN feedback is expected to be more relevant in more massive galaxies. On the other hand, by fixing the cosmological parameters to the reference values, SIMBA simulations predict scaling relations which do not depend on wind mass loading and AGN parameters, and show a dependence only from the wind velocity. These small dependencies, and the systematically high central DM mass produced in the reference cosmology, necessarily require a change in the cosmological parameters in order to accommodate the observations. Finally, ASTRID simulations show a weak dependency on the wind mass loading, in a way similar to the case of the IllustrisTNG suite, and show peculiar clustering effects at low values of \(\hat{A}_{\rm SN2}\). While there is still no dependency on the AGN parameter \(\hat{A}_{\rm AGN1}\), which regulates the kinetic AGN feedback mode, there is some dependency on \(\hat{A}_{\rm AGN2}\), the parameter which regulates the thermal AGN feedback mode, in that higher values of this parameter enhance star-formation in galaxies due to a positive feedback regarding the suppression of the formation of massive black holes (Ni et al., 2023). * We next considered all 1065 simulations of the 'LH', '1P' and 'EX' sets in the IllustrisTNG suite, performed a bootstrap resampling 100 times on both the simulation points and the SPARC dataset, and searched for the best-fit simulation associated to each resampling, in order to obtain constraints on the cosmological and astrophysical parameters by considering the parameter distributions associated to the best-fit simulations. We obtain \(\Omega_{\rm m}=0.27^{+0.01}_{-0.05}\), \(\sigma_{\rm S}=0.83^{+0.08}_{-0.11}\), \(S_{\rm S}=0.78^{+0.03}_{-0.09}\), \(A_{\rm SN1}=0.48^{+0.25}_{-0.16}\), \(A_{\rm SN2}=1.21^{+0.03}_{-0.34}\), \(A_{\rm AGN1}=2.53^{+0.89}_{-1.82}\) and \(A_{\rm AGN2}=1.31^{+0.49}_{-0.67}\) with IllustrisTNG, \(\Omega_{\rm m}=0.14^{+0.02}_{-0.01}\), \(\sigma_{\rm S}=0.96^{+0.03}_{-0.25}\), \(S_{\rm S}=0.649^{+0.04}_{-0.135}\), \(\hat{A}_{\rm SN1}=0.45^{+0.06}_{-0.10}\), \(A_{\rm SN2}=0.81^{+0.13}_{-0.31}\), \(\hat{A}_{\rm AGN1}=0.56^{+0.12}_{-0.02}\) and \(\hat{A}_{\rm AGN2}=1.13^{+0.03}_{-0.35}\) with SIMBA and, finally, \(\Omega_{\rm m}=0.44^{+0.02}_{-0.15}\),\(\sigma_{\rm S}=0.81^{+0.15}_{-0.17}\), \(S_{\rm S}=0.85^{+0.36}_{-0.06}\), \(\hat{A}_{\rm SN1}=0.41^{+0.14}_{-0.17}\), \(A_{\rm SN2}=0.61^{+0.12}_{-0.04}\), \(\hat{A}_{\rm AGN1}=2.49^{+0.56}_{-0.09}\) and \(\hat{A}_{\rm AGN2}=0.62^{+0.65}_{-0.09}\) with AstrRID. 
We thus manage to constrain \(\Omega_{\rm m}\) and \(A_{\rm SN1}\) with good precision, while Figure 7: Comparison of the constraints on \(\Omega_{\rm m}\) (left panel), \(\sigma_{\rm S}\) (central panel) and \(S_{\rm S}:=\sigma_{\rm S}(\Omega_{\rm m}/0.3)^{0.5}\) obtained from fitting the SPARC star-forming galaxy catalog scaling relation trends with the cambls’ IllustrisTNG (black point, grey point for DM scaling relations only), SIMBA (green point) and ASTRID (orange point) simulation suites, with results presented, from top to bottom, in Planck Collaboration et al. (2020) (red point), Hinshaw et al. (2013) (blue point), Costanzi et al. (2019) (brown point), Bocquet et al. (2019) (magenta point), Amon et al. (2022) and Secco et al. (2022) (dark orange point), Hikage et al. (2019) (cyan point) and Asgari et al. (2021) (violet point). The confidence interval between the 16th and 84th percentile in our measurements is also shown as transparent grey bands.
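Finally, since \(S_{8}\) is a derived quantity, its constraints follow directly from the bootstrap samples of \(\Omega_{\rm m}\) and \(\sigma_{8}\). One natural way to obtain the percentiles quoted in the tables and in Fig. 7 is sketched below; this is an assumption on our part about the bookkeeping, not a statement about the exact implementation used for the published numbers.

```python
import numpy as np

def s8_constraint(omega_m_samples, sigma_8_samples):
    """Evaluate S_8 = sigma_8 * sqrt(Omega_m / 0.3) for each bootstrap best-fit
    sample and quote the 16th / 50th / 84th percentiles."""
    s8 = np.asarray(sigma_8_samples) * np.sqrt(np.asarray(omega_m_samples) / 0.3)
    return np.percentile(s8, [16, 50, 84])
```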
2301.11289
Blockchain-aided Secure Semantic Communication for AI-Generated Content in Metaverse
The construction of virtual transportation networks requires massive data to be transmitted from edge devices to Virtual Service Providers (VSP) to facilitate circulations between the physical and virtual domains in Metaverse. Leveraging semantic communication for reducing information redundancy, VSPs can receive semantic data from edge devices to provide varied services through advanced techniques, e.g., AI-Generated Content (AIGC), for users to explore digital worlds. But the use of semantic communication raises a security issue because attackers could send malicious semantic data with similar semantic information but different desired content to break Metaverse services and cause wrong output of AIGC. Therefore, in this paper, we first propose a blockchain-aided semantic communication framework for AIGC services in virtual transportation networks to facilitate interactions of the physical and virtual domains among VSPs and edge devices. We illustrate a training-based targeted semantic attack scheme to generate adversarial semantic data by various loss functions. We also design a semantic defense scheme that uses the blockchain and zero-knowledge proofs to tell the difference between the semantic similarities of adversarial and authentic semantic data and to check the authenticity of semantic data transformations. Simulation results show that the proposed defense method can reduce the semantic similarity of the adversarial semantic data and the authentic ones by up to 30% compared with the attack scheme.
Yijing Lin, Hongyang Du, Dusit Niyato, Jiangtian Nie, Jiayi Zhang, Yanyu Cheng, Zhaohui Yang
2023-01-25T02:32:02Z
http://arxiv.org/abs/2301.11289v1
# Blockchain-aided Secure Semantic Communication for AI-Generated Content in Metaverse ###### Abstract The construction of virtual transportation networks requires massive data to be transmitted from edge devices to Virtual Service Providers (VSP) to facilitate circulations between the physical and virtual domains in Metaverse. Leveraging semantic communication for reducing information redundancy, VSPs can receive semantic data from edge devices to provide varied services through advanced techniques, e.g., AI-Generated Content (AIGC), for users to explore digital worlds. But the use of semantic communication raises a security issue because attackers could send malicious semantic data with similar semantic information but different desired content to break Metaverse services and cause wrong output of AIGC. Therefore, in this paper, we first propose a blockchain-aided semantic communication framework for AIGC services in virtual transportation networks to facilitate interactions of the physical and virtual domains among VSPs and edge devices. We illustrate a training-based targeted semantic attack scheme to generate adversarial semantic data by various loss functions. We also design a semantic defense scheme that uses the blockchain and zero-knowledge proofs to tell the difference between the semantic similarities of adversarial and authentic semantic data and to check the authenticity of semantic data transformations. Simulation results show that the proposed defense method can reduce the semantic similarity of the adversarial semantic data and the authentic ones by up to 30% compared with the attack scheme. Metaverse, Blockchain, Semantic Communication, Semantic Attacks, Semantic Defenses ## I Introduction The word Metaverse was coined in the science-fiction novel Snow Crash [1] to describe a virtual reality society in which people utilize digital avatars to symbolize themselves and experience the world. In recent years, Metaverse has received attention from academia and industries as a novel Internet application due to the advancement of Augmented Reality (AR), Virtual Reality (VR), Artificial Intelligence (AI), and Blockchain. Specifically, extending reality (AR/VR) and AI technologies can provide users with immersive experiences and facilitate continuous data synchronization from the physical domain to the virtual domain, which is supported by data perceived by edge devices and services driven in Metaverse. Blockchain can help the physical and virtual domains to share information and construct economic systems in a decentralized manner. Thus, the Metaverse can be viewed asthe integration of multiple technologies supported by massive data interactions between the physical and virtual domains. One of the significant advantages of Metaverse is that people can conduct experiments in the virtual world that cannot be conducted in the real world. Because the virtual world can be created based on real-world data, e.g., images, sensing data, and text, the experimental result in Metaverse can be used to guide the real world. One example is the virtual transportation networks [2], i.e., virtual environments in which users can safely train automatic driving algorithms and test vehicles. To build such virtual environments, virtual service providers (VSPs) can leverage network edge devices to capture images from the real world and then render the virtual objects. 
However, this process entails three challenges as follows: * How to use the data collected from the real world, e.g., images, to achieve fast virtual world building? * How to improve the efficiency of data transmission to facilitate interactions of the physical and virtual domains? * How to ensure the security of the data received by VSP to ensure that the virtual world can be accurately synchronized with the real world? Since virtual transportation networks require extensive interactions between the physical domain and Metaverse, edge devices can be utilized to capture images of geographical landmarks. However, converting these captured images into a consistent style for the virtual world is complicated. In several virtual service designs, VSPs need to hire digital painters to pre-process images [3], which is time-consuming and costly. The boom in AI has brought alternative solutions. AI-generated content (AIGC) technology [4] allows VSP to process quickly images collected from the real world using well trained AI models (**For D1**). However, the collected data could still burden the network. For example, the authors in [5] state that a pair of sensing devices can generate 3.072 megabytes of data per second, which challenges conventional communication systems. Fortunately, semantic communication [6] is introduced to filter out irrelevant information from edge devices to reduce information redundancy of VSPs by extracting semantic data from raw data and expressing desired meanings (**For D2**). With the help of semantic communications, massive amounts of data can be circulated in the virtual transportation networks to empower Metaverse services. However, the introduction of semantic communication brings about higher requirements of the data security. Efficient semantic data sharing should be achieved between unknown VSPs and edge devices deployed in untrusted environments. However, the extracted semantic data can be tampered to almost the same descriptors (semantic similarities) but different desired meanings. The attacker (edge device) can modify the pixels of a sunflower to make it similar to the extracted semantic data (snowy mountain) in terms of semantic similarities but visually dissimilar [7], which affects the output of the AIGC models and is difficult for VSPs to detect the difference between the adversarial and authentic semantic data. Moreover, it is hard to detect and prevent semantic data mutations for virtual transportation networks in Metaverse. Since VSPs and edge devices are distributed, it is difficult to record, trace, and verify data transformations. To solve the aforementioned problems, the blockchain and Zero-Knowledge Proof techniques can be used (**For D3**). Thus, we present a blockchain-aided semantic communication framework with AIGC services to achieve virtual transportation networks in Metaverse. Here, blockchain-aided semantic communication can facilitate data circulation and economic activities between VSPs and edge devices in a decentralized manner [8]. Targeted semantic attacks [7] are utilized to generate adversarial semantic data to improve their semantic similarities almost up to that of the extracted semantic data by training various loss functions. Zero-Knowledge Proof (ZKP) [9] is integrated into the proposed framework to process semantic data and securely guarantee correct transformations. 
The contributions of this paper are summarized as follows: * We propose a blockchain-aided semantic communication framework for AIGC in virtual transportation networks that ensures the authenticity of semantic data transmitted from edge devices to VSPs to facilitate interactions of the physical and virtual domains. * We illustrate how a training-based targeted semantic attack scheme generates adversarial semantic data (images) without revealing the authentic semantic data. The attack semantic data has almost the same semantic similarities as the authentic ones but is visually dissimilar to them. * We, for the first time, design a blockchain and zero-knowledge proof-based semantic defense scheme to assure the authentication of semantic data. The scheme can utilize zero-knowledge proof to record the transformations of semantic data, and use blockchain to track and verify semantic data mutations. The remainder of the paper is described as follows. Section II reviews previous works on secure semantic communication. Section III demonstrates a blockchain-aided semantic communication framework for AIGC in Metaverse. Section IV illustrates a training-based targeted semantic attack scheme. Section V designs a blockchain and zero-knowledge proof-based semantic defense scheme. The proposed mechanisms are evaluated in Section VI. Section VII concludes the paper and elaborates the future work. ## II Related Work In this paper, we consider the integration of blockchain-aided semantic communications for Metaverse, which involves multiple emerging technologies. Therefore, we divide the related work into three parts: Blockchain-aided Semantic Communications, Semantic Attacks, and Semantic Defenses. ### _Blockchain-aided Semantic Communications_ Semantic communication [6] can lighten virtual transportation network burdens by transmitting relevant semantic data to VSPs after processing original data by AI technologies in edge devices. Z. Weng _et al._[10] utilized deep learning (DL) to identify the essential speech information with higher weights for semantic communication in dynamic channel conditions. To enable edge devices to perform DL-based semantic communication tasks, H. Xie _et al._[11] proposed a lite-distributed semantic communication framework for edge devices to transmit low-complexity texts by optimizing training processes. Although semantic communication can help Metaverse reduce information redundancy, it can not handle the challenge that how to construct trust among unknown edge devices and VSPs to facilitate data sharing and economic activities. Blockchain [12] is a peer-to-peer network that can construct decentralized ledgers for participants to share data. The integration of blockchain and AI-based semantic communication can empower Metaverse ecosystems to carry out rich activities between the physical and virtual domains [13]. Y. Lin _et al._[8] proposed a unified blockchain-semantic framework to enable Web 3.0 services to implement on-chain and off-chain interactions. A proof of semantic mechanism is proposed to verify semantic data before adding it to blockchain. However, they do not mention the performance indicators of semantic data. Y. Lin _et al._[14] proposed a blockchain-based semantic exchange framework that can mint Non-Fungible Tokens (NFT) for semantic data, utilize the game theory to facilitate exchange, and introduce ZKP to enable privacy-preserving. However, they do not consider semantic attacks that may reduce exchange efficiency. 
### _Semantic Attacks and Defenses_ The introduction of semantic communication brings about security issues for Metaverse. However, current research on the security of semantic communication is still in its infancy. Q. Hu _et al._[15] analyzed semantic noise that causes semantic data to express misleading meanings. To reduce the effects caused by semantic noise, they added weight perturbation to adversarial training processes, suppressed noise-related and task-unrelated features, and designed semantic similarity-based loss functions to reduce transmitting overheads. X. Luo _et al._[16] focused on privacy leakages when sharing background knowledge. They introduced symmetric encryption to adversarial training processes to encrypt and decrypt semantic data to ensure confidentiality. H. Du _et al._[7] focused on the semantic Internet-of Things (SIoT) and proposed new performance indicators for SIoT to quantify security issues, including semantic secrecy outage probability and detection failure probability. They focused on image transmission-oriented semantic communication and divided semantic attacks into targeted and untargeted semantic attacks. The targeted attack can generate adversarial semantic data (images) that can be recovered to a given target requested by receivers. The adversarial semantic data has almost the same descriptors (semantic similarities), but is visually dissimilar [17]. The untargeted semantic attack can generate adversarial semantic data that minimizes semantic similarities. They do not pursue to be recovered to any target by receivers. In this paper, we study the targeted semantic attacks and introduce ZKP to differ in semantic similarities between the adversarial and target images to protect semantic data. ZKP has been widely used in blockchain to enable privacy-preserving and authenticity. R. Song _et al._[9] utilized NFT and ZKP to construct a traceable data exchange scheme in blockchain, which can protect data privacy and exchange fairness. Z. Wang _et al._[18] designed a ZKP-based off-chain data feed scheme to take in off-chain sensitive data to execute the business logic of smart contract-based applications. H. Galal _et al._[19] leveraged ZKP and smart contracts to hide details of NFTs and swap NFTs in a fair manner. Y. Fang _et al._[20] utilized ZKP to verify the authenticity of model prediction processes without leasing private parameters of deep learning models. However, the above methods do not consider how to use ZKP to prevent targeted semantic attacks to detect adversarial semantic data. ### _Artificial Intelligence Generated Content_ AIGC refers to the use of AI algorithms, natural language processing (NLP), and computer version (CV) methods to produce a variety of media forms, including text, images, and audio. In terms of text content generation, an epoch-making AIGC application is the ChatGPT, a conversational language model developed by OpenAI [21]. ChatGPT is a type of the Generative Pre-trained Transformer (GPT) model that is trained on a huge amount of conversational data. ChatGPT is able to generate text that sounds as if it were written by a human. This makes it helpful for a variety of purposes, including chatbots, virtual assistants, and language translation. For image generation, diffusion model [22], a new class of state-of-the-art generative models, have demonstrated exceptional performance in Image Generation tasks and have surpassed the performance of generative adversarial networks (GANs) [23] in numerous tasks. 
Stable Diffusion, a AIGC model that is released by Stability AI, is an open-source text-to-image generator that creates amazing artwork in a matter of seconds. It operates with unprecedented speed and quality on consumer-grade GPUs. Furthermore, AIGC techniques continues to advance in the field of audio generation. The authors in [24] propose a deep learning-based method that employs contrastive representation learning and clustering to automatically derive thematic information from music pieces in the training data. The simulation results show that the proposed model is capable of generating polyphonic pop piano music with repetition and plausible variations. One of the primary benefits of AIGC is that it can be produced at a number and speed that would be difficult for human beings to do alone. It also permits a great degree of consistency and precision. Therefore, the development of AIGC technologies can provide a strong boost to the evolution of the Internet. Notably, despite the potential benefits, there are also some worries about the ramifications of AIGC, including the possibility for prejudice, the semantic errors in the generated content, and the blurring of the boundary between human- and machine-generated content. Therefore, it is significant to ensure the correctness of the AIGC, avoiding attacks from malicious parties that causes resource waste to affect the quality of AIGC services. ## III Blockchain-aided Semantic Communication Framework for AIGC in Metaverse The implementation of virtual transportation networks requires the following main steps: 1) Semantic extraction and transmission for data interactions between physical and virtual domains, 2) Semantic transformation and verification, and 3) AIGC for Metaverse, as shown in Fig. 1. ### _Semantic Extraction and Transmission_ Pedestrians are in danger when auto-driving models in vehicles are not trained well enough, which motivates the development of virtual transportation networks in the Metaverse. Therefore, it is necessary to digitize physical domains using data produced in the real world to simulate environments for training vehicles and drivers. VSPs can take pictures using edge devices like smartphones, cameras, and sensors in physical domains to obtain information about the weather, traffic, and geographical landmarks that can be used to train and test the detection systems of vehicles in virtual domains. Virtual domain output can be fed back into physical domains to configure vehicles for better and safer performance. Based on the simulated environment, inexperienced drivers and vehicles can practice their reactions under unfamiliar weather or traffic conditions in Metaverse in a safe way. Therefore, VSPs need to frequently interact with edge devices supported by a tremendous amount of data to construct virtual transportation networks in Metaverse to provide services, which challenges the transmission capabilities of edge devices and VSPs. Unfortunately, conventional communication systems cannot afford such frequent interactions between the physical and virtual domains, which will cause virtual transportation networks in Metaverse to lack enough data to simulate virtual environments. Semantic communication, a completely new paradigm that extracts semantic meaning from raw data for transmission, can be utilized to resolve this challenge. Instead of transmitting original data, edge devices can extract the semantic data and transmit to VSPs. 
The VSPs can then use the received semantic data to generate simulation environments for drivers and autonomous vehicle training on the virtual road. For instance, edge devices, e.g., smartphones, positioned at various locations along the same road may gather photographs of traffic conditions from their perspective. To reduce information redundancy, they can utilize semantic segmentation modules to crop key components of images as semantic data [2]. Then edge devices only transmit semantic data to VSPs to report traffic conditions on roads. However, since VSPs have to collect semantic data from multiple edge devices to train virtual transportation networks in different situations, malicious edge devices could falsify semantic data (images) and corrupt the training process, which is dangerous for pedestrians and drivers. In this paper, we study the targeted semantic attack [7, 17], in which the adversarial and authentic semantic data have almost the same semantic similarities but are totally irrelevant to each other. The semantic similarities are calculated with the help of high-dimensional descriptors extracted by a convolutional neural network (CNN). However, malicious edge devices could train a corresponding neural network to modify some pixels in attack images to achieve descriptors similar to those of the authentic images and thus corrupt the construction of virtual transportation networks. The training-based targeted semantic attack scheme is illustrated in Section IV. ### _AIGC in Metaverse_ After receiving semantic data (images) from edge devices deployed in different locations, VSPs can perceive conditions or views of landmarks, and render images by AIGC services in Metaverse. For example, semantic data of landmarks from different perspectives can be utilized by VSPs to render 3D scenes to provide users with seamless experiences. VSPs can also utilize views of landmarks to generate artworks or avatars in Metaverse. Therefore, AIGC services play an important role in virtual transportation networks to facilitate the use of data resources and the applications of Metaverse. Besides, semantic data is important for subsequent AIGC services since its quality may affect the content generated by AIGC. However, since semantic data circulated in the Metaverse may be corrupted by the aforementioned targeted semantic attacks produced by malicious edge devices, the AIGC services may do useless work and provide users with hateful content. For example, VSPs want to collect images of famous landmarks (e.g., the Eiffel Tower) while malicious edge devices may extract unrelated images of flowers to corrupt the subsequent AIGC services that render 3D scenes with different perspectives of the landmark. Fig. 1: Blockchain-aided Semantic Communication Framework for AIGC in Metaverse. VSPs also want to perceive images of landmarks to generate artworks or avatars while attackers transmit irrelevant semantic data of animals to obstruct AIGC services. Since the adversarial semantic data (images) modified by attackers have almost the same semantic similarities, it is difficult and time-consuming for VSPs to verify the authenticity of images. Therefore, it is necessary to design a mechanism to ensure the security of data transmission to protect the security of AIGC services. ### _Semantic Transformation and Verification_ Malicious semantic data can affect the security of virtual transportation services in Metaverse. 
Modifying pixels in unrelated semantic data increases the semantic similarity score, causing the VSPs to use the wrong semantic data as input to AIGC and corrupt the virtual environment. Inspired by [17], image transformations can be performed to distinguish semantic similarities between the adversarial and authentic images. However, malicious edge devices can continue adjusting pixels in irrelative images to make them similar to transformed semantic data. Moreover, it is impossible for edge devices to transform semantic data unlimited times. A possible and practical solution is to record and verify transformations performed on semantic data. Therefore, we propose a blockchain and zero-knowledge proof-based semantic defense scheme in Section V. The logic of transformations is recorded on the circuit produced by the zero-knowledge algorithm. Edge devices utilize extracted semantic data as inputs to generate proof of transformations and output transformed semantic data. They send the proof and semantic data to VSPs for verification. Since VSPs and edge devices are distributed in unknown Metaverse environments represented by avatars, the blockchain should be used to record and verify transformations to prevent data mutations. ## IV Training-based Targeted Semantic Attack Scheme Although the blockchain-aided semantic communication framework for AIGC in Metaverse can reduce information redundancy and establish decentralized trust between unknown edge devices and VSPs, malicious edge devices may conduct training-based targeted semantic attacks to corrupt semantic data (images) circulated in the Metaverse services. The targeted semantic attacks refer to transmitting adversarial semantic data with almost the same semantic descriptors but visually dissimilar to the authentic one [7], which is difficult for VSPs to utilize semantic similarities (inner product of descriptors among images) evaluation to distinguish them. In this section, we illustrate the workflow of the training-based targeted semantic attack targeted semantic attacks [7][17]. **Descriptor Extraction.** Since extracted semantic data (images) is difficult to evaluate, edge devices can utilize a CNN to map images to high dimensional descriptors. Then VSPs can use the descriptors to distinguish semantic similarities of images. The process of descriptor extraction is illustrated as follows. Let us denote three types of semantic data produced by malicious edge devices, including the adversarial semantic data \(\mathbf{x_{a}}\), the authentic one \(\mathbf{x_{t}}\), and the carrier one \(\mathbf{x_{c}}\). The authentic semantic data \(\mathbf{x_{t}}\) is extracted from original images by semantic extraction in edge devices according to requirements and associated with label \(y_{t}=f_{\mathbf{e}}(x_{t})\), which has semantic similarities with \(\mathbf{x_{a}}\). \(f_{\mathbf{e}}(\cdot)\) is utilized to classify semantic data. The carrier semantic data \(\mathbf{x_{c}}\) is an auxiliary image to help malicious edge devices generate \(\mathbf{x_{a}}\), which is classified as \(y_{c}=f_{\mathbf{e}}(x_{c})\neq y_{t}\) and has visual similarities with \(\mathbf{x_{a}}\). The adversarial semantic data \(\mathbf{x_{a}}\) is produced by training networks to learn how to modify pixels, which can achieve that \(\mathbf{x_{a}}\) has almost the same descriptors but is visually dissimilar from the authentic one \(\mathbf{x_{t}}\) generated by semantic extraction, as shown in Fig. 2. 
Besides, \(\mathbf{x_{a}}\) is visually similar to \(\mathbf{x_{c}}\) but classified incorrectly as \(y_{t}\). Therefore, the descriptor extraction process is vital for malicious edge devices to produce adversarial images \(\mathbf{x_{a}}\). The input image \(\mathbf{x}\) should be re-sampled to \(\mathbf{x^{s}}\) with the same dimension \(s\) so that \(\mathbf{x_{t}}\) and \(\mathbf{x_{c}}\) have the same resolution. The image \(\mathbf{x^{s}}\) is used as the input to a fully convolutional network \(\mathbf{g_{x^{s}}}=g(\mathbf{x^{s}}):\mathbb{R}^{W\times H\times 3}\rightarrow\mathbb{R}^{w\times h\times d}\) that implements feature extraction, where \(W\times H\times 3\) and \(w\times h\times d\) are the width, height, and channel dimensions of the images and activations. A pooling layer \(h:\mathbb{R}^{w\times h\times d}\rightarrow\mathbb{R}^{d}\) then maps the activation tensor \(\mathbf{g_{x^{s}}}\) to the descriptor \(\mathbf{h_{x^{s}}}=h(\mathbf{g_{x^{s}}})\), with the CNN parameterized by \(\theta\). The descriptor is the output of the pooling layer, which can be compared with other descriptors to calculate semantic similarities. The \(l_{2}\) normalization is applied so that semantic similarities can be compared easily. Fig. 2: Relationship among Images. **Loss Function.** Since the goal of malicious edge devices is to make \(\mathbf{x_{a}}\) almost the same as \(\mathbf{x_{t}}\) in terms of semantics but visually similar to \(\mathbf{x_{c}}\), the loss function consists of the performance loss \(l_{\text{IS}}(\mathbf{x},\mathbf{x_{t}})\) and the distortion loss between \(\mathbf{x}\) and \(\mathbf{x}_{c}\), which can be defined as \[L_{\text{IS}}(\mathbf{x}_{c},\mathbf{x}_{t};\mathbf{x})=l_{\text{IS}}(\mathbf{x},\mathbf{x}_{t})+\lambda\|\mathbf{x}-\mathbf{x}_{c}\|^{2}, \tag{1}\] where \(\lambda\) is a hyper-parameter that controls the impact of the distortion loss. **Performance Loss.** Let us assume that malicious edge devices have access to the network structure of the descriptor extraction [17], since it is necessary for edge devices to know the evaluation standard of VSPs. Considering the different scenarios in which targeted semantic attacks are generated, and referring to [17], we introduce three empirical forms of the performance loss as follows. _Global Descriptor Loss_ is suitable when malicious edge devices know all parameters of the descriptor extraction network. Thus, the performance loss function \(l_{\text{IS}}\) can be given by the inner product of the descriptors of \(\mathbf{x_{a}}\) and \(\mathbf{x_{t}}\) as follows: \[l_{\text{global}}(\mathbf{x},\mathbf{x}_{t})=1-\mathbf{h_{x}^{\top}}\mathbf{h_{x_{t}}}. \tag{2}\] _Activation Tensor Loss_ is adapted to the scenario where the outputs of the descriptor extraction network are required to be the same for \(\mathbf{x_{a}}\) and \(\mathbf{x_{t}}\) before down-sampling to resolution \(s\), which can be expressed by the mean squared difference of \(\mathbf{g_{x}}\) and \(\mathbf{g_{x_{t}}}\) as follows: \[l_{\text{tensor}}(\mathbf{x},\mathbf{x}_{t})=\frac{\|\mathbf{g_{x}}-\mathbf{g_{x_{t}}}\|^{2}}{w\cdot h\cdot d}. 
\tag{3}\] _Activation Histogram Loss_ is utilized to preserve first-order statistics of activations \(\|u(\mathbf{g_{x}},\mathbf{b})_{i}\) per channel \(i\) to achieve identical extracted descriptors regardless of spatial information in images, which can be denoted as follows: \[l_{\text{hist}}(\mathbf{x},\mathbf{x}_{t})=\frac{1}{d}\sum_{i=1}^{d}\|u( \mathbf{g_{x}},\mathbf{b})_{i}-u(\mathbf{g_{x_{t}}},\mathbf{b})_{i}\|, \tag{4}\] where \(\mathbf{b}\) is the histogram bin centers. **Optimization.** Our goal is to find the optimal loss function that minimizes the semantic similarities of \(\mathbf{x_{t}}\) and \(\mathbf{x}\), which equivalently optimizes network parameters \(\theta\). We can use the Adam algorithm to update \(\theta\) as \[\theta_{t+1}=\theta_{t}-\eta\frac{\rho_{t}}{\sqrt{v_{t}+\epsilon}}, \tag{5}\] where \(\eta\) is the learning rate, \(\rho_{t}\) and \(v_{t}\) are the first-order and second-order momenta of gradients, and \(\epsilon\) is utilized to prevent the \(\sqrt{v_{t}+\epsilon}\) from being zero. Therefore, the adversarial semantic data \(\mathbf{x}_{a}\) can be expressed by \[\mathbf{x}_{a}=\operatorname*{arg\,min}_{\mathbf{x}}L_{\text{IS}}(\mathbf{x} _{c},\mathbf{x}_{t};\mathbf{x}), \tag{6}\] where \(L_{\text{IS}}\) can be replaced by \(l_{\text{global}}\), \(l_{\text{tensor}}\), or \(l_{\text{hist}}\) according to different scenarios. ## V Blockchain and Zero-Knowledge Proof-based Semantic Defense Scheme Since the adversarial semantic data generated by malicious edge devices has almost the same descriptors (semantic similarity) but different desired meanings from the authentic ones produced by honest edge devices, inspired by [7][17], we utilize blockchain and zero-knowledge proof-based semantic defense scheme to help VSPs to identify attack images transmitted in the Metaverse. Instead of submitting extracted semantic data directly, edge devices should transform or process semantic data by the bilinear interpolation algorithm [25], and utilize Zero-Knowledge Proof to record and verify transformations. The details of the proposed scheme are elaborated as follows, as shown in Fig. 3. **Transformation.** The transformation of semantic data is a training-free defense method that uses visual invariance to distinguish adversarial and authentic semantic extraction. The reason is that attackers adjust some pixels to make descriptors of adversarial images similar to the authentic ones [7]. As a result, we attempt to increase visual invariance by blurring extracted images using the spatial transformation of the bilinear interpolation algorithm. Let us assume that \((x_{1},y_{1})\), \((x_{1},y_{2})\), \((x_{2},y_{1})\), and \((x_{2},y_{2})\) are four points in the extracted images. Then the targeted points \((x,y)\) can be obtained by the spatial transformation, i.e., bilinear interpolation in the \(x\) and \(y\) directions. The spatial transformation can be considered a mapping function \(f(\cdot,\cdot)\). Thus, the linear interpolation in the \(x\) direction can be derived as follows: \[f(x,y_{1})\approx\frac{x_{2}-x}{x_{2}-x_{1}}f(x_{1},y_{1})+\frac{x-x_{1}}{x_{2 }-x_{1}}f(x_{2},y_{1}) \tag{7}\] and \[f(x,y_{2})\approx\frac{x_{2}-x}{x_{2}-x_{1}}f(x_{1},y_{2})+\frac{x-x_{1}}{x_{2 }-x_{1}}f(x_{2},y_{2}). 
\tag{8}\] The targeted points are transformed by the linear interpolation in the \(y\) direction as follows: \[f(x,y)\approx\frac{y_{2}-y}{y_{2}-y_{1}}f(x,y_{1})+\frac{y-y_{1}} {y_{2}-y_{1}}f(x,y_{2}) \tag{9}\] \[\approx\frac{(x_{2}-x)(y_{2}-y)}{(x_{2}-x_{1})(y_{2}-y_{1})}f(x_{ 1},y_{1})+\frac{(x-x_{1})(y_{2}-y)}{(x_{2}-x_{1})(y_{2}-y_{1})}f(x_{2},y_{1})\] \[+\frac{(x_{2}-x)(y-y_{1})}{(x_{2}-x_{1})(y_{2}-y_{1})}f(x_{1},y_{2} )+\frac{(x-x_{1})(y-y_{1})}{(x_{2}-x_{1})(y_{2}-y_{1})}f(x_{2},y_{2}).\] Although edge devices can perform transformations to obtain the blurred semantic data that can distinguish from the adversarial one, it is difficult for VSPs to verify whether the blurred semantic data is derived from authentic transformations. Malicious edge devices may modify some pixels to make descriptors of their semantic data close to that of honest edge devices after transformations. Therefore, to assure the authenticity of the blur transformation for semantic data, we utilize ZKP to record transformations and blockchain to verify them. The proposed mechanism is a tuple of 3 polynomial-time schemes after extracting a circuit mapping the transformation, including Key Generation, Proof, and Verification, which works as follows: **Extraction.** The mechanism initiates the logic in a computation circuit \(\mathsf{C}\) using the simple arithmetic expression mapping the transformation \(f\) to construct the public statement \(\mathsf{s}\) and the private witness \(\mathbf{w}\)[20]. The circuit implements the logic of bilinear interpolation via circum [26], a ZKP circuit compiler. The circuit can provide a relation between the inputs and outputs semantic data which can be used for verifiable computation on transformations without disclosing the inputs. The process of extraction can be illustrated as follows: \[\mathsf{Extract}(f)\rightarrow(\mathsf{s},\mathsf{w}). \tag{10}\] **Key Generation.** Edge devices take a security parameter \(1^{\lambda}\) and the transformation circuit \(\mathsf{C}\) as inputs to generate a common reference string \(\mathsf{crs}\). The common reference string \(\mathsf{crs}\) includes an evaluation key \(\mathsf{crs.ek}\) and a verification key \(\mathsf{crs.vk}\) for proof and verification. The process of key generation can be expressed as follows: \[\mathsf{KeyGen}(1^{\lambda},\mathsf{C})\xrightarrow{\mathsf{c}}\mathsf{crs}( \mathsf{ek},\mathsf{vk}). \tag{11}\] **Proof.** Edge devices take the evaluation key \(\mathsf{crs.ek}\), the statement \(\mathsf{s}\) and the witness \(\mathsf{w}\) related to the transformed and original semantic data, which satisfies \(\mathsf{C}(\mathsf{s},\mathsf{w})=1\). The statement \(\mathsf{s}\) and the witness \(\mathsf{w}\) stand for the public and private information corresponding to the transformation relation. Thus, a zero-knowledge proof \(\pi\) is generated to reflect and verify the relation. The process of proof generation can be denoted as follows: \[\mathsf{Prove}(\mathsf{crs.ek},\mathsf{s},\mathsf{w})\xrightarrow{\mathsf{C }}\pi. \tag{12}\] **Verification.** VSPs utilize smart contracts deployed on blockchain to verify the authenticity of transformations and implement the business logic of blockchain-aided semantic communication for AIGC in Metaverse. Since the verification process is implemented in the blockchain, edge devices and VSPs can query verification results to construct trust in a decentralized manner. 
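Returning to the spatial transformation of Eqs. (7)-(9), the bilinear-interpolation blur applied by edge devices can be sketched as follows. This is a minimal NumPy illustration for a grayscale image; the resampling grid, the scale factor, and the down/up-sampling strategy are assumptions for illustration, not the exact transformation used in the experiments.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinear interpolation (Eqs. (7)-(9)) on a 2-D grayscale image:
    the value at a non-integer point (x, y) is obtained from the four
    neighbouring grid points (x1, y1), (x2, y1), (x1, y2), (x2, y2)."""
    x1 = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    y1 = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    x2, y2 = x1 + 1, y1 + 1
    wx, wy = xs - x1, ys - y1                           # fractional offsets
    f_y1 = (1 - wx) * img[y1, x1] + wx * img[y1, x2]    # Eq. (7): interpolate along x at y1
    f_y2 = (1 - wx) * img[y2, x1] + wx * img[y2, x2]    # Eq. (8): interpolate along x at y2
    return (1 - wy) * f_y1 + wy * f_y2                  # Eq. (9): interpolate along y

def blur_transform(img, scale=0.5):
    """Down-sample then up-sample with bilinear interpolation: a simple spatial
    transformation that smears the per-pixel perturbations of a targeted attack."""
    h, w = img.shape
    sh, sw = max(int(h * scale), 2), max(int(w * scale), 2)
    ys, xs = np.meshgrid(np.linspace(0, h - 1, sh), np.linspace(0, w - 1, sw), indexing="ij")
    small = bilinear_sample(img, xs, ys)
    ys, xs = np.meshgrid(np.linspace(0, sh - 1, h), np.linspace(0, sw - 1, w), indexing="ij")
    return bilinear_sample(small, xs, ys)
```

As noted above, the circom circuit commits to exactly this kind of interpolation logic, so that VSPs can verify the transformation was applied faithfully without seeing the original semantic data.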
The verification key \(\mathsf{crs.vk}\), the statement \(\mathsf{s}\), and the proof \(\pi\) are the inputs written into smart contracts to determine whether to accept or reject the proof according to the outputs. When the output is 1, the verification succeeds and VSPs accept the semantic data provided by edge devices; otherwise, VSPs refuse to accept the proof corresponding to semantic data produced by malicious edge devices. The process of proof verification can be given as follows: \[\mathsf{Verify}(\mathsf{crs.vk},\mathsf{s},\pi)\rightarrow\{0,1\}. \tag{13}\] The semantic defense scheme should satisfy the following properties, including completeness, soundness, and zero-knowledge [9][18][19][20]. **Completeness** means that an honest edge device with a valid witness \(\mathsf{w}\) can convince an honest VSP of the authenticity of transformed semantic data generated from the circuit \(\mathsf{C}\). If the transformed semantic data extracted by bilinear interpolation is correct and edge devices construct the proof \(\pi\) correctly through the proof circuit \(\mathsf{C}\) and the evaluation key \(\mathsf{crs.ek}\), then the VSPs can use the verification key \(\mathsf{crs.vk}\) generated by the proof circuit \(\mathsf{C}\), the proof \(\pi\), and the public information (statement) \(\mathsf{s}\) to obtain a verification result. For each \(\mathsf{C}(\mathsf{s},\mathsf{w})=1\), the proof \(\pi\) generated by an honest edge device will be accepted with probability \(1\), which can be denoted as follows: \[\Pr\left[\begin{array}{c}\mathsf{KeyGen}(1^{\lambda},\mathsf{C})\to\mathsf{crs}(\mathsf{ek},\mathsf{vk})\\ \mathsf{Prove}(\mathsf{crs.ek},\mathsf{s},\mathsf{w})\rightarrow\pi\\ \mathsf{Verify}(\mathsf{crs.vk},\mathsf{s},\pi)=1\end{array}\right]=1. \tag{14}\] Fig. 3: Blockchain and Zero-Knowledge Proof-based Semantic Defense Scheme. Fig. 4: Performance Loss of Attacks. **Soundness** denotes that malicious edge devices cannot convince VSPs to accept semantic data that was not authentically transformed; equivalently, an edge device that can produce a valid proof must hold a valid witness. If \((\mathsf{s},\mathsf{w})\) is not a valid input to the proof circuit, i.e., \(\mathsf{C}(\mathsf{s},\mathsf{w})=0\), then for a malicious edge device \(\mathcal{A}\) there is a polynomial-time extractor \(\mathsf{Ext}\) such that the probability of \(\mathsf{Verify}(\mathsf{crs.vk},\mathsf{s},\pi^{*})=1\) is negligible. The adversarial advantage of soundness \(\mathsf{Adv}_{\mathcal{A}}^{sd}(\lambda)\) can be represented as follows: \[\Pr\left[\begin{array}{c}\mathsf{KeyGen}(1^{\lambda},\mathsf{C})\to\mathsf{crs}(\mathsf{ek},\mathsf{vk})\\ \mathcal{A}(\mathsf{crs.ek},\mathsf{s})\to\pi^{*}\\ \mathsf{Ext}(\mathsf{crs.ek},\mathsf{s},\pi^{*})\to\mathsf{w}\\ \mathsf{Verify}(\mathsf{crs.vk},\mathsf{s},\pi^{*})=1\end{array}\right]\leq\mathsf{negl}(\lambda). \tag{15}\] Malicious edge devices \(\mathcal{A}\) can hardly falsify semantic data or make honest VSPs \(\mathcal{V}\) accept invalid proofs about transformations \(f\). According to (15), VSPs will not accept a false proof \(\pi\) from \(\mathcal{A}\) except with negligible probability \(\mathsf{negl}(\lambda)\). **Zero-knowledge** represents that VSPs can only verify the authenticity of transformed semantic data while they cannot obtain the original semantic data unless edge devices send it to them. 
Let us assume that there are probabilistic polynomial time simulators \(\mathsf{S}_{1}\), \(\mathsf{S}_{2}\), and malicious edge devices \(\mathcal{A}_{1}\), \(\mathcal{A}_{2}\). The simulator \(\mathsf{S}_{1}\) can produce a common reference string that is used by the simulator \(\mathsf{S}_{2}\) to generate a simulated proof. Then the proposed mechanism has the zero-knowledge property if the adversarial advantage of zero-knowledge \(\mathsf{Adv}_{\mathcal{A}}^{zk}(\lambda)\) satisfies: \[|\Pr(\mathsf{real})-\Pr(\mathsf{sim})|\leq\mathsf{negl}(\lambda), \tag{16}\] where \(\Pr(\mathsf{real})\) and \(\Pr(\mathsf{sim})\) can be denoted as \[\Pr(\mathsf{real})=\Pr\left[\begin{array}{c}\mathsf{KeyGen}(1^{\lambda}, \mathsf{C})\to\mathsf{crs}\\ \mathcal{A}_{1}(f)\to(\mathsf{s},\mathsf{w})\\ \mathsf{Prove}(\mathsf{crs},\mathsf{s},\mathsf{w})\to\pi\\ \mathcal{A}_{2}(\mathsf{crs},\mathsf{s},\mathsf{w},\pi)=1\end{array}\right] \tag{17}\] and \[\Pr(\mathsf{sim})=\Pr\left[\begin{array}{c}\mathsf{S}_{1}(1^{\lambda}, \mathsf{C})\to\mathsf{crs}\\ \mathcal{A}_{1}(f)\to(\mathsf{s},\mathsf{w})\\ \mathsf{S}_{2}(\mathsf{crs},\mathsf{s})\to\pi\\ \mathcal{A}_{2}(\mathsf{crs},\mathsf{s},\mathsf{w},\pi)=1\end{array}\right]. \tag{18}\] VSPs who know public statements mapping with semantic data can hardly learn any knowledge about semantic data before and after transformation. According to (16), VSPs cannot know anything about semantic data other than what can be inferred from transformation. ## VI Experimental Evaluations We simulate the proposed mechanism on Ubuntu 20.04LTS, Intel Xeon, 8 core, 64G memory, and 25000Mb/s with 2 Tesla V100, Go 1.19.4, Node v14.17.0, PyTorch 1.12.1, Torchvision 0.13.1, and CUDA 10.2. We perform experiments on a standard image benchmark, Revisited Paris [27], to measure the attack and defense performance among edge devices and VSPs. We exploit pre-trained networks on ImageNet [28] and AlexNet [29] to execute the attacks and defenses with 100 iterations. Unless otherwise mentioned, we use these parameters referring to [17]. The learning rate \(\eta\) is set to 0.01. The hyper-parameter \(\lambda\) that adjusts the impact of distortion loss is 0. The training process performs 100 iterations for our experiments. ### _Performance of Semantic Attack Scheme_ Fig. 4 is the performance loss of the proposed semantic attack scheme with different loss functions [Global, Hist, Tensor] corresponding to the global descriptor, activation tensor, and activation histogram as the number of iterations increases. Tensor converges much slower than Global and Hist, which requires more iterations to reach its plateau. Fig. 5 is the semantic similarity attack performance between the adversarial semantic data and the authentic or the carrier one with three loss functions [Global, Hist, Tensor]. As shown in Fig. 5 (a) and Fig. 5 (b), Global and Hist converge around the 40th iteration which is much faster than Tensor. The semantic similarity performance of attacks is consistent with Fig. 4 which shows the same trend in three loss functions. ### _Performance of Semantic Defense Scheme_ Fig. 6 is the semantic similarity comparison between attack and defense schemes by displaying example images. Fig. 7 is the semantic similarity (descriptors) defense performance between the adversarial semantic data and the authentic one or the carrier one with three loss functions [Global, Hist, Tensor]. Compared with Fig. 
5, the adversarial image (semantic data) can be distinguished by its semantic similarity when the proposed semantic defense scheme is applied. Since the extracted semantic data is required to execute the defense scheme and the execution can be assured by zero-knowledge proof, the images can be told apart by their descriptors. As shown in Fig. 5 (a) and Fig. 7 (a), the semantic similarity between the adversarial and the authentic images can be reduced by up to 35%. Fig. 5 (b) and Fig. 7 (b) show that the semantic similarity between the adversarial and the carrier images can decrease by 10%. Fig. 5: Semantic Similarity Performance of Attacks. The proposed scheme can also reduce the time consumed in blockchain and improve consensus efficiency, since the attack and defense schemes crop authentic semantic data while still differing in semantic similarities according to Fig. 7. Fig. 8 (b) shows the ZKP computation overhead for the [GenWitness, GenProof, VerifyProof] operations with [10KB, 100KB, 1MB] data sizes. We can see from Fig. 8 (b) that the ZKP computation overhead is dominated by the GenProof operation, while the size of the semantic data has little effect on the proposed scheme. Besides, the computation overhead for verifying proofs is less than that for generating proofs, so the verification can be written into smart contracts [9][18][19]. ## VII Conclusion and Future Work In this paper, we first integrated blockchain-aided semantic communication into Metaverse to support AIGC services in virtual transportation networks. We also presented a training-based targeted semantic attack scheme to illustrate potential attacks by generating adversarial semantic data with the same descriptors but different desired meanings. To prevent the above attack, we designed a blockchain and zero-knowledge proof-based semantic defense scheme that records transformations of semantic data and verifies mutations in a decentralized manner. Simulation results show that the proposed mechanism can distinguish the descriptors of the adversarial semantic data from those of the authentic one. The defense scheme can identify malicious edge devices that falsify semantic data to corrupt AIGC services in virtual transportation networks. In future research work, we will study how to mitigate malicious edge devices and facilitate resource allocation with economic incentive mechanisms in the proposed framework.
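For readers who want to experiment with the training-based targeted attack of Section IV (Eqs. (1), (2), (5) and (6)), a minimal PyTorch sketch is given below. The descriptor network (an untrained torchvision AlexNet feature extractor with global average pooling), the image sizes, and all hyper-parameters are placeholders for illustration; this is not the configuration used in the paper's experiments.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def descriptor(net, x):
    """Global descriptor h_x: CNN activation tensor g_x -> pooling -> l2 normalization."""
    g_x = net(x)                          # B x d x h x w activation tensor
    h_x = g_x.mean(dim=(2, 3))            # simple global average pooling
    return F.normalize(h_x, dim=1)

def targeted_attack(net, x_c, x_t, lam=0.0, steps=100, lr=0.01):
    """Minimise L_IS(x_c, x_t; x) = l_global(x, x_t) + lam * ||x - x_c||^2 with Adam."""
    x = x_c.clone().requires_grad_(True)              # start from the carrier image x_c
    opt = torch.optim.Adam([x], lr=lr)
    with torch.no_grad():
        h_t = descriptor(net, x_t)                    # fixed descriptor of the authentic image
    for _ in range(steps):
        opt.zero_grad()
        l_global = 1 - (descriptor(net, x) * h_t).sum(dim=1).mean()   # Eq. (2)
        loss = l_global + lam * ((x - x_c) ** 2).sum()                # Eq. (1)
        loss.backward()
        opt.step()
        x.data.clamp_(0, 1)                           # keep pixel values valid
    return x.detach()                                 # adversarial semantic data x_a, Eq. (6)

# Placeholder descriptor network with random weights, for illustration only:
net = models.alexnet(weights=None).features.eval()
x_c = torch.rand(1, 3, 224, 224)   # carrier image
x_t = torch.rand(1, 3, 224, 224)   # authentic (target) image
x_a = targeted_attack(net, x_c, x_t)
```

A loop of this kind, driven by a real pre-trained descriptor network, is what produces the adversarial semantic data whose similarity behaviour is evaluated in Section VI.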
2305.04415
Unveiling the initial conditions of open star cluster formation
Open clusters (OCs) are infrequent survivors of embedded clusters gestated in molecular clouds. Up to now, little is known about the initial conditions for the formation of OCs. Here, we studied this issue using high-precision astrometric parameters provided by Gaia data release 3. The statistics show that the peculiar motion velocities of OCs vary little from infancy to old age, providing a remarkable opportunity to use OCs to trace their progenitors. Adopting a dynamical method, we derived the masses of the progenitor clumps where OCs were born, which have statistical characteristics comparable to previously known results for clumps observed in the Galaxy. Moreover, the masses of the progenitor clumps of OCs indicate they should be capable of gestating massive O-type stars. In fact, after inspecting the observed OCs and O-type stars, we found that there are many O-type stars in OCs. The destructive stellar feedback from O-type stars may disintegrate the vast majority of embedded clusters, and only those sufficiently dense ones can survive as OCs.
C. J. Hao, Y. Xu, L. G. Hou, Z. H. Lin, Y. J. Li
2023-05-08T01:51:43Z
http://arxiv.org/abs/2305.04415v1
# Unveiling the initial conditions of open star cluster formation ###### Abstract Open clusters (OCs) are infrequent survivors of embedded clusters gestated in molecular clouds. Up to now, little is known about the initial conditions for the formation of OCs. Here, we studied this issue using high-precision astrometric parameters provided by _Gaia_ data release 3. The statistics show that the peculiar motion velocities of OCs vary little from infancy to old age, providing a remarkable opportunity to use OCs to trace their progenitors. Adopting a dynamical method, we derived the masses of the progenitor clumps where OCs were born, which have statistical characteristics comparable to previously known results for clumps observed in the Galaxy. Moreover, the masses of the progenitor clumps of OCs indicate they should be capable of gestating massive O-type stars. In fact, after inspecting the observed OCs and O-type stars, we found that there are many O-type stars in OCs. The destructive stellar feedback from O-type stars may disintegrate the vast majority of embedded clusters, and only those sufficiently dense ones can survive as OCs. Galaxy: stellar content - open clusters and associations: general - stars: formation - stars: kinematics and dynamics ## 1 Introduction The vast majority of stars in the Milky Way is believed to form in clusters of dozens to thousands of members in molecular clouds (e.g., Lada & Lada 2003; Bressert et al. 2010; Megeath et al. 2016). The observations of young star-forming regions (e.g., Feigelson et al. 2013), theory (e.g., McKee & Ostriker 2007; Heyer & Dame 2015), and simulations (e.g., Offner et al. 2009) all have pictured star formation as a turbulent, clumpy, and stochastic process. To some extent, star formation in crowded environments can determine the properties of stars themselves, such as the initial mass function (IMF) and stellar multiplicity distributions (Sills et al. 2018). However, understanding of the formation and evolution of stellar clusters is still poor, as these objects are deeply embedded in molecular clouds in their early evolutionary stages and hence not optically observable; meanwhile, new puzzling observations continuously challenge theoretical models, so they remain a fascinating topic today (e.g., Krause et al. 2020). Figure 1 presents the pathway from radio-observed molecular clouds and/or clumps, to proto stellar clusters consisting of embedded clusters that are often only visible at infrared wavelengths, and ultimately to the optically identified star associations and/or OCs. Giant molecular clouds (GMCs), as the vast assemblies of molecular gas, possess masses from \(\sim\)\(10^{3}\) M\({}_{\odot}\) to \(\sim\)\(10^{7}\) M\({}_{\odot}\) (e.g., Elmegreen & Falgarone 1996; Murray 2011). Galactic clumps, as the dense parts of GMCs, gestate many denser cores, which are the nurseries of embedded clusters (e.g., Lada & Lada 2003; Rathborne et al. 2006; McMillan et al. 2007). It has become very clear that not all stars form in relaxed, centrally concentrated structures; they often form in complex hierarchical or substructured distributions that follow the gas (e.g., Whitmore et al. 1999; Schmeja et al. 2008; Wright et al. 2014; Krumholz et al. 2019). For example, the best-studied embedded cluster, \(Trapezium\), is within the more extended Orion Nebula Cluster (Kuhn et al. 2019). 
However, it has been suggested that the vast majority of embedded clusters will evolve into unbound star associations, and only a few percent (4-7%) will survive as bound OCs (e.g., Lada & Lada 2003; Bastian & Goodwin 2006), as illustrated in the sketch map shown in Figure 1. On average, each GMC or GMC complex probably produces one bound open star cluster (Elmegreen & Clemens 1985), and stars in such systems account for about 10% of all stars in our Galaxy (Roberts 1957; Adams & Myers 2001). Although efforts in both observations (e.g., Lada & Lada 2003, and references therein) and numerical simulations (e.g., Proszkow et al. 2009; Proszkow & Adams 2009; Girichidis et al. 2012; Dale et al. 2015; Farias et al. 2018) have been devoted to studying the star formation and early evolution of embedded clusters, little is known about the initial conditions of OC formation. The reason for the low survival rate of OCs arising from embedded clusters is still a mystery. During the formation of stellar clusters, newborn stars could have profound effects on other stars and their natal molecular material, and many stellar feedback mechanisms would inject momentum into the star-forming environment (Krumholz et al. 2014), e.g., protostellar outflows (McKee 1989; Bally 2016; Li et al. 2020), stellar radiation pressure (Murray & Rahman 2010), stellar winds from hot stars (van Kempen et al. 2010), etc. Such stellar-feedback mechanisms are in principle enough to move all the surrounding material (Krumholz et al. 2014). Indeed, the stellar system that forms in a clump may expand (e.g., Orion Nebula cluster, Kuhn et al. 2019) as it emerges from the molecular gas. In this process, unlike other objects (e.g., binary or triple stellar systems and individual stars) whose kinetics can be changed easily, gravitationally bound OCs contain a large number of stars, making them potentially good kinematic fossils for investigating their progenitors. The _Gaia_ mission has published its data release 3 (_Gaia_ DR3, Gaia Collaboration et al. 2016, 2022), which includes astrometric and photometric measurements of about 1.8 billion stars of different types, ages and evolutionary stages, and the determinations of the radial velocities (RVs) of more than 33 million objects. Figure 1: Sketch map of the evolutionary pathway from clumps in a GMC to proto stellar clusters consisting of embedded clusters, and ultimately to bound open clusters and/or unbound star associations. Meanwhile, the data quality of _Gaia_ has been further improved. On the other hand, at present, thousands of OCs have been discovered in the Milky Way (e.g., Hao et al. 2022; Castro-Ginard et al. 2022), particularly with precise astrometric parameters (e.g., Cantat-Gaudin et al. 2020; Hao et al. 2021; Tarricq et al. 2021), which provide a good opportunity to investigate the characteristics of their progenitors. The remaining paper is organized as follows. Section 2 describes the sample of OCs used in this work. The kinematic properties of OCs are studied in Sect. 3.1, which mainly concentrates on the peculiar motions of OCs in the Galaxy. Then, adopting a dynamical method, we derived the masses of progenitor clumps where OCs were born in Sect. 3.2, and the statistical characteristics of the derived clumps were also compared with the previously known results of Galactic clumps. Next, in Sect. 3.3 we investigated whether the present-day OCs house massive O-type stars, ultimately confirming the indication from the derived progenitor clumps of OCs. In Sect. 
4, we discussed the reason for the low survival rate of gravitationally bound OCs and explored which embedded clusters can evolve into long-lived OCs. Finally, we summarized this work in Sect. 5. ## 2 Sample Up to now, thousands of OCs have been identified in _Gaia_ data, and their ages cover a wide range, from a few million years (Myr) to billions of years. Based on previous works (i.e., Koposov et al. 2017; Cantat-Gaudin et al. 2018, 2019; Castro-Ginard et al. 2018, 2019, 2020; Liu & Pang 2019; Ferreira et al. 2019; Sim et al. 2019), Cantat-Gaudin et al. (2020) determined the parameters of 2 017 OCs found in _Gaia_ data release 2 (_Gaia_ DR2, Gaia Collaboration et al. 2018). Similarly, based on previous studies (i.e., Dias et al. 2002; Kharchenko et al. 2013; Dias et al. 2014; Schmeja et al. 2014; Scholz et al. 2015; Castro-Ginard et al. 2018; Cantat-Gaudin et al. 2018, 2019; Castro-Ginard et al. 2019, 2020; Liu & Pang 2019; Hao et al. 2020; Ferreira et al. 2020; He et al. 2021), Hao et al. (2021) synthesized a sample of more than 3 700 OCs, whose parameters have been determined according to _Gaia_ early data release 3 (_Gaia_ EDR3, Gaia Collaboration et al. 2021). We compiled a large number of Galactic OCs with three-dimensional kinematic parameters through the following steps. For the OCs synthesized by Hao et al. (2021), after removing 134 potentially false positive or non-existing clusters reported in Dias et al. (2002) and Cantat-Gaudin & Anders (2020), we cross-matched the remaining OCs with the 2 017 OCs listed in the work of Cantat-Gaudin et al. (2020), where 1 821 non-repetitive OCs were found. Then, we cross-matched the members stars of 2 017 OCs compiled by Cantat-Gaudin et al. (2020) with the _Gaia_ DR3 data set and updated their astrometric parameters. Among these objects, there are 1 772 OCs that have member stars with RV measurements provided by _Gaia_ DR3. For the 1 821 OCs listed in Hao et al. (2021), we have also updated their astrometric parameters by using _Gaia_ DR3, and 1 456 OCs have member stars with RV measurements. Thus, 3 228 OCs with _Gaia_ RV measurements were obtained. For each of these OCs, we used a weighted procedure to determine its mean RV and RV uncertainty based on the errors of individual measurements, following Soubiran et al. (2018). In the end, after filtering 375 objects with RV uncertainties larger than 10 km s\({}^{-1}\), we gathered a sample of 2 853 OCs with reliable mean RV parameters. Age parameters of the selected OCs come from Cantat-Gaudin et al. (2020) and Hao et al. (2021). The _Gaia_ DR3 data set is a large increase of the OC members that with RV measurements available. Taking advantage of RV measurements from both _Gaia_ DR2 and ground-based spectroscopic surveys and catalogues, Tarricq et al. (2021) computed the weighted RVs and RV uncertainties of 1 382 OCs in Cantat-Gaudin et al. (2020). Selecting the most reliable OCs that have an RV uncertainty lower than 3 km s\({}^{-1}\) based on at least 3 member stars, Tarricq et al. (2021) obtained 513 clusters in their sample. Under this criterion, there are 1 317 OCs in our sample that can be considered to possess the most reliable mean RVs, which have a median RV uncertainty of 1.01 km s\({}^{-1}\) and a median number of 14 member stars with RV measurements, benefiting from _Gaia_ DR3. In Figure 2, we presented the RVs, RV uncertainties, and the numbers of member stars with RV measurements of 2 853 OCs in the sample. 
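As a small illustration of the weighted radial-velocity averaging mentioned above, an inverse-variance weighting scheme can be written as follows. This is only a sketch of one common convention; the exact weights of Soubiran et al. (2018) are not reproduced here.

```python
import numpy as np

def weighted_mean_rv(rv, rv_err):
    """Inverse-variance weighted mean RV of a cluster's member stars and its
    uncertainty (one standard weighting convention, shown for illustration)."""
    rv, rv_err = np.asarray(rv, float), np.asarray(rv_err, float)
    w = 1.0 / rv_err**2
    mean = np.sum(w * rv) / np.sum(w)
    err = np.sqrt(1.0 / np.sum(w))
    return mean, err

# e.g., three member stars with hypothetical values in km/s:
weighted_mean_rv([12.3, 11.8, 12.9], [0.5, 1.2, 0.8])
```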
Figure 2: RV uncertainties as a function of the RVs of OCs in the sample. The numbers (\(N\)) of OC member stars with RV measurements are colour coded. For about 35% (997 OCs) of the sample, the mean RV is based on more than 10 member stars, and for about 71% (2 015 OCs) it is based on at least 3 member stars. The RVs of 525 OCs are based on only one member star, which represents \(\sim\)18% of the OCs in our sample. 2 415 OCs (\(\sim\)85%) have RV uncertainties lower than 5 km s\({}^{-1}\), and the RV uncertainties of 1 970 OCs (\(\sim\)70%) are lower than 3 km s\({}^{-1}\). The median uncertainty of the weighted mean RV is 1.64 km s\({}^{-1}\) when the full sample is considered. The sample of 2 853 OCs was used for analysis in the next sections. ## 3 Results ### Peculiar motions of OCs The large number of member stars of OCs makes them potentially good kinematic fossils for investigating their progenitors. The peculiar motions (PMs) are non-circular motions with respect to the rotating Galactic disc and are significant kinematic attributes of OCs, the study of which enables us to use OCs to trace their progenitors. For the OCs obtained in Sect. 2, we have calculated their PM velocities (\(v_{\rm pm}\)), which were derived from their measured distances, proper motions, and radial velocities following Reid et al. (2009) and Xu et al. (2013). In the Galactocentric reference frame, the three-dimensional motions of OCs were straightforwardly calculated using the linear speeds projected onto the celestial sphere. Then, the PMs of OCs were estimated by subtracting Galactic rotation and the solar motions. Here, a Galactic rotation speed near the solar circle of 236 \(\pm\) 7 km s\({}^{-1}\), a distance of the Sun to the Galactic centre of 8.15 \(\pm\) 0.15 kpc and solar motions of \(U_{\odot}\) = 10.6 \(\pm\) 1.2 km s\({}^{-1}\), \(V_{\odot}\) = 10.7 \(\pm\) 6.0 km s\({}^{-1}\) and \(W_{\odot}\) = 7.6 \(\pm\) 0.7 km s\({}^{-1}\) were adopted (Reid et al. 2019), where \(U\), \(V\) and \(W\) are the velocity components towards the Galactic centre, in the direction of Galactic rotation and towards the North Galactic Pole, respectively. The PM velocities of OCs were defined as \(v_{\rm pm}\) = \(\sqrt{U^{2}+V^{2}+W^{2}}\). Figure 3(A) displays the distribution of the PM velocities of OCs of different ages with respect to their Galactocentric distance, which demonstrates that the PM velocities of OCs at different distances from the Galactic centre are comparable. Besides, almost all OCs (99.2\(\%\)) are located within the Galactocentric distance range of [4, 16] kpc; hence, the influence of the Galactic "bar" on the PM velocities of OCs in the sample should be negligible. Figure 3(B) shows the distribution of PM velocities of the OCs of different ages versus their \(z\)-heights from the Galactic mid-plane, which indicates that there is no significant distinction of the PM velocities of OCs with different \(z\)-heights. Most OCs cross the Galactic plane several times in one orbital period and they gradually migrate from the Galactic disk as they age (Wu et al. 2009; Hao et al. 2021). The above results imply that there may be no difference in the PM velocities of OCs when they travel in the Galaxy. We then investigated whether the PM velocities of OCs are variable as they age. As shown in Figure 4(A), we present the PM velocities of OCs as a function of cluster age. For OCs younger than one thousand million years, there is no visible variation of the PM velocities with the increasing OC age. 
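A minimal sketch of this peculiar-motion calculation, using astropy and assuming a flat rotation curve at 236 km s\({}^{-1}\), is given below. It is not the exact pipeline of Reid et al. (2009) and Xu et al. (2013), and the example inputs are hypothetical.

```python
import numpy as np
import astropy.units as u
import astropy.coordinates as coord

# Galactocentric frame with the constants adopted in the text (Reid et al. 2019):
# R0 = 8.15 kpc, Theta0 = 236 km/s, (U, V, W)_sun = (10.6, 10.7, 7.6) km/s.
galcen = coord.Galactocentric(
    galcen_distance=8.15 * u.kpc,
    galcen_v_sun=coord.CartesianDifferential([10.6, 236.0 + 10.7, 7.6] * u.km / u.s),
)

def peculiar_velocity(ra, dec, dist, pmra, pmdec, rv, theta0=236.0):
    """v_pm of a cluster: transform (distance, proper motion, RV) to Galactocentric
    Cartesian velocities and subtract a flat rotation curve of theta0 km/s at the
    cluster's position; the magnitude equals sqrt(U^2 + V^2 + W^2)."""
    c = coord.SkyCoord(ra=ra, dec=dec, distance=dist,
                       pm_ra_cosdec=pmra, pm_dec=pmdec, radial_velocity=rv)
    g = c.transform_to(galcen)
    x, y = g.cartesian.x.to_value(u.kpc), g.cartesian.y.to_value(u.kpc)
    vx = g.velocity.d_x.to_value(u.km / u.s)
    vy = g.velocity.d_y.to_value(u.km / u.s)
    vz = g.velocity.d_z.to_value(u.km / u.s)
    R = np.hypot(x, y)
    vcx, vcy = theta0 * y / R, -theta0 * x / R   # circular rotation, same sense as the Sun
    return np.sqrt((vx - vcx) ** 2 + (vy - vcy) ** 2 + vz ** 2)

# Hypothetical cluster, for illustration only:
peculiar_velocity(120.0 * u.deg, -35.0 * u.deg, 1.5 * u.kpc,
                  -5.0 * u.mas / u.yr, 3.0 * u.mas / u.yr, 20.0 * u.km / u.s)
```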
The Pearson correlation coefficient (PCC), \(\rho_{\rm X,Y}\), was used to evaluate the correlation between the PM velocities of the OCs and the cluster ages: \[\rho_{\rm X,Y}=\frac{cov(X,Y)}{\sigma_{X}\sigma_{Y}}=\frac{E(XY)-E(X)E(Y)}{\sqrt{E(X^{2})-E^{2}(X)}\sqrt{E(Y^{2})-E^{2}(Y)}}, \tag{1}\] where \(cov(X,Y)\) denotes the covariance between the two variables, and \(E\) denotes the mean of each variable. The PCC between the PM velocities and ages is about 0.19 for all OCs in the sample, and only 0.17 for OCs younger than 1 Gyr (Figure 4(A)). Hence, the variation of the PM velocities is small for OCs from infancy to the old age of one billion years.

Figure 3: Distributions of OCs with different PM velocities. _Panel_ (A): OCs with different PM velocities as a function of Galactocentric distance. The Solar circle (black dashed line) is at 8.15 kpc (Reid et al. 2019). _Panel_ (B): OCs with different PM velocities as a function of cluster \(z\)-height. The ages of the OCs are colour coded.

Figure 4: Properties of the PM velocities of OCs. _Panel_ (A): PM velocity as a function of cluster age. The dashed black line is \(10^{9}\) years. _Panel_ (B): distribution of the PM velocities of OCs. The solid purple line shows the best-fitting Maxwellian velocity distribution. The dashed red line indicates the most probable velocity. The dashed black line is the velocity of 48 km s\({}^{-1}\).

We also investigated the distribution characteristics of the PM velocities of OCs, as shown in Figure 4(B). The distribution of the PM velocities of OCs can be fitted with a Maxwellian velocity distribution function: \[f(v)=A\times e^{-B\cdot v^{2}}\cdot v^{2}. \tag{2}\] The best-fitting parameters \(A\) and \(B\) are 4.06 and 0.0035, with 95\(\%\) confidence intervals of [3.69, 4.44] and [0.0033, 0.0038], respectively. According to the Maxwellian velocity distribution, the most probable velocity is: \[v_{p}=\sqrt{\frac{1}{B}}. \tag{3}\] Besides, the probability that the velocity is within the finite interval \([v_{1},v_{2}]\) is: \[P(v)=\int_{v_{1}}^{v_{2}}f(v)\,dv,\ \int_{0}^{\infty}f(v)\,dv=1. \tag{4}\] Hence, the probability of \(v_{\rm pm}\) being in the range [0, 48] km s\({}^{-1}\) is 99.9%, and the most probable velocity of an OC, \(v_{p}\), is \(\sim\)17 km s\({}^{-1}\) (see Figure 4(B)). Since the variation of the PM velocities is very small for OCs from infancy to the age of one billion years, it is possible to connect the present-day OCs to their progenitors. Then, the initial conditions for producing OCs can be revealed. 
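The fit of Eq. (2) and the quantities in Eqs. (3)–(4) can be reproduced with a short script like the one below. This is a minimal sketch that fits a histogram of \(v_{\rm pm}\) with scipy and normalises the fitted curve before integrating; it uses a synthetic, Maxwellian-like stand-in sample rather than the actual OC catalogue, so the fitted numbers only approximate those quoted in the text.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

def maxwellian(v, A, B):
    # Eq. (2): f(v) = A * exp(-B * v^2) * v^2
    return A * np.exp(-B * v**2) * v**2

# Stand-in sample of PM velocities (km/s): speeds of a 3D isotropic Gaussian,
# chosen so that the most probable velocity is ~17 km/s.  Replace with the
# measured v_pm values of the 2 853 OCs when available.
rng = np.random.default_rng(0)
v_pm = np.linalg.norm(rng.normal(0.0, 12.0, size=(2853, 3)), axis=1)

counts, edges = np.histogram(v_pm, bins=40, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
(A, B), _ = curve_fit(maxwellian, centres, counts, p0=(1e-3, 0.003))

v_p = np.sqrt(1.0 / B)                                  # Eq. (3): most probable velocity
norm = quad(maxwellian, 0.0, np.inf, args=(A, B))[0]    # enforce the normalisation of Eq. (4)
P_0_48 = quad(maxwellian, 0.0, 48.0, args=(A, B))[0] / norm
print(f"v_p = {v_p:.1f} km/s, P(0 < v < 48 km/s) = {P_0_48:.3f}")
```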
### The progenitor clumps of OCs The present-day PM velocity of an OC has two possible origins: the separation velocity of the OC from the system in which it was born, or the velocity inherited from its natal system. In the former scenario, most embedded clusters evolve into unbound stellar associations after separating from their natal systems and ultimately become Galactic field stars, while only a few percent survive as bound OCs (see Figure 1), in line with the very low fraction of bound OCs reported in previous studies (e.g., Lada & Lada 2003; Krumholz et al. 2019). We found several arguments that favor the former scenario, e.g., not all stars form in centrally concentrated distributions; rather, they form in complex substructured distributions that follow the gas (e.g., Wright et al. 2014); stellar clusters formed in clumps are expanding when they emerge from the gas (e.g., Kuhn et al. 2019); the spatial distributions of gas and stars can determine whether the cluster remains bound or not (e.g., Smith et al. 2011, 2013); and the substructured distribution of a stellar cluster can help it survive (e.g., Allison et al. 2009). Section 3.1 shows that the PM velocities of OCs vary little from infancy to the old stage. Thus, if we suppose that the PM velocities of OCs are nearly the separation velocities from their natal systems, the masses of the progenitors that gave birth to the OCs can be estimated, which provides an opportunity to study the initial properties of the OCs' progenitors. Observations show that dense clumps that are gravitationally bound contain many denser cores (e.g., Urquhart et al. 2014). It has become clear that the stellar clusters formed in clumps are substructured, containing many embedded clusters born in denser cores, as illustrated in Figure 1. Here, similar to the definition given by Kennicutt & Evans (2012), the scales (diameters) of the clumps to which we refer are 1-10 pc and those of the denser cores are 0.1-1.0 pc. The mass of a clump (\(M_{\rm c}\)) can be estimated as the total mass of stars (\(M_{\star}\)) and gas (\(M_{\rm gas}\)), where \(M_{\rm c}\) = \(M_{\star}/\)SFE. The SFE, or star formation efficiency, is a fundamental parameter of star formation in a region, which is defined as: \[{\rm SFE}=\frac{M_{\star}}{M_{\star}+M_{\rm gas}}. \tag{5}\] Here, \(M_{\star}\) and \(M_{\rm gas}\) are the total stellar and gaseous masses contained in the region, respectively. The total potential energy of the stars before gas expulsion, \(\Omega_{1}\), can be approximated by: \[\Omega_{1}\sim-M_{\star}\cdot\frac{G\cdot M_{\rm c}}{r_{\rm h}}, \tag{6}\] where \(M_{\star}\) is the mass of stars, \(r_{\rm h}\) is the radius that contains half of the total mass in stars and \(G\) is the gravitational constant. When the gas is expelled, the potential energy of the system, \(\Omega_{2}\), arises only from the stellar component, i.e., \[\Omega_{2}\sim-M_{\star}\cdot\frac{G\cdot M_{\star}}{r_{\rm h}}. \tag{7}\] Following Farias et al. (2015, 2018), we assumed that the gas is expelled instantaneously. Then, the stars have not had time to change their kinetic energy after gas expulsion, so we can assume \(\Omega_{2}={\rm SFE}\cdot\Omega_{1}\). Combining this with Eq. (5), the separation velocity (\(v\)) of an OC from its natal proto-OC system can be approximated by the escape velocity as: \[v=\sqrt{-\frac{2\Omega_{2}}{M_{\star}}}=\sqrt{\frac{2GM_{\star}}{r_{\rm h}}}=\sqrt{\frac{2GM_{\rm c}\cdot{\rm SFE}}{r_{\rm h}}}. \tag{8}\] In the following text, the term "proto-OC system" will refer to the proto stellar clusters formed in clumps, consisting of many substructures called embedded clusters. The mass of a clump can then be estimated as: \[M_{\rm c}=\frac{v^{2}r_{\rm h}}{2G\cdot{\rm SFE}}. \tag{9}\] To derive the masses of the progenitor clumps of OCs, three parameters need to be determined, i.e., the separation velocity \(v\), the SFE, and the radius \(r_{\rm h}\). _Separation velocity_ \(v\). The variation of the PM velocities of the OCs younger than one billion years is very small. For those OCs with ages of nearly one billion years, their PM velocity variations are estimated to be only about a few km s\({}^{-1}\). 
In the following, we selected only the OCs younger than one billion years for further statistical analyses and supposed that their PM velocities are approximately the separation velocities from their natal proto-OC systems, i.e., \(v_{\rm pm}\simeq v\). _SFE._ Estimates of the SFE are indirect and uncertain; e.g., the SFE globally observed for GMCs is 1-5\(\%\) (Duerr et al., 1982; Grudic et al., 2018), and in star-forming regions of embedded clusters, SFEs range from approximately 10-30\(\%\) (Lada & Lada, 2003), while a bound OC would emerge only if the SFE is greater than 50\(\%\) (Wilking & Lada, 1983). Considering that not all embedded clusters in a proto-OC system can survive as bound OCs, we adopt SFE = 40\(\%\) for the systems that can produce OCs. _Radius_ \(r_{\rm h}\). As mentioned above, the stars formed in a proto-OC system are substructured and follow the gas. Since the stars are more concentrated than the gas (e.g., Krumholz et al., 2019), the \(r_{\rm h}\) values of the proto-OC systems that we adopted are slightly smaller than the radii of clumps. Referring to the clumps found in the submillimetre survey ATLASGAL (Atacama Pathfinder Experiment Telescope Large Area Survey of the Galaxy, Urquhart et al., 2014), the radii, \(r_{\rm h}\), were set to the range [0.3, 3.0] pc. It can be expected that larger PM velocities of OCs imply richer and more massive progenitor clumps, because more significant momentum injection is required. There is a mass-radius relation for the Galactic clumps (Krumholz et al., 2019), i.e., \(r\propto M^{\alpha}\). Combining this relation with Eq. (8), we obtain \(r\propto v^{2\alpha/(1-\alpha)}\). The index (\(\alpha\)) of the mass-radius relation for the clumps is in the range of 0.3-0.6 (Wong et al., 2008; Roman-Duval et al., 2010; Urquhart et al., 2018). Here, the adopted value of \(\alpha\) is 0.5. We also tested different values of \(\alpha\), and the following results are not significantly different. We first extracted the OCs with ages younger than 1 Gyr. Then, since the fitted Maxwellian velocity distribution in Sect. 3.1 shows that the probability of the PM velocities of OCs being in the range [0, 48] km s\({}^{-1}\) is 99.9%, we rejected OCs with PM velocities larger than 48 km s\({}^{-1}\); the sources with PM velocity uncertainties larger than 10 km s\({}^{-1}\) were also eliminated, eventually yielding a subsample of 1 571 OCs. For these OCs, the masses of their natal clumps (\(M_{\rm c}\)) were deduced with the above dynamical method, which produces a range from \(10^{2}\)\({\rm M}_{\odot}\) to \(10^{6}\)\({\rm M}_{\odot}\). In fact, the vast majority of the progenitor clumps have masses of \(10^{3}\) to \(10^{6}\)\({\rm M}_{\odot}\), and only \(\sim\)1% of them are smaller than \(10^{3}\)\({\rm M}_{\odot}\). The mass of the clump corresponding to the most probable velocity, \(v_{p}\), is 2.6 \(\times 10^{4}\)\({\rm M}_{\odot}\), consistent with the fact that about 48% of the derived clumps have masses of the order of \(10^{4}\)\({\rm M}_{\odot}\). The derived masses of the clumps are mainly (\(\sim\)82\(\%\)) in the range from \(10^{4}\)\({\rm M}_{\odot}\) to \(10^{6}\)\({\rm M}_{\odot}\), which is comparable to the expectations for clump candidates where young massive stellar clusters are expected to be found (e.g., Urquhart et al. 2018); indeed, such systems are anticipated to yield OCs (e.g., Lada & Lada 2003). 
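As an illustration of Eq. (9) with the parameter choices described above, the sketch below evaluates clump masses for a few example PM velocities. The normalisation used here to map the velocity range onto half-mass radii between 0.3 and 3.0 pc through \(r\propto v^{2\alpha/(1-\alpha)}\) is an assumption of this sketch, not a prescription from the text; for \(v\simeq 17\) km s\({}^{-1}\) it gives masses of order \(10^{4}\) \({\rm M}_{\odot}\), broadly consistent with the numbers quoted above.

```python
import numpy as np

G = 4.30091e-3   # gravitational constant in pc (km/s)^2 / M_sun
SFE = 0.40       # adopted star formation efficiency
ALPHA = 0.5      # mass-radius index, r ∝ M^alpha

def clump_mass(v_pm, r_h, sfe=SFE):
    """Eq. (9): M_c = v^2 r_h / (2 G SFE), with v in km/s, r_h in pc, M_c in M_sun."""
    return v_pm**2 * r_h / (2.0 * G * sfe)

def half_mass_radius(v_pm, r_min=0.3, r_max=3.0, v_max=48.0):
    # r ∝ v^{2α/(1-α)}; for α = 0.5 this is r ∝ v².  Mapping the fastest OCs
    # (48 km/s) to 3.0 pc and clipping at 0.3 pc is an assumption of this sketch.
    exponent = 2.0 * ALPHA / (1.0 - ALPHA)
    return np.clip(r_max * (v_pm / v_max) ** exponent, r_min, r_max)

v = np.array([10.0, 17.0, 30.0, 48.0])   # example PM velocities (km/s)
r = half_mass_radius(v)
print(clump_mass(v, r))                   # -> roughly [9e3, 3e4, 3e5, 2e6] M_sun
```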
The mass function of the derived progenitor clumps (clump mass function, CMF) of OCs was also determined and compared with previously reported results. As shown in Figure 5, the mass function for these progenitor clumps, \(\psi(M_{\rm c})\equiv{\rm d}N/{\rm d}M_{\rm c}\propto{M_{\rm c}}^{\beta_{\rm c}}\), was obtained, where the best-fitting power-law exponent was found to be \(\beta_{\rm c}\) = \(-\)2.11 \(\pm\) 0.07. Our adopted lower-mass limit is at log (\({M_{\rm c}}/{\rm M}_{\odot}\)) = 3.7, because below this limit the mass function begins to deviate significantly from the extrapolated power law. Next, we derived the best-fitting value of \(\beta_{\rm c}\) and its error from a least-squares estimation of the clump masses above the mass-fitting limit. In order to obtain the index of the clump mass function, we fixed the (mass) bin widths and counted the number of clumps per bin. Besides, we adopted different values of \(\alpha\) in the range [0.3, 0.6], and the resulting indices \(\beta_{\rm c}\) were within the uncertainty of the above result. The derived \(\beta_{\rm c}\) is in good agreement with the value of \(-\)2.12 \(\pm\) 0.15 reported in the \(Herschel\) InfraRed Galactic Plane Survey (Olmi et al. 2018), commensurate with the result of \(-\)2.10 deduced from numerical simulations (Guszejnov & Hopkins 2015), and slightly flatter than \(Salpeter\)'s value (\(-\)2.35, Salpeter 1955). This value also indicates that the power-law exponent of clumps harbouring predecessor OCs does not differ significantly from that of the overall sample of Galactic clumps.

Figure 5: Mass function of clumps that can produce OCs. The blue line indicates a maximum likelihood fit of a power law to the mass function, where the best-fitting index is \(\beta_{\rm c}\) = \(-\)2.11 \(\pm\) 0.07. The vertical dashed line shows the adopted lower-mass limit at log(\({M_{\rm c}}/{\rm M}_{\odot}\)) = 3.7.

Both the masses and the mass function of the derived progenitor clumps of OCs are nearly concordant with previously reported results for Galactic clumps, suggesting that the dynamical method adopted here is reasonable, which also indicates a potential connection between the PM velocities of OCs and their natal clumps. 
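A binned least-squares estimate of the CMF slope of the kind described above can be sketched as follows. The synthetic clump masses are drawn from a power law purely to show that the procedure recovers the input index; the bin count and the completeness limit of log(\(M_{\rm c}/{\rm M}_{\odot}\)) = 3.7 follow the text, while everything else is an assumption of the sketch.

```python
import numpy as np

def sample_power_law(n, beta, m_lo, m_hi, rng):
    """Draw clump masses from dN/dM ∝ M^beta via inverse-transform sampling."""
    u = rng.uniform(size=n)
    a = beta + 1.0
    return (m_lo**a + u * (m_hi**a - m_lo**a)) ** (1.0 / a)

def fit_cmf_slope(masses, log_m_min=3.7, n_bins=12):
    """Least-squares fit of beta_c in psi(M_c) = dN/dM_c ∝ M_c^beta_c."""
    m = masses[np.log10(masses) >= log_m_min]
    edges = np.logspace(np.log10(m.min()), np.log10(m.max()), n_bins + 1)
    counts, _ = np.histogram(m, bins=edges)
    widths = np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centres
    good = counts > 0
    slope, _ = np.polyfit(np.log10(centres[good]),
                          np.log10(counts[good] / widths[good]), 1)
    return slope

rng = np.random.default_rng(42)
masses = sample_power_law(1571, beta=-2.11, m_lo=10**3.7, m_hi=10**6.0, rng=rng)
print(fit_cmf_slope(masses))    # -> close to the input index of -2.11
```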
### OCs and O-type stars Massive O-type stars are believed to play a significant role in the formation and evolution of stellar clusters, and also have profound effects on open star cluster formation (e.g., Lada & Lada 2003). It is therefore of great interest to investigate whether there are O-type stars present in OCs. Stellar clusters with masses of a few hundred \({\rm M}_{\odot}\) to \(10^{5}\)\({\rm M}_{\odot}\) are expected to contain more than one O-type star (Weidner et al. 2013). Consequently, as described in Sect. 3.2, since the masses of the progenitor clumps that can produce OCs are in the range of \(10^{3}\)-\(10^{6}\)\({\rm M}_{\odot}\), they are massive enough to give birth to O-type stars. The low-mass embedded clusters in the progenitor clumps generally have difficulty evolving into OCs (e.g., Lada & Lada 2003), while some of the high-mass embedded clusters have probably survived as OCs and still harbour O-type stars. Therefore, we investigated whether the OCs in the sample contain O-type stars. After inspecting the OCs and the observed O-type stars, we found that many O-type stars are indeed present in present-day OCs. The O-type star catalogue used in this work, containing 1 089 O-type stars, was taken from Xu et al. (2021), who cross-matched the spectroscopically confirmed O-type stars collected by Skiff (2014) with _Gaia_ EDR3. After cross-matching the _Gaia_ source_id of the 1 089 O-type stars with the 284 889 OC members in our sample, a total of 112 O-type stars were found in 56 OCs. Table 1 in Appendix A presents these OCs, including their names and the number and spectral types of their O-type stars. The fraction of young OCs (\(<\) 10 Myr) harbouring massive O-type stars is \(\sim\)18\(\%\). In particular, as shown in Figure 6(A), for OCs with ages of 2 to 4 Myr, the fraction of OCs harbouring O-type stars is as high as 22\(\%\), which decreases to about 15\(\%\) for OCs of 8 to 10 Myr. O-type stars are the most massive stars on the main sequence, and even the least massive O-type star has an initial mass of 16 \({\rm M}_{\odot}\) (Meynet & Maeder 2003). The most massive O-type stars spend less than one Myr on the main sequence and explode as supernovae after 3 or 4 Myr, while the least massive ones can remain on the main sequence for about 10 Myr, but cool slowly during this time and become early B-type stars (Weidner & Vink 2010). Thus, if an OC contains any O-type star, there is an upper limit on the age of the cluster, i.e., it must be younger than about 10 Myr. As shown in Figure 6(B), the number of observed OCs does not obviously decrease at 3-4 Myr (i.e., the supernova explosion timescale) or at 10 Myr (i.e., the maximum lifetime of O-type stars), which suggests that the evolution of O-type stars probably does not destroy their resident OCs.

Figure 6: OCs and O-type stars. _Panel_ (A): fraction of OCs harbouring O-type stars for different age groups of OCs. _Panel_ (B): fraction of OCs (2–20 Myr) in different age groups.

We also studied the characteristics of the PM velocities of the OCs harbouring O-type stars. The median value of \(v_{\rm pm}\) for young OCs (ages \(<\) 10 Myr) harbouring O-type stars is 18 \(\pm\) 3 km s\({}^{-1}\), which is similar to that of young OCs without O-type stars, i.e., 17 \(\pm\) 5 km s\({}^{-1}\). The mean \(v_{\rm pm}\) of the young OCs containing O-type stars is 19 km s\({}^{-1}\), while the corresponding value for OCs without O-type stars is 23 km s\({}^{-1}\). Figure 7 shows the PM velocities of OCs containing O-type stars as a function of the number of O-type stars in the cluster. For some (e.g., distant) OCs, the astrometric uncertainties translate into large PM velocity uncertainties. The NGC 3603 and FSR 0696 OCs have significantly large PM velocity uncertainties of 183 km s\({}^{-1}\) and 78 km s\({}^{-1}\), respectively, as their relative parallax errors are as high as 30%. Hence, these two clusters are not presented in Figure 7. We found that the \(v_{\rm pm}\) of OCs containing 1-2 O-type stars are comparable to those with 5-8 O-type stars, indicating that there may be no relationship between the number of harboured O-type stars and the PM velocities of OCs. Besides, about 61\(\%\) of the young OCs harbouring O-type stars are located in the inner Galaxy, probably due to the presence of more numerous massive GMCs there (Heyer & Dame 2015). ## 4 Discussion The low fraction of gravitationally bound open star clusters is still a mystery. Previous studies have shown that various stellar feedback mechanisms play important roles in the formation and evolution of stellar clusters (e.g., McKee 1989; Kroupa & Boily 2002; Murray & Rahman 2010; van Kempen et al. 2010; Krumholz et al. 2014; Bally 2016; Li et al. 2020). 
O-type stars are the most massive stars, and their formation and evolution are accompanied by violent feedback to their surroundings in the form of copious amounts of ultraviolet radiation, powerful stellar winds and supernova explosions (e.g., Dale et al. 2013; Dale & Bonnell 2008; Dekel & Krumholz 2013). Such destructive mechanisms can disperse the dense molecular material and impede the birth of new stars, making it very difficult to reach the star formation efficiency (i.e., SFE \(>\) 50\(\%\), Wilking & Lada 1983) needed for the formation of bound OCs from embedded clusters (Lada & Lada 2003). The results in Sect. 3.2 and Sect. 3.3 have indicated that the progenitor clumps of OCs are capable of gestating O-type stars and, in particular, that many O-type stars are even present in present-day OCs. The considerable influence of O-type stars in the progenitor clumps of OCs probably results in the vast majority of embedded clusters being unable to survive and evolve into OCs. Besides, the observed low SFEs (\(<\) 50\(\%\)) for most embedded clusters (Lada & Lada 2003) can probably also be partially attributed to the existence of O-type stars.

Figure 7: PM velocities and errors of OCs (\(<\) 10 Myr) containing O-type stars as a function of the number of O-type stars in the cluster. The error bars indicate the PM velocity uncertainties of the OCs. The ages of the OCs are colour coded.

However, this then raises the question of which embedded clusters can survive under the conditions caused by the violent feedback from O-type stars. To study which embedded clusters can survive as bound OCs, we investigated the density properties of observed embedded clusters. The stellar mass density, \(\rho\), of 34 known embedded clusters was calculated based on their sizes and masses provided by Lada & Lada (2003). The result is presented in Figure 8(A). What is striking is that \(\sim\)6\(\%\) of the embedded clusters have a high stellar mass density of \(\sim\)4.0 \(\times\) 10\({}^{3}\)\({\rm M}_{\odot}\)\({\rm pc}^{-3}\); in contrast, the others are all below 1.0 \(\times\) 10\({}^{3}\)\({\rm M}_{\odot}\)\({\rm pc}^{-3}\). \(Trapezium\), as one of the \(\sim\)6\(\%\) of known embedded clusters with a sufficient stellar mass density, contains O-type stars and has been identified as a possible predecessor of an OC (Kroupa et al. 2001). Besides, it is interesting that this percentage of \(\sim\)6\(\%\) is consistent with the finding that only 4-7\(\%\) of embedded clusters survive as bound OCs (Lada & Lada 2003). A further step has also been made to judge whether the observed embedded clusters would survive as OCs by estimating their virial parameter, \(Q\) (\(M_{\rm virial}/M_{\rm ec}\)), as shown in Figure 8(B). Following Krumholz et al. (2019), \(Q\) was determined as \(Q\equiv 5\sigma^{2}R/GM\), where \(R\) is the radius and \(M\) is the mass of the cluster. Here, we adopted a one-dimensional velocity dispersion, \(\sigma\), of 0.7 km s\({}^{-1}\), which is the typical limiting value for observed young bound OCs (Cantat-Gaudin & Anders 2020). Statistically, we found that only the \(\sim\)6\(\%\) of dense embedded clusters have virial parameters of \(Q\)\(<\) 1, which supports the idea that they will likely evolve into bound stellar systems. Conservatively, we speculate that embedded clusters can survive as bound OCs as long as their stellar mass densities are sufficiently high. However, it should be noted that the mass density given here is only a rough threshold for embedded clusters that can evolve to the phase of bound OCs. A more precise threshold of the mass density is expected to be determined using a larger sample of embedded clusters. 
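The two diagnostics used above can be computed with a few lines. The sketch below uses \(G\) in units of pc (km s\({}^{-1}\))\({}^{2}\) \({\rm M}_{\odot}^{-1}\), a uniform-sphere density estimate, and hypothetical cluster parameters; it does not reproduce the Lada & Lada (2003) catalogue values.

```python
import numpy as np

G = 4.30091e-3   # gravitational constant in pc (km/s)^2 / M_sun

def virial_parameter(mass, radius, sigma_1d=0.7):
    """Q = 5 sigma^2 R / (G M); Q < 1 indicates a likely bound cluster."""
    return 5.0 * sigma_1d**2 * radius / (G * mass)

def stellar_mass_density(mass, radius):
    """Mean stellar mass density (M_sun pc^-3), assuming a uniform sphere."""
    return mass / (4.0 / 3.0 * np.pi * radius**3)

# Hypothetical embedded clusters: a dense, Trapezium-like system and a more diffuse one.
mass = np.array([900.0, 300.0])     # M_sun
radius = np.array([0.35, 1.0])      # pc
print(virial_parameter(mass, radius))       # -> roughly [0.22, 1.9]
print(stellar_mass_density(mass, radius))   # -> roughly [5.0e3, 72] M_sun pc^-3
```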
## 5 Summary We conducted a pilot study on the formation of OCs in the Milky Way. From infancy to the old stage, the variation of the PM velocities of OCs appears to be slight. Based on this, the masses of the progenitor clumps capable of producing OCs were obtained through a dynamical approach, and their statistics are concordant with the known results for Galactic clumps, such as the CMF. In addition, as indicated by the masses of the progenitor clumps, the investigation confirms that many massive O-type stars exist in present-day OCs, whose destructive stellar feedback can lead to a large number of embedded clusters being destroyed, while only those with sufficiently high densities can survive and evolve to the phase of bound OCs. These results could provide helpful indications of OC formation and are expected to blaze a new trail for studying star formation in our Galaxy.

Figure 8: Stellar mass density (_Panel_ A) and virial parameter (_Panel_ B) of known embedded clusters. The dashed black line in _Panel_ B is the virial parameter \(Q\) = 1.

###### Acknowledgements. We appreciate the anonymous referee for the comments, which helped us to improve the paper. This work was funded by the NSFC grant No. 11933011 and by the Key Laboratory for Radio Astronomy. YJL thanks support from the Natural Science Foundation of Jiangsu Province (grant number BK20210999), the Entrepreneurship and Innovation Program of Jiangsu Province, and NSFC grant No. 12203104. The authors acknowledge the open cluster catalogue compiled by Cantat-Gaudin et al. (2020). We used data from the European Space Agency mission _Gaia_ ([http://www.cosmos.esa.int/gaia](http://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC; see [http://www.cosmos.esa.int/web/gaia/dpac/consortium](http://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
2310.19194
Weaving Equity into Infrastructure Resilience Research and Practice: A Decadal Review and Future Directions
After about a decade of research in this domain, what is missing is a systematic overview of the research agenda across different infrastructures and hazards. It is now imperative to evaluate the current progress and gaps. This paper presents a systematic review of equity literature on disrupted infrastructure during a natural hazard event. Following a systematic review protocol, we collected, screened, and evaluated almost 3,000 studies. Our analysis focuses on the intersection within the dimensions of the eight-dimensional assessment framework that distinguishes focus of the study, methodological approaches, and equity dimensions (distributional-demographic, distributional-spatial, procedural, and capacity equity). To conceptualize the intersection of the different dimensions of equity, we refer to pathways, which identify how equity is constructed, analyzed, and used. Significant findings show that (1) the interest in equity in infrastructure resilience has exponentially increased, (2) the majority of studies are in the US and by extension in the global north, (3) most data collection use descriptive and open-data and none of the international studies use location-intelligence data. The most prominent equity conceptualization is distributional equity, such as the disproportionate impacts to vulnerable populations and spaces. The most common pathways to study equity connect distributional equity to the infrastructure's power, water, and transportation in response to flooding and hurricane storms. Other equity concepts or pathways, such as connections of equity to decision-making and building household capacity, remain understudied. Future research directions include quantifying the social costs of infrastructure disruptions and better integration of equity into resilience decision-making.
Natalie Coleman, Xiangpeng Li, Tina Comes, Ali Mostafavi
2023-10-29T23:26:20Z
http://arxiv.org/abs/2310.19194v2
Weaving Equity into Infrastructure Resilience Research and Practice: A Decadal Review and Future Directions ###### Abstract Disasters amplify existing inequalities, and infrastructures play a crucial role in this process. They determine the access of vulnerable communities to clean water, food, healthcare, and electricity. Increasingly, the need to account for equity in infrastructure resilience has been recognized. After about a decade of research in this domain, what is missing is a systematic overview of the state-of-the art and a research agenda across different infrastructures and hazards. It is now imperative to evaluate the current progress and gaps. This paper presents a systematic review of equity literature on disrupted infrastructure during a natural hazard event. Following a systematic review protocol, we collected, screened, and evaluated almost 3,000 studies. Our analysis focuses on the intersection within the dimensions of the eight-dimensional assessment framework that distinguishes _focus of the study_ ; the _methodological approaches_ and the _equity dimensions_ (distributional-demographic, distributional-spatial, procedural, and capacity equity). To conceptualize the intersection of the different dimensions of equity and how they are applied to different contexts, we refer to "pathways", which identify how equity is constructed, analyzed, and used. Significant findings show that (1) the interest in equity in infrastructure resilience has exponentially increased, (2) the majority of studies are in the US and by extension in the global north, (3) most data collection use descriptive and open-data and none of the international studies use location-intelligence data. The most prominent equity conceptualization is distributional equity, such as the disproportionate impacts to vulnerable populations and spaces. The most common pathways to study equity connect distributional equity to the infrastructure's power, water, and transportation in response to flooding and hurricane storms. Other equity concepts or pathways, such as connections of equity to decision-making and building household capacity, remain understudied. Future research directions include quantifying the social costs of infrastructure disruptions and better integration of equity into resilience decision-making. equity, infrastructure resilience, hazard, systematic literature review ## Introduction The increasing scale, intensity, and frequency of disasters have revealed the fragility of infrastructure systems (World Meteorological 2021). In the last decade, disasters such as the
2303.10936
Learning to Explore Informative Trajectories and Samples for Embodied Perception
We are witnessing significant progress on perception models, specifically those trained on large-scale internet images. However, efficiently generalizing these perception models to unseen embodied tasks is insufficiently studied, which will help various relevant applications (e.g., home robots). Unlike static perception methods trained on pre-collected images, the embodied agent can move around in the environment and obtain images of objects from any viewpoints. Therefore, efficiently learning the exploration policy and collection method to gather informative training samples is the key to this task. To do this, we first build a 3D semantic distribution map to train the exploration policy self-supervised by introducing the semantic distribution disagreement and the semantic distribution uncertainty rewards. Note that the map is generated from multi-view observations and can weaken the impact of misidentification from an unfamiliar viewpoint. Our agent is then encouraged to explore the objects with different semantic distributions across viewpoints, or uncertain semantic distributions. With the explored informative trajectories, we propose to select hard samples on trajectories based on the semantic distribution uncertainty to reduce unnecessary observations that can be correctly identified. Experiments show that the perception model fine-tuned with our method outperforms the baselines trained with other exploration policies. Further, we demonstrate the robustness of our method in real-robot experiments.
Ya Jing, Tao Kong
2023-03-20T08:20:04Z
http://arxiv.org/abs/2303.10936v1
# Learning to Explore Informative Trajectories and Samples for Embodied Perception ###### Abstract We are witnessing significant progress on perception models, specifically those trained on large-scale internet images. However, efficiently generalizing these perception models to unseen embodied tasks is insufficiently studied, which will help various relevant applications (e.g., home robots). Unlike _static_ perception methods trained on pre-collected images, the embodied agent can move around in the environment and obtain images of objects from any viewpoints. Therefore, efficiently learning the exploration policy and collection method to gather informative training samples is the key to this task. To do this, we first build a 3D semantic distribution map to train the exploration policy self-supervised by introducing the semantic distribution disagreement and the semantic distribution uncertainty rewards. Note that the map is generated from multi-view observations and can weaken the impact of misidentification from an unfamiliar viewpoint. Our agent is then encouraged to explore the objects with different semantic distributions across viewpoints, or uncertain semantic distributions. With the explored informative trajectories, we propose to select hard samples on trajectories based on the semantic distribution uncertainty to reduce unnecessary observations that can be correctly identified. Experiments show that the perception model fine-tuned with our method outperforms the baselines trained with other exploration policies. Further, we demonstrate the robustness of our method in real-robot experiments. Embodied Perception, Trajectory Exploration, Hard Sample Selection ## I Introduction Pre-training on large-scale datasets to build reusable models has drawn great attention in recent years, e.g., the deep visual models [2] pre-trained on ImageNet [3] can be reused for detection [4, 6], and pre-trained language model like BERT [5] can be used for image-text retrieval [7]. To better adapt to downstream tasks, many researchers focus on fine-tuning models on small-scale task-related datasets [8, 9]. However, generalizing the perception model pre-trained on large-scale internet images to embodied tasks is insufficiently studied, which will help various relevant applications (e.g., home robots). In order to use as few annotations as possible, efficiently collecting training data in embodied scenes becomes the main challenge. Different from visual learning based on _static_ data (e.g., images), the embodied agent can _move_ around and interact with 3D environment. Therefore, efficiently collecting training samples means learning an exploration policy to encourage the agent to explore the areas where the pre-trained model performs poorly. Since the ground-truth labels in scenes are unavailable, the underlying spatial-temporal continuity in the 3D world can be used self-supervised. To use the consistency in semantic predictions, the previous method [12] proposes a semantic curiosity policy, which explores inconsistent labeling of the same object by the perception model. When an exploration trajectory is learned, all observations on this trajectory are collected for labeling to fine-tune the pre-trained perception model. Despite the advance, this method utilizes a fuzzy inconsistency estimation (i.e., projecting multiple objects at different heights to the same location in a 2D map). 
In addition, the uncertainty of the predicted semantic distribution, which reflects what the pre-trained perception model does not know in the new environment, and the hard sample selection on the trajectory are ignored in [12]. To solve these problems, we propose learning to Explore Informative Trajectories and Samples (EITS) for embodied perception, as shown in Fig. 2. It consists of two steps: learning an exploration policy and collecting hard samples to fine-tune the perception model. During the exploration, our agent moves around and collects multi-view observations fused by an Exponential Moving Average to generate a 3D semantic map, which weakens the impact of misidentification from an unfamiliar viewpoint. The generated 3D map can be regarded as pseudo ground truth due to the fusion of predicted results from different viewpoints. Unlike previous work [12] that adopts predicted labels to build the semantic map, our work builds a predicted probabilistic distribution map, i.e., a 3D semantic distribution map. It can be used to constrain not only the predicted labels but also the predicted distributions, as shown in Fig. 1 (left). Then one curiosity reward is measured by the semantic distribution disagreement between the semantic prediction of the current perspective and the generated 3D semantic distribution map. We also measure the uncertainty of the predicted semantic distribution, as shown in Fig. 1 (right), as another curiosity reward. These two rewards are used together to learn the exploration policy by maximizing the disagreement and uncertainty of semantic predictions. Therefore, our agent can move to the areas where the semantic predictions are different from the pseudo ground truth or where the probabilities of being predicted as two categories are relatively close.

Fig. 1: The illustration of semantic distribution disagreement, e.g., the bed is recognized as different objects/distributions across three viewpoints (\(v1,v2,v3\)), and semantic distribution uncertainty, e.g., the probabilities of the couch in observation \(o1\) being predicted as bed and couch are relatively close. "bg" means background. 
## II Related Work ### _Robot Perception Learning_ Visual perception is a crucial function of a robot. Some works [16, 39] directly utilize the perception model pre-trained on COCO [40] images to perform object goal navigation. To improve the performance in embodied tasks, some researchers [24, 25, 26] focus on learning a policy to directly improve metrics of interest end-to-end at test time by obtaining information about the environment. Unlike them, we aim to explore informative samples self-supervised to better fine-tune the pre-trained perception model. The exploration in reinforcement learning [41, 42, 43] also aims to maximize an intrinsic reward function to encourage the agent to seek previously unseen or poorly understood parts of the environment. Different from them, we compute the reward function by multi-view consistency in semantics. The active interactive imitation learning [44, 45, 46] are also related to our work in disagreement and uncertainty measuring to decide whether to request a label from the human. However, our agent does not require human intervention when learning the exploration policy, and the exploration purpose is to improve the perception model. Recently, Chaplot et al. [12] measure the semantic curiosity to learn the exploration policy for embodied perception. But they ignore the uncertainty over semantic predictions and hard sample selection on the learned trajectory. Besides, some works [13, 14] attempt to learn both exploration and perception utilizing pseudo labels in a completely self-supervised manner without requiring any extra labels. In this paper, we propose effectively generalizing the pre-trained perception model to embodied tasks, where informative trajectories and samples are gathered by utilizing a 3D semantic distribution map to measure the semantic distribution disagreement and the semantic distribution uncertainty. Then the gathered data is labeled to fine-tune the perception model. ### _Semantic Mapping_ 3D mapping aiming to reconstruct a dense map of the environment has achieved great advances in recent years. Fuentes-Pacheco et al. [32] do a very detailed survey. Researchers also consider adding semantic information to the 3D map [14]. Similar to them, we adopt the same setting and learn 3D semantic mapping by differentiable projection operations. In this paper, we propose a 3D semantic distribution map, which is used to learn the exploration policy. Fig. 2: The architecture of our proposed informative trajectory and sample exploration method. It contains two steps: the exploration policy aims to encourage the agent to explore the objects with semantic distribution disagreement or uncertainty, then the training stage aims at gathering hard samples on trajectories based on semantic distribution uncertainty to fine-tune the pre-trained model. ### _Embodied Task_ Embodied agents can move around and interact with the surrounding environment. Many environments are photo-realistic reconstructions of indoor [17, 33] and outdoor [34, 35] scenes, where the ground-truth labels for objects are also provided. Recently, many researchers have used these simulated environments in visual navigation [16, 36], visual question answering [37] and visual exploration [23]. Visual navigation usually involves point/object goal navigation [16] and vision-and-language navigation [38] where the path to the goal is described in natural language. 
Visual question answering [37] should intelligently navigate to explore the environment, gather necessary visual information, and then answer the question. Unlike them, our agent aims to gather data for labeling to generalize the pre-trained perception model to unseen environments efficiently. ## III Approach We aim to train an embodied agent with a perception model pre-trained on internet images to explore informative trajectories and samples effectively. Then the perception model fine-tuned on the gathered data can generalize well to a new environment. As shown in Fig. 2, our proposed method consists of two main parts. The exploration part aims to learn the active movement of an agent to obtain informative trajectories via semantic distribution disagreement and semantic distribution uncertainty self-supervised. Then we take advantage of the semantic distribution uncertainty to collect hard samples on the learned trajectory. After images are collected and semantically labeled, we fine-tune the perception model on these images. ### _3D Semantic Distribution Mapping_ Note that for each time step \(t\), our agent's observation space consists of an RGB observation \(I_{t}\in\mathbb{R}^{3\times W_{I}\times H_{I}}\), a depth observation \(D_{t}\in\mathbb{R}^{W_{I}\times H_{I}}\), and a 3-DOF pose sensor \(x_{t}\in\mathbb{R}^{3}\) which denotes the \(x\)-\(y\) coordinates and the orientation of the agent. The agent has three discrete actions: move forward, turn left and turn right. The easiest way to associate semantic predictions across frames on a trajectory is to project the predictions on the top-down view to build a 2D semantic map as [12]. However, due to the embodied agent moving in a 3D environment, the height information is lost when projecting the predictions onto a 2D map. These will result in projecting multiple objects at different heights to the same location, e.g., if a potted plant is on the table, the potted plant and table will be projected to the same location. Therefore, the noise will be generated when calculating the disagreement across different viewpoints. In this paper, we utilize the 3D semantic distribution map to measure the semantic distribution disagreement. The semantic map \(M\) is a 4D tensor of size \(K\times L_{M}\times W_{M}\times H_{M}\), where \(L_{M}\), \(W_{M}\), \(H_{M}\), denote the 3 spatial dimensions, and \(K=C+2\), where \(C\) is the total number of semantic object categories. The first two channels in \(K\) represent whether the corresponding voxel (x-y-z location) contains obstacles and is the explored area, respectively. The other channels denote the predicted semantic probability distribution among \(C\) categories from the pre-trained perception model. The map is initialized with all zeros at the beginning of an episode, \(M_{0}=[0]^{K\times L_{M}\times W_{M}\times H_{M}}\). The agent always starts at the center of the map facing east at the beginning of the episode, \(x_{0}=(L_{M}/2,W_{M}/2,0.0)\) same as [16]. Fig. 2 shows the 3D semantic mapping procedure at a time step. The agent takes action and then sees a new observation \(I_{t}\). The pre-trained perception model (e.g., Mask RCNN [4]) is adopted to predict the semantic categories of the objects seen in \(I_{t}\), where the semantic prediction is a probability distribution among \(C\) categories for each pixel. The depth observation \(D_{t}\) is used to compute the point cloud. 
Each point in the point cloud is associated with the corresponding semantic prediction, which is then converted into 3D space using differentiable geometric transformations based on the agent pose to get the voxel representation. This voxel representation in the same location is aggregated over time using Exponential Moving Average to get the 3D semantic distribution map: \[M_{t}=\left\{\begin{array}{ll}M_{t-1},&t=1\\ \lambda*M_{t-1}+(1-\lambda)*m_{t},&t>1\end{array}\right. \tag{1}\] where \(m_{t}\) means the voxel representation at time step \(t\) and \(\lambda\) aims to control the relative importance of \(M_{t-1}\) and \(m_{t}\). The map can integrate the predicted semantics of the same object from different viewpoints to alleviate the misrecognition caused by the unfamiliar viewpoint. Therefore, the map representation can be used as pseudo ground truth labels of objects in the scene. ### _Exploring Informative Trajectory_ The goal of exploration policy \(a_{t}=\pi(I_{t},\theta)\) is exploring objects that are poorly identified by the current perception model based on the observation \(I_{t}\), where \(a_{t}\) means the action and \(\theta\) represents the parameters of the policy model. Hence, we can collect valuable observations in the explored areas to fine-tune the perception model. We propose two novel distribution-based rewards to train the exploration policy by maximizing the disagreement and uncertainty during moving. The semantic distribution disagreement reward is defined as the Kullback-Leibler divergence between the current prediction and the 3D semantic distribution map, which encourages the agent to explore the objects with different semantic distributions across viewpoints: \[r_{d}=KL(m_{t},M_{t-1}). \tag{2}\] Unlike semantic curiosity [12] which maximizes the label inconsistency based on the 2D semantic map, our semantic distribution disagreement aims to explore the objects with different distributions from the 3D semantic distribution map. In addition, we propose a semantic distribution uncertainty reward \(r_{u}\) to explore the objects whose predicted probabilities belonging to two categories are relatively close, as Eq. 4 explains. \[r_{u}=\left\{\begin{array}{ll}1,&u>\delta\\ 0,&u<\delta\end{array}\right. \tag{3}\] To train the policy, we first input the semantic map to a global exploration policy to select a long-term goal (i.e., an x-y coordinate of the map). Then a deterministic Fast Marching Method [18] is used for path planning, which uses low-level navigation actions to achieve the goal. We sample the long-term goal every 25 local steps, same as [16] to reduce the time horizon for exploration in reinforcement learning. The Proximal Policy Optimization (PPO) is used to train the policy. ### _Efficient Sample Selection and Continue Training_ After obtaining the trajectory, the easiest way is to label all observations on the trajectory. Although the trained exploration policy can find more objects with inconsistent and uncertain predictions, there are still many observations that the pre-trained model can accurately identify. To efficiently fine-tune the perception model, we propose a sample selection method by measuring the uncertainty \(u\) of the semantic distribution: \[u=Second_{max}(P_{i}), \tag{4}\] where \(P_{i}\in\mathbb{R}^{C}\) is the predicted class probability of \(i\)th object in a single image, the \(Second_{max}\) means the second largest score in \(\{p_{i}^{0},p_{i}^{1},...,p_{i}^{C-1}\}\). 
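As a concrete, minimal sketch of Eqs. (1), (2), and (4), the snippet below applies the EMA fusion to per-voxel class distributions, computes the per-voxel KL term, and keeps only the detections whose second-largest class probability exceeds \(\delta\). The array shapes, the map initialisation at the first step, and the toy probabilities are assumptions of this sketch, not part of the paper's implementation.

```python
import numpy as np

LAMBDA = 0.3   # EMA coefficient lambda
DELTA = 0.1    # uncertainty threshold delta

def update_semantic_map(M_prev, m_t, step):
    """Eq. (1): EMA fusion of per-voxel class distributions (shape C x L x W x H).
    Initialising the map with the first observation is a choice of this sketch."""
    return m_t.copy() if step == 1 else LAMBDA * M_prev + (1.0 - LAMBDA) * m_t

def kl_reward(m_t, M_prev, eps=1e-8):
    """Eq. (2): per-voxel KL divergence between the current prediction and the map."""
    p, q = m_t + eps, M_prev + eps
    return np.sum(p * np.log(p / q), axis=0)    # sum over the class dimension

def distribution_uncertainty(P):
    """Eq. (4): u = second-largest class probability of one detected object."""
    return float(np.sort(P)[-2])

def select_hard_detections(P_all, delta=DELTA):
    """Keep detections with u > delta for labelling and fine-tuning."""
    return [P for P in P_all if distribution_uncertainty(P) > delta]

# Toy example with C = 6 categories.
confident = np.array([0.92, 0.03, 0.02, 0.01, 0.01, 0.01])
ambiguous = np.array([0.48, 0.44, 0.03, 0.02, 0.02, 0.01])
print(len(select_hard_detections([confident, ambiguous])))   # -> 1 (only the ambiguous one)
```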
If \(u\) is larger than threshold \(\delta\), we select the corresponding image. Considering that the semantic distribution disagreement relies heavily on multi-view observations in the trajectory will reduce the efficiency of selection, and thus it is not utilized to select hard samples. We label the selected images and use them to fine-tune the perception model. ## IV Experiments ### _Implementation details_ We use the Matterport3D [17] dataset with Habitat simulator [19] in our main experiments. The scenes in the Matterport3D dataset are 3D reconstructions of real-world environments, split into a training set (54 scenes) and a test set (10 scenes). We assume that the perfect agent pose and depth image can be obtained in our setup. The exploration policy consists of convolutional layers followed by fully connected layers. The pre-trained Mask RCNN is frozen while training the exploration policy. We use the PPO with a time horizon of 20 steps, 8 mini-batches, and 4 epochs in each PPO update to train the policy. The reward, entropy, and value loss coefficients are set to 0.02, 0.001, and 0.5, respectively. We use Adam optimizer with a learning rate of \(2.5\times 10^{-5}\). The maximum number of steps in each episode is 500. The \(\lambda\) and \(\delta\) are experimentally set to 0.3 and 0.1, respectively. To fairly compare with previous methods, we set the number of training steps to 500k in all experiments. We pre-train a Mask-RCNN model with FPN [21] using ResNet-50 as the backbone on the COCO [40] dataset labeled with 6 overlapping categories with the Matterport3D, i.e., 'chair', 'couch', 'potted plant', 'bed', 'toilet' and 'tv'. Then we fine-tune this model on the gathered samples with a fixed learning rate of 0.001. All other hyper-parameters are set to default settings in Detectron2 [22]. We randomly collect the samples in test scenes of different episodes to evaluate the final perception model. The AP50 score is adopted as the evaluation metric, which is the average precision with at least \(50\%\) IOU. We further deploy our method to a real robot. Our robot is equipped with a Kinect V2 camera, a 2D LiDAR, and an onboard computer (with an Intel i5- 7500T CPU and an NVIDIA GeForce GTX 1060 GPU). Note that the LiDAR is only used with wheel odometers to perform localization. We test our method in a built 60\(m^{2}\) house with a dining room, a living room, and a bedroom. ### _Main Results_ #### Iv-B1 Simulation Environment To demonstrate the effectiveness of our method, we compare our fine-tuned object detection and instance segmentation results with the state-of-the-art methods as shown in Tab. I. Note that these methods all use around 20k training images. Pre-trained means the perception model was pre-trained on the raw COCO dataset. Re-trained means we re-train the pre-trained model utilizing COCO dataset labeled with 6 overlapping categories with the Matterport3D. Random is a baseline exploration policy that samples actions randomly. It can be seen that our model achieves the best performance and can further improve the performance when progressively training the exploration strategy three times based on the latest fine-tuned perception model. Specifically, compared with the pre-trained model, our fine-tuned model gives 10.80\(\%\) AP50 gains on the box detection metric. 
Compared with the previous best competitor Semantic Curiosity [12] which rewards trajectories with inconsistent labeling behavior and encourages the embodied agent to explore such areas, our model significantly outperforms it by 2.57\(\%\) absolute AP50 point on object box detection and 1.36\(\%\) on instance segmentation. The improved performances over the best competitor indicate that our proposed informative trajectory exploration and hard sample selection method is very effective for this task. In addition, we can see that our method is more friendly to instances with simple shapes, e.g., Bed and Tv. These instance's shapes are easier to be reconstructed through 3D mapping. Objects with much more complicated shapes, e.g., Potted Plant, are more likely to involve mapping errors, which in turn decreases the performance of instance segmentation. #### Iv-B2 Real Robot We also deploy our learned exploration policy on a real robot to explore informative trajectories and hard samples in an unseen environment. In practice, we gather 170 hard samples for fine-tuning the pre-trained model and an additional 50 randomly collected samples for validation, with an average of 4 objects in each image. Benefiting from the gathered informative images, the fine-tuned perception model can improve the detection and segmentation performances from 79.1% AP50 and 76.7% AP50 to 97.3% AP50 and 96.1% AP50, respectively. ### _Ablation Analysis_ Our method comprises two modules: informative trajectory exploration and hard sample selection. To investigate these two components, we perform a set of ablation studies with \(n\) = 1 for simplicity, as shown in Tab. III. We first investigate the importance of rewarding semantic distribution disagreement across viewpoints and semantic distribution uncertainty to explore the trajectory. It can be seen that the AP50 accuracy on object detection drops 0.71\(\%\) (SC+HSS vs. Ours) by replacing our exploration policy as SC. The exploration module proves the effectiveness of learning informative trajectories for subsequent sample selection. Then we investigate the importance of semantic distribution uncertainty based hard sample selection by removing it (Ours w/o HSS). The AP50 accuracy on object detection drops 0.96\(\%\), demonstrating that selecting hard samples enhances the perception results. In addition, by comparing the results between Ours and Ours w/o SDD, Ours and Ours w/o SDU (semantic distribution disagreement and uncertainty in informative trajectory exploration), we can find that utilizing SDD and SDU can generate more effective trajectories. We compare the effectiveness of setting different thresholds \(\delta\) in hard sample selection as shown in Tab. II. In this experiment, we sample the images from explored trajectories with 6 episodes and fixed steps in each training scene, resulting in different numbers of sampled training images at different thresholds. We can find that decent performances can be achieved by training very few hard samples, which demonstrates the effectiveness of selecting hard samples. Tab. IV shows the experimental results when progressively training the exploration policy multiple times based on the latest fine-tuned perception model. Note that they all use 20k training images. To exploit measures of uncertainty in semantic distributions, we utilize the entropy of categorical distribution (ECS) in place of the heuristic in Eq. 4 as shown in Tab. V. We experimentally set the threshold of entropy to 0.4. 
The improved performance indicates that the uncertainty measured over all categories is more effective than that between the two categories. We visualize the explored trajectories and sampled images from the Matterport3D dataset and the real-world environment, as shown in Fig. 3. We can see that our model is able to gather inconsistent and uncertain detections via semantic distribution disagreement and uncertainty estimation. For example, in the first row, the couch is detected as different objects (chair/couch) or with different distributions from different viewpoints. Besides, in the second row, the couch is detected as couch and chair with very close scores. By collecting for labeling these observations that are poorly identified by the pre-trained perception model, the model can be fine-tuned better. Fig. 4 shows the segmentation masks obtained by three different models, i.e., pre-trained, Semantic Curiosity [12], and our EITS, demonstrating our proposed method's benefits. As the figure shows, our generated segmentation masks have more obvious object shapes and finer outlines in the first column. Besides, our model, fine-tuned exclusively on hard samples, can detect the objects missed by the pre-trained and Semantic Curiosity [12] models, as shown in the third and fifth columns. ## V Discussion and Limitations We propose to generalize the perception model pre-trained on internet images to unseen 3D environments with as few annotations as possible. Therefore, efficiently learning the exploration policy and selection method to gather training samples is the key to this task. In this work, we propose a novel informative trajectory exploration method via semantic distribution disagreement and semantic distribution uncertainty. Then an uncertainty-based hard sample selection method is proposed to further reduce unnecessary observations that can be correctly identified. Extensive ablation studies verify the effectiveness of each component of our method. Although our method is more efficient than previous works, there are still some limitations. By exploring informative trajectories and samples, we can efficiently generalize the pre-trained model to the embodied task, but labeling the segmentation masks is still costly. Weakly-supervised methods (e.g., utilizing box annotations to train segmentation models) can be utilized to fine-tune the perception model in the future. In addition, we collect all samples before fine-tuning the perception model, which results in our perception model not being updated during exploration. In the future, we can explore updating the perception module while learning the exploration policy. ## VI Acknowledgments We would like to thank Minzhao Zhu, Yifeng Li, Yuxi Liu, Tao Wang and Yunfei Liu for their help on the robot system, Hang Li for helpful feedback, and other colleagues at ByteDance AI Lab for support throughout this project.

Fig. 4: Qualitative examples of instance segmentation by different models.

Fig. 3: Qualitative examples of learned trajectories and sampled images from the Matterport3D environment and the real robot. The first row shows the explored informative trajectories trained by semantic distribution disagreement and uncertainty rewards. The second row shows the gathered hard images by semantic distribution uncertainty estimation.
2301.00384
Correlation Clustering Algorithm for Dynamic Complete Signed Graphs: An Index-based Approach
In this paper, we reduce the complexity of approximating the correlation clustering problem from $O(m\times\left( 2+ \alpha (G) \right)+n)$ to $O(m+n)$ for any given value of $\varepsilon$ for a complete signed graph with $n$ vertices and $m$ positive edges where $\alpha(G)$ is the arboricity of the graph. Our approach gives the same output as the original algorithm and makes it possible to implement the algorithm in a full dynamic setting where edge sign flipping and vertex addition/removal are allowed. Constructing this index costs $O(m)$ memory and $O(m\times\alpha(G))$ time. We also studied the structural properties of the non-agreement measure used in the approximation algorithm. The theoretical results are accompanied by a full set of experiments concerning seven real-world graphs. These results shows superiority of our index-based algorithm to the non-index one by a decrease of %34 in time on average.
Ali Shakiba
2023-01-01T10:57:36Z
http://arxiv.org/abs/2301.00384v1
# Correlation Clustering Algorithm for Dynamic Complete Signed Graphs: An Index-based Approach ###### Abstract In this paper, we reduce the complexity of approximating the correlation clustering problem from \(\mathcal{O}\left(m\times\left(2+\alpha(G)\right)+n\right)\) to \(\mathcal{O}\left(m+n\right)\) for any given value of \(\varepsilon\) for a complete signed graph with \(n\) vertices and \(m\) positive edges, where \(\alpha(G)\) is the arboricity of the graph. Our approach gives the same output as the original algorithm and makes it possible to implement the algorithm in a full dynamic setting where edge sign flipping and vertex addition/removal are allowed. Constructing this index costs \(\mathcal{O}\left(m\right)\) memory and \(\mathcal{O}\left(m\times\alpha(G)\right)\) time. We also studied the structural properties of the non-agreement measure used in the approximation algorithm. The theoretical results are accompanied by a full set of experiments concerning seven real-world graphs. These results show the superiority of our index-based algorithm over the non-indexed one, with a decrease of \(34\%\) in time on average. **Keywords:** Correlation clustering \(\cdot\) Dynamic graphs \(\cdot\) Online Algorithms ## 1 Introduction Clustering is one of the most studied problems in machine learning, with various applications in analyzing and visualizing large datasets. There are various models and techniques to obtain a partition of elements such that elements belonging to different partitions are dissimilar to each other and elements in the same partition are very similar to each other. The problem of correlation clustering, introduced in [1], is known to be **NP**-hard for disagreement minimization. Therefore, several different approximation solutions based on its IP formulation exist in the literature. Recently, the idea of a \(2\)-approximation algorithm in [1] was extended in [4] to construct an \(\mathcal{O}\left(1\right)\)-approximation algorithm. The experiments in [4] show acceptable performance for this algorithm in practice, although its theoretical guarantee can be quite large, e.g. \(1\:442\) for \(\beta=\lambda=\frac{1}{36}\). In [3], this algorithm is extended to an online setting where just vertex additions are allowed, and whenever a new vertex is added, it reveals all its positively signed edges. Shakiba in [12] studied the effect of vertex addition/removal and edge sign flipping in the underlying graph on the final clustering result, in order to make the algorithm suitable for dynamic graphs. However, one bottleneck in this direction is computing the values of NonAgreement among the edges and identifying the \(\varepsilon\)-lightness of vertices. The current paper proposes a novel indexing scheme to remedy this and make the algorithm efficient, not just for dynamic graphs, but even for a dynamic hyper-parameter \(\varepsilon\). The advantage of our proposed method over the online method of [3] is that we allow a full dynamic setting, i.e. vertex addition/removal and edge sign flipping. It is known that any online algorithm for the correlation clustering problem has an approximation ratio of at least \(\Omega(n)\) [10]. Note that the underlying algorithm used in the current paper is consistent, as is shown via experimental results [3]. The rest of the paper is organized as follows: In Section 1.1, we highlight our contributions. This is followed by a review of some basic algorithms and results in Section 2.
Then, we introduce the novel indexing structure in Section 3.1 and show how it can be employed to enhance the running-time of the approximate correlation clustering algorithm. Then, we show how to maintain the proposed indices in a full dynamic setting in Section 3.2. In Section 4, we present extensive experiments which accompany the theoretical results and show the effectiveness of the proposed indexing structure. Finally, a conclusion is drawn. ### Our Contribution In this paper, we simply ask "How can one reduce the time to approximate a correlation clustering of the input graph [4] for varying values of \(\varepsilon\)?" We also ask "How can we make the solution to the first question an online solution for dynamic graphs?" Our answer to the first question is a novel indexing structure constructed based on the structural properties of the approximation algorithm and its NonAgreement measure. As our experiments in Section 4 show, the proposed method reduces the total running-time of querying the clustering by about \(34\%\) on average for seven real-world datasets. Then, we make this structure online to work with dynamic graphs based on theoretical results in [12]. The construction of the index itself is highly parallelizable, up to the number of vertices in the input graph. The idea for parallelization is simple: construct each \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) in the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\) with a separate parallel thread. We also study the intrinsic structure of the NonAgreement measure to devise more efficient algorithms for index maintenance under updates to the underlying graph. More precisely, we show that using the proposed index structure, we can find a correlation clustering for a graph for any given value of \(\varepsilon\) in time \(\mathcal{O}\left(m+n\right)\), compared to the \(\mathcal{O}\left(m\times\left(2+\alpha(G)\right)+n\right)\) time for the CC. The pre-processing time of the ICC would be \(\mathcal{O}\left(m\times\alpha(G)\right)\) with \(\mathcal{O}\left(m\right)\) space complexity. ## 2 Preliminaries Let \(G=\left(V,E\right)\) be a complete undirected signed graph with \(\left|V\right|=n\) vertices. The set of edges \(E\) is naturally partitioned into positive and negative signed edges, \(E^{+}\) and \(E^{-}\), respectively. Then, we use \(m\) to denote \(\left|E^{+}\right|\). The correlation clustering problem asks for a clustering minimizing \[\text{cost}(\mathcal{C})=\sum_{\begin{subarray}{c}\left\{u,v\right\}\in E^{+}\\ u\in C_{i},v\in C_{j},i\neq j\end{subarray}}1+\sum_{\begin{subarray}{c}\left\{u,v\right\}\in E^{-}\\ u,v\in C_{i}\end{subarray}}1, \tag{1}\] where \(\mathcal{C}=\left\{C_{1},\ldots,C_{\ell}\right\}\) is a clustering of \(V\). Note that this is the min-disagree variant of the problem. The constant factor approximation algorithm of [4] is based on two main quantities: (1) \(\varepsilon\)-agreement of a positively signed edge \(\left\{u,v\right\}\), i.e. \(u\) and \(v\) are in \(\varepsilon\)-agreement if and only if \(\textsc{NonAgreement}_{G}\left(u,v\right)=\frac{\left|N_{G}(u)\Delta N_{G}(v)\right|}{\max\{\left|N_{G}(u)\right|,\left|N_{G}(v)\right|\}}<\varepsilon\), and (2) \(\varepsilon\)-lightness, where a vertex \(u\) is said to be \(\varepsilon\)-light if \(\frac{\textsc{AgreCnt}_{G^{+}}\left(u\right)}{\left|N_{G^{+}}(u)\right|}<\varepsilon\), where \(\textsc{AgreCnt}_{G^{+}}(u)=\left|\left\{w\in V|u\text{ and }w\text{ are in }\varepsilon\text{-agreement}\right\}\right|\).
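To make these two quantities concrete, the following is a minimal Python sketch of the definitions above; it is not the paper's C++ implementation, and the dict-of-sets graph representation, the function names, and the restriction of \(\textsc{AgreCnt}\) to positive neighbors are our illustrative assumptions.

```python
# Minimal sketch of NonAgreement, epsilon-agreement, and epsilon-lightness.
# Assumptions (ours, for illustration only): the positive graph G^+ is a
# dict mapping each vertex to the set of its positive neighbors, and
# AgreCnt is counted over the positive neighbors of u.

def non_agreement(pos_adj, u, v):
    """|N(u) symmetric-difference N(v)| / max(|N(u)|, |N(v)|)."""
    nu, nv = pos_adj[u], pos_adj[v]
    return len(nu ^ nv) / max(len(nu), len(nv))

def in_agreement(pos_adj, u, v, eps):
    """u and v are in eps-agreement iff NonAgreement(u, v) < eps."""
    return non_agreement(pos_adj, u, v) < eps

def agre_cnt(pos_adj, u, eps):
    """Number of positive neighbors of u in eps-agreement with u."""
    return sum(1 for w in pos_adj[u] if in_agreement(pos_adj, u, w, eps))

def is_light(pos_adj, u, eps):
    """u is eps-light iff AgreCnt(u) / |N(u)| < eps; otherwise it is eps-heavy."""
    return agre_cnt(pos_adj, u, eps) / len(pos_adj[u]) < eps
```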
Note that a vertex which is not \(\varepsilon\)-light is called \(\varepsilon\)-heavy. This is a \(2+\frac{4}{\varepsilon}+\frac{1}{\varepsilon^{2}}\)-approximation algorithm, as is shown in [3]. This algorithm is described in Algorithm 1, which we will refer to as the CC algorithm for short. Shakiba in [12] studied the theoretical foundations of the CC algorithm in a full dynamic setting. The following result is a summary of Table 1, Corollary 1, and Theorem 4 in [12]. **Theorem 1**.: _Suppose the sign of an edge \(e=\left\{u,v\right\}\) is flipped. Then, the non-agreement and \(\varepsilon\)-lightness of vertices whose distance to both \(u\) and \(v\) is more than two do not change._ The arboricity of the graph \(G\) is the minimum number of edge-disjoint spanning forests into which \(G\) can be decomposed. The following lemma for arboricity is useful in bounding the number of operations. **Lemma 1** (Lemma 2 in [2]).: _Suppose the graph \(G=\left(V,E\right)\) has \(n\) vertices with \(m\) edges. Then,_ \[\sum_{\left\{u,v\right\}\in E}\min\left\{\deg_{G}(u),\deg_{G}(v)\right\}\leq 2\alpha(G)\times m. \tag{2}\] ## 3 Proposed Method In this section, we describe our novel indexing structure. This structure allows dynamic queries of the correlation clustering with varying values of \(\varepsilon\) for dynamic graphs. The proposed algorithm which uses the indexing structure will be called ICC, or index-based correlation clustering. ### Indexing structure For an edge \(e=\{u,v\}\) with positive sign, we define its _\(\varepsilon\)-agreement distance_ as \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\). Intuitively, this is the supremum of the values of \(\varepsilon\) for which the nodes \(u\) and \(v\) are not in \(\varepsilon\)-agreement. Let us define the set \(\mathcal{E}=\left\{\textsc{NonAgreement}_{G^{+}}\left(u,v\right)|e=\{u,v\}\in E^{+}\right\}\). Without loss of generality, let \(\mathcal{E}=\left\{\varepsilon_{0},\ldots,\varepsilon_{\ell-1}\right\}\) with the ordering \(\min\mathcal{E}=\varepsilon_{0}<\varepsilon_{1}<\cdots<\varepsilon_{\ell-1}=\max\mathcal{E}\). For a fixed value of \(\varepsilon\), let \(G^{+}_{\varepsilon}=\left(V,E^{+}_{\varepsilon}\right)\) where \(E^{+}_{\varepsilon}=\left\{e=\{u,v\}\in E^{+}|\textsc{NonAgreement}_{G^{+}}\left(u,v\right)<\varepsilon\right\}\). **Observation 1**.: _For all \(\varepsilon\leq\varepsilon_{0}\), \(G^{+}_{\varepsilon}\) is the null graph, i.e. a graph on all nodes without any edges. Moreover, for all \(\varepsilon>\varepsilon_{\ell-1}\), \(G^{+}_{\varepsilon}=G^{+}\)._ Next, we introduce the key ingredient of our indexing structure, called \(\mathtt{NAO}\). **Definition 1** (NonAgreement Node Ordering).: The \(\varepsilon\)-agreement ordering for each node \(v\in V\), denoted by \(\mathtt{NAO}\left(v\right)\), is defined as an ordered subset of vertices in \(G\) where: 1. node \(u\in V\) appears in the ordering \(\mathtt{NAO}\left(v\right)\) if and only if \(e=\{u,v\}\) is a positive edge in \(G\). 2. for every two distinct vertices \(u,w\in V\) which appear in \(\mathtt{NAO}\left(v\right)\), \[\textsc{NonAgreement}_{G^{+}}\left(v,u\right)<\textsc{NonAgreement}_{G^{+}}\left(v,w\right),\] (3) implies \(u\) appears before \(w\). 3. for each node \(u\in\mathtt{NAO}\left(v\right)\), its \(\varepsilon\)-agreement distance is also stored with that node. The NonAgreement node ordering of the graph \(G\) is defined as \(\mathtt{NAO}\left(G\right)=\{\left(v,\mathtt{NAO}\left(v\right)\right)|v\in V\}\).
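As a concrete illustration of Definition 1, the following Python sketch builds \(\mathtt{NAO}\left(v\right)\) for every vertex by sorting the positive neighbors of \(v\) by their \(\varepsilon\)-agreement distance; the list-of-pairs layout and the naive recomputation of each symmetric difference are simplifying assumptions of ours (the complexity bound of Lemma 3 below instead relies on the intersection-based formula of Eq. (4)).

```python
# Sketch of constructing NAO(G) per Definition 1: for each vertex v, its
# positive neighbors sorted by increasing epsilon-agreement distance, with
# the distance stored alongside each neighbor. The dict-of-sets input and
# the (distance, neighbor) pair layout are illustrative assumptions.

def non_agreement(pos_adj, u, v):
    nu, nv = pos_adj[u], pos_adj[v]
    return len(nu ^ nv) / max(len(nu), len(nv))

def build_nao(pos_adj):
    nao = {}
    for v, neighbors in pos_adj.items():
        order = [(non_agreement(pos_adj, v, u), u) for u in neighbors]
        order.sort()          # increasing epsilon-agreement distance
        nao[v] = order        # NAO(v): sorted (distance, neighbor) pairs
    return nao                # NAO(G) = {v: NAO(v) for every vertex v}

# Tiny example: a positive triangle {0, 1, 2} with a pendant vertex 3.
pos_adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
nao = build_nao(pos_adj)
print(nao[2])   # neighbors of 2, ordered by their non-agreement with 2
```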
In other words, \(\mathtt{NAO}\left(v\right)\) is an array of the neighboring nodes of \(v\) in \(G^{+}\), sorted by their \(\varepsilon\)-agreement distance values. An example NonAgreement node ordering for all vertices in a sample graph is illustrated in Figure 1. The space and construction time complexities of the \(\mathtt{NAO}(G)\) are investigated in the next two lemmas. **Lemma 2**.: _The NonAgreement node ordering for a graph \(G\), \(\mathtt{NAO}\left(G\right)\), can be represented in \(\mathcal{O}\left(m\right)\) memory._ Proof.: The number of nodes inside \(\mathtt{NAO}\left(v\right)\) equals \(\deg_{G^{+}}(v)+1\), accounting for all vertices in \(N_{G^{+}}(v)\) as well as the vertex \(v\) itself. Note that it is not required to explicitly store the vertex \(v\) itself in the ordering. Cumulatively, the total size required for representing \(\mathtt{NAO}\left(G\right)\) is \(2\times m\) entries. **Lemma 3**.: _The time complexity to construct the \(\mathbb{NAO}\left(v\right)\) for all vertices \(v\in V\) is \(\mathcal{O}\left(m\times\left(\alpha(G)+\lg m\right)\right)\) where \(\alpha(G)\) is the arboricity of the graph \(G\)._ Proof.: To compute the value of \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\) for all edges \(e=\left\{u,v\right\}\in E^{+}\), one needs to compute \(\left|N_{G^{+}}(u)\Delta N_{G^{+}}(v)\right|\). This requires \(\mathcal{O}\left(\deg_{G^{+}}(u)+\deg_{G^{+}}(v)\right)\) operations to compute the symmetric difference, given that the adjacency lists of \(u\) and \(v\) are sorted. However, we can compute their intersection and use it to compute the NonAgreement as follows \[\textsc{NonAgreement}_{G^{+}}\left(u,v\right)=\frac{\deg_{G^{+}}(u)+\deg_{G^{+}}(v)-2\times\left|N_{G^{+}}(u)\cap N_{G^{+}}(v)\right|}{\max\left\{\left|N_{G^{+}}(u)\right|,\left|N_{G^{+}}(v)\right|\right\}}. \tag{4}\] Hence, the total time required to compute the values of NonAgreement is equal to \[\sum_{\left\{u,v\right\}\in E^{+}}\min\left\{\deg_{G^{+}}(u),\deg_{G^{+}}(v)\right\},\] which is known to be bounded by \(2\alpha(G)\times m\) (Lemma 1). Moreover, each \(\mathbb{NAO}\left(v\right)\) is of length \(1+\deg_{G^{+}}(v)\) and requires sorting by NonAgreement value in \(\mathcal{O}\left(\deg_{G^{+}}(v)\lg\left(\deg_{G^{+}}(v)\right)\right)\) time, which accumulates to \(\mathcal{O}\left(m\lg m\right)\). The correlation clustering corresponding to a given value of \(\varepsilon\) and a graph \(G\) is the set of connected components of the graph \(\widehat{G^{+}}\). Given access to \(\mathbb{NAO}\left(G\right)=\left\{\mathbb{NAO}\left(v\right)\middle|v\in V(G)\right\}\), one can respond to the following queries: (1) Is the vertex \(v\) \(\varepsilon\)-heavy or \(\varepsilon\)-light? (2) Are the endpoints of an edge \(\left\{u,v\right\}\) in \(\varepsilon\)-agreement? As we will show in this section, both of these questions can be answered in \(\mathcal{O}\left(1\right)\) time using the \(\mathbb{NAO}\) structure. For a vertex \(v\in V\), its \(\varepsilon\)-light threshold is defined as \(\mathbb{LTH}_{\varepsilon}\left(v\right)=\left\lceil\varepsilon\deg_{G^{+}}(v)\right\rceil+1\). **Lemma 4**.: _For an \(\varepsilon\) and a vertex \(v\in V\), \(v\) is \(\varepsilon\)-heavy if and only if the \(\varepsilon\)-agreement distance of the \(\mathbb{LTH}_{\varepsilon}\left(v\right)\)-th smallest vertex in \(\mathbb{NAO}\left(v\right)\) is less than \(\varepsilon\).
Otherwise, it is \(\varepsilon\)-light._ Proof.: The vertices \(u\) and \(v\) are in \(\varepsilon\)-agreement if and only if \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)<\varepsilon\). Whenever the \(\varepsilon\)-agreement distance of the \(\mathbb{LTH}_{\varepsilon}\left(v\right)\)-th smallest vertex in \(\mathbb{NAO}\left(v\right)\) is less than \(\varepsilon\), it means that there are at least \(\left\lceil\varepsilon\deg_{G^{+}}(v)\right\rceil+1\) vertices, including \(v\) itself, in \(\varepsilon\)-agreement with \(v\). This is equivalent to the \(\varepsilon\)-heaviness of the vertex \(v\). On the other hand, if the \(\varepsilon\)-agreement distance of the \(\mathbb{LTH}_{\varepsilon}\left(v\right)\)-th smallest vertex in \(\mathbb{NAO}\left(v\right)\) is greater than or equal to \(\varepsilon\), this means that the number of vertices in \(\varepsilon\)-agreement with \(v\) is less than \(\varepsilon\times\deg_{G^{+}}(v)\), i.e. \(v\) is \(\varepsilon\)-light. **Lemma 5**.: _Given access to \(\mathbb{NAO}\left(v\right)\), for all values of \(\varepsilon\) and any vertex \(v\in V\), identifying the \(\varepsilon\)-lightness of \(v\) can be accomplished in \(\mathcal{O}\left(1\right)\)._ Figure 1: An illustrative example of \(\mathbb{NAO}\left(v\right)\) for an example graph. Proof.: This is implied by Lemma 4, assuming that \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) is implemented as an array. Given access to the index \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)=\left\{\left(v,\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\right)\middle|v\in V\right\}\) for a graph \(G\), a query simply asks for a clustering of the graph for a given value of \(\varepsilon\). **Theorem 2**.: _Given access to the index \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\), computing a clustering for a given parameter \(\varepsilon\) can be accomplished in \(\mathcal{O}\left(m\left(a(n)+1\right)\right)\) amortized time, where \(a(n)\) is the slowly growing inverse of the single-valued Ackermann's function._ Proof.: Intuitively, we are going to use \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\) to construct the graph \(\widehat{G^{+}}\) and compute its connected components incrementally. More formally, we start by putting each vertex in its own singleton set in a disjoint-set structure. Then, we identify the \(\varepsilon\)-heavy vertices \(\mathcal{H}\subseteq V\), which takes \(\mathcal{O}\left(m\right)\). Next, consider \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) for \(v\in\mathcal{H}\). For each vertex \(u\in\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) whose key is smaller than \(\varepsilon\) and \(u\in\mathcal{H}\), we call \(\textsc{Union}(u,v)\), which merges the clusters to which \(u\) and \(v\) belong. These operations take \(\mathcal{O}\left(m\times a(n)\right)\) amortized time [5] using a Disjoint-Set data structure. The correctness of the query algorithm is implied by Definition 1 and Lemmas 4 and 5. Before going any further, we need to state the following results, which intuitively investigate the behavior of the \(\varepsilon\)-agreement of edges and the \(\varepsilon\)-lightness of vertices as \(\varepsilon\) ranges over \(\mathcal{E}\). Let \[\mathcal{E}=\left\{\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\left|\left\{u,v\right\}\in E^{+}\right.\right\}=\left\{\varepsilon_{0},\varepsilon_{1},\ldots,\varepsilon_{\ell-1}\right\}, \tag{5}\] where \(\varepsilon_{i-1}<\varepsilon_{i}\) for \(i=1,\ldots,\ell-1\). Moreover, assume that \(\varepsilon_{\ell}>\max\mathcal{E}\).
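The query described above can be sketched as follows in Python. The sketch checks \(\varepsilon\)-lightness directly on the sorted \(\mathtt{NAO}\) lists with a binary search (Lemma 4 achieves the same check with a single \(\mathcal{O}\left(1\right)\) index probe), keeps a positive edge when its endpoints are in \(\varepsilon\)-agreement and are not both \(\varepsilon\)-light (the removal rule of Algorithm 2 below), and merges components with a plain union-find; all names and the data layout are illustrative assumptions, not the paper's code.

```python
# Sketch of answering a clustering query for a given eps from NAO(G).
# NAO(v) is assumed to be a list of (distance, neighbor) pairs sorted by
# distance (as in the construction sketch above). A positive edge is kept
# iff its endpoints are in eps-agreement and not both eps-light; the
# clustering is the connected components of the kept edges (union-find).
from bisect import bisect_left

def agre_cnt(nao_v, eps):
    # Entries are sorted by distance, so the number of neighbors in
    # eps-agreement is the number of entries with distance < eps.
    return bisect_left(nao_v, (eps,))

def is_light(nao_v, eps):
    deg = len(nao_v)
    if deg == 0:
        return True          # no positive neighbors: a singleton cluster anyway
    return agre_cnt(nao_v, eps) / deg < eps

class DisjointSet:
    def __init__(self, items):
        self.parent = {x: x for x in items}
    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def query_clustering(nao, eps):
    light = {v: is_light(nao[v], eps) for v in nao}
    ds = DisjointSet(nao)
    for v in nao:
        for dist, u in nao[v]:
            if dist >= eps:
                break                          # remaining neighbors are not in eps-agreement
            if not (light[u] and light[v]):
                ds.union(u, v)
    clusters = {}
    for v in nao:
        clusters.setdefault(ds.find(v), []).append(v)
    return list(clusters.values())
```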
**Observation 2**.: _For any \(\varepsilon\leq\varepsilon_{0}\), the endpoints of all positively signed edges \(\left\{u,v\right\}\) are not in \(\varepsilon\)-agreement._ **Observation 3**.: _For any \(\varepsilon>\varepsilon_{\ell-1}\), the endpoints of all positively signed edges \(\left\{u,v\right\}\) are in \(\varepsilon\)-agreement._ By Observation 2, the correlation clustering output by Algorithm 1 for all values of \(\varepsilon\leq\varepsilon_{0}\) would be the collection of singleton vertices, i.e. \(\left\{\left\{v\right\}\middle|v\in V\right\}\). However, we cannot say that for \(\varepsilon>\varepsilon_{\ell-1}\), the output is a single cluster, i.e. the set \(V\). Why is that? As \(\varepsilon\) increases, the number of edges in \(\varepsilon\)-agreement increases; however, the number of \(\varepsilon\)-light vertices may increase, too (by Corollary 1, which follows shortly). Hence, there is a trade-off in choosing a suitable value of \(\varepsilon\) to get the minimum number of clusters. We discuss this issue further in Section 4 (Experiments). **Theorem 3**.: _Let \(\varepsilon<\varepsilon^{\prime}\) and \(u,v\in V\) be two distinct vertices. If \(u\) and \(v\) are in \(\varepsilon\)-agreement, then they are in \(\varepsilon^{\prime}\)-agreement, too. Also, if \(u\) and \(v\) are not in \(\varepsilon^{\prime}\)-agreement, then they are not in \(\varepsilon\)-agreement either._ Proof.: The proof is a direct implication of the NonAgreement definition. Given that \(u\) and \(v\) are in \(\varepsilon\)-agreement, we have \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)<\varepsilon<\varepsilon^{\prime}\), i.e. \(u\) and \(v\) are in \(\varepsilon^{\prime}\)-agreement, too. Similarly, given that \(u\) and \(v\) are not in \(\varepsilon^{\prime}\)-agreement, we have \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\geq\varepsilon^{\prime}>\varepsilon\), i.e. \(u\) and \(v\) are not in \(\varepsilon\)-agreement either. **Theorem 4**.: _Let \(\varepsilon<\varepsilon^{\prime}\) and \(u\in V\). If \(u\) is \(\varepsilon\)-light, then \(u\) would be \(\varepsilon^{\prime}\)-light, too. Also, if \(u\) is \(\varepsilon^{\prime}\)-heavy, then it would be \(\varepsilon\)-heavy._ Proof.: This is implied by the definition of \(\varepsilon\)-lightness and Theorem 3. Two other important cases are not considered in Theorem 4, i.e. (1) Is it possible that a vertex \(u\) is \(\varepsilon\)-heavy, but becomes \(\varepsilon^{\prime}\)-light for some values \(\varepsilon<\varepsilon^{\prime}\)? and (2) Is it possible that an \(\varepsilon\)-light vertex becomes \(\varepsilon^{\prime}\)-heavy for some values of \(\varepsilon<\varepsilon^{\prime}\)? The answer to both of these questions is affirmative. By Theorems 3 and 4 and the previous discussion, we can state the following corollary. **Corollary 1**.: _Let \(\varepsilon<\varepsilon^{\prime}\)._ 1. _The number of positively signed edges whose endpoints are in_ \(\varepsilon^{\prime}\)_-agreement is greater than or equal to the number of positive edges whose endpoints are in_ \(\varepsilon\)_-agreement. In other words, the agreement relation is monotone._ 2. _The number of vertices which are_ \(\varepsilon^{\prime}\)_-light can be either greater or less than or even equal to the number of_ \(\varepsilon\)_-light vertices.
In other words, the lightness relation is not necessarily monotone._ The Baseline idea is to recompute the NonAgreement for each new value of \(\varepsilon\), which takes \(\mathcal{O}\left(m\times\alpha(G)\right)\) as described in the proof of Lemma 3, deciding on the \(\varepsilon\)-heaviness of the vertices in \(G\) in time \(\mathcal{O}\left(n\right)\), and computing the connected components of the graph \(\widehat{G^{+}}\) as the output in time \(\mathcal{O}\left(m+n\right)\). In total, the time complexity of the Baseline is \(\mathcal{O}\left(m\times(2+\alpha(G))+n\right)\). To conclude, using the index structure we invest \(\mathcal{O}\left(m\right)\) space (Lemma 2) and \(\mathcal{O}\left(m\times(\alpha(G)+\lg m)\right)\) time (Lemma 3) to construct the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\), which makes it possible to answer each query with varying \(\varepsilon\) values in time \(\mathcal{O}\left(m(a(n)+1)\right)\) (Theorem 2). Comparing this with the \(\mathcal{O}\left(m\times(2+\alpha(G))+n\right)\) time for the Baseline reveals that our index-based structure makes query times faster for variable values of \(\varepsilon\). Given access to the NAO, Algorithm 2 constructs the graph \(\widehat{G^{+}}\). Note that the outputs of Algorithms 1 and 2 are the same. As deciding whether a vertex \(v\) is \(\varepsilon\)-heavy or whether the endpoints of an edge \(\{u,v\}\) are in \(\varepsilon\)-agreement can be done in \(\mathcal{O}\left(1\right)\) time, the running-time of Algorithm 2 is \(\mathcal{O}\left(\max\left\{|V|,|E^{+}|\right\}\right)\) for the **for** loop and \(\mathcal{O}\left(|V_{\widehat{G^{+}}}|+|E_{\widehat{G^{+}}}^{+}|\right)\) to compute the connected components of \(\widehat{G^{+}}\). In total, the running-time of the query algorithm in the ICC is \(\mathcal{O}\left(m+n\right)\), compared to the \(\mathcal{O}\left(m\times(2+\alpha(G))+n\right)\) time for the CC. To model queries over a graph for various values of \(\varepsilon\), we use an \(\varepsilon\)-Schedule defined as \[\varepsilon\text{-Schedule}=\{\varepsilon_{0}<\varepsilon_{1}<\cdots<\varepsilon_{\ell}\}\subseteq[0,2]. \tag{6}\] Note that we consider the \(\varepsilon\)-Schedule as a set of strictly increasing real numbers just for the sake of notational simplicity. In a real-life scenario, one can ask for a clustering with any value of \(\varepsilon\) at any time. ``` 1: procedure IndexCorrelationClustering\(\left(G,\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right),\varepsilon\right)\) 2: for all \(v\in V\) do 3: Use the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) to identify and remove all the edges \(\{v,u\}\) whose endpoints are not in \(\varepsilon\)-agreement 4: Use the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) to identify and remove edges \(\{u,v\}\) where \(u\) and \(v\) are both \(\varepsilon\)-light 5: end for 6: return The connected components of the remaining graph as the clustering output. 7: end procedure ``` **Algorithm 2** The index-based correlation clustering algorithm with the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\) structure. ### Maintaining the index The index structure introduced in Section 3.1 is used to compute a clustering of a static graph for user-defined dynamic values of \(\varepsilon\). In this section, we revise this structure to make it suitable for computing a clustering of a dynamic graph for dynamic values of \(\varepsilon\).
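Before the formal statements, the following Python sketch illustrates the kind of update performed for an edge sign flip, which is made precise in Lemma 6 and Algorithm 3 below: only the orderings of vertices in the positive neighborhoods of the two endpoints need to be touched. Rebuilding the affected \(\mathtt{NAO}\) lists from scratch is a simplification of ours; the paper instead updates individual positions in the orderings to obtain the finer bounds of Lemma 6.

```python
# Illustrative sketch of maintaining NAO(G) under an edge sign flip, in the
# spirit of Lemma 6 / Algorithm 3 below. For simplicity, the NAO lists of
# all affected vertices are rebuilt from scratch; pos_adj maps each vertex
# to the set of its positive neighbors, nao maps it to sorted
# (distance, neighbor) pairs. Names and layout are illustrative.

def non_agreement(pos_adj, u, v):
    nu, nv = pos_adj[u], pos_adj[v]
    return len(nu ^ nv) / max(len(nu), len(nv))

def rebuild_nao_of(pos_adj, nao, v):
    nao[v] = sorted((non_agreement(pos_adj, v, u), u) for u in pos_adj[v])

def flip_edge(pos_adj, nao, u, v):
    # Vertices whose orderings may change: the positive neighborhoods of u
    # and v (taken before the flip), plus u and v themselves.
    affected = pos_adj[u] | pos_adj[v] | {u, v}
    if v in pos_adj[u]:           # the edge was positive, it becomes negative
        pos_adj[u].discard(v)
        pos_adj[v].discard(u)
    else:                         # the edge was negative, it becomes positive
        pos_adj[u].add(v)
        pos_adj[v].add(u)
    for w in affected:
        rebuild_nao_of(pos_adj, nao, w)
```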
There are three different operations applicable to the underlying graph of the CorrelationClustering: (1) flipping the sign of an edge \(e=\{u,v\}\), (2) adding a new vertex \(v\), and (3) removing an existing vertex \(v\). These operations are considered in Lemmas 6, 7, and 8, respectively. Shakiba in [12] has shown that, when flipping the sign of an edge \(\{u,v\}\), the \(\varepsilon\)-agreement of edges whose endpoints are both outside the union of the positive neighborhoods of \(u\) and \(v\) does not change (Propositions 2 and 4 in [12]). Therefore, we just need to recompute the \(\varepsilon\)-agreement for positive edges \(\{x,w\}\) where \(x,w\in(N_{G^{+}}(u)\cup N_{G^{+}}(v))\). **Lemma 6**.: _Assume the sign of an edge \(e=\{u,v\}\) is flipped._ 1. _If the sign of the edge was_ \(+\) _prior to the flipping, then_ \(u\) _and_ \(v\) _are no longer in_ \(\varepsilon\)_-agreement since their edge is now negatively signed. The vertices_ \(v\) _and_ \(u\) _are removed from the arrays_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(u\right)\) _and_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\)_, respectively. Moreover, we need to update the values of the non-agreement for vertices_ \(u\) _and_ \(v\) _in_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(w\right)\) _for all vertices_ \(w\in\mathbb{N}\mathbb{A}\mathbb{O}\left(u\right)\cup\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\)_._ 2. _If the sign of the edge was_ \(-\) _prior to the flipping, then their_ \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\) _is computed and the vertices_ \(v\) _and_ \(u\) _are added at their proper sorted places in_ \(\mathbb{NAO}\left(u\right)\) _and_ \(\mathbb{NAO}\left(v\right)\)_, respectively. Moreover, we need to recompute_ \(\textsc{NonAgreement}_{G^{+}}\left(x,w\right)\) _for all_ \(x,w\in\mathbb{NAO}\left(u\right)\cup\mathbb{NAO}\left(v\right)\) _and update their_ \(\varepsilon\)_-agreement values._ _These changes can be applied in \(\mathcal{O}\left(\sum_{\left\{x,w\right\}\in E^{+}\cap X\times X}\left(\log\deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\right)\) time, where \(X=\mathbb{NAO}\left(u\right)\cup\mathbb{NAO}\left(v\right)\)._ Proof.: The correctness of this lemma follows from the discussion just before it. Therefore, we only give the time analysis. In the first case, finding the vertices in each \(\mathbb{NAO}\) takes \(\mathcal{O}\left(\log\deg_{G^{+}}(u)+\log\deg_{G^{+}}(v)\right)\). The removal would be \(\mathcal{O}\left(1\right)\) amortized time. The second case, i.e. flipping the sign from \(-\) to \(+\), would cost more, as it may require changing all the positive edges with both endpoints in the set \(X\). In the worst case, one may assume that all the \(\varepsilon\)-agreement values of edges with both endpoints inside \(X\) change. Therefore, we need to update each of them in their corresponding \(\mathbb{NAO}\) indices. We need to add vertices \(v\) and \(u\) to \(\mathbb{NAO}(u)\) and \(\mathbb{NAO}(v)\), respectively, based on \(\textsc{NonAgreement}_{G^{+}}\left(u,v\right)\). Finding their correct positions inside the sorted \(\mathbb{NAO}\)s is possible with \(\mathcal{O}\left(\log\deg_{G^{+}}(u)+\log\deg_{G^{+}}(v)\right)\) comparisons. Again, adding them at their corresponding positions would take \(\mathcal{O}\left(1\right)\) amortized time. For the other positive edges with both endpoints in the set \(X\), such as \(\left\{x,w\right\}\), we might need to update their \(\varepsilon\)-agreement. Without loss of generality, assume that \(\deg_{G^{+}}(x)\leq\deg_{G^{+}}(w)\).
Doing so requires re-computation of NonAgreement__\({}_{G^{+}}\left(x,w\right)\), in \(\mathcal{O}\left(\min\left\{\deg_{G^{+}}(x),\deg_{G^{+}}(w)\right\}\right)\), querying the NonAgreement inside \(\mathbb{NAO}\left(x\right)\), in time \(\mathcal{O}\left(\log\deg_{G^{+}}(x)\right)\), and then possible updating its position, in both \(\mathbb{NAO}\left(x\right)\) and \(\mathbb{NAO}\left(w\right)\) requires \(\mathcal{O}\left(\log\deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\). In the worst case, all the elements in \(X\) may need update. Therefore, this case can be accomplished in \[\mathcal{O}\left(\sum_{\left\{x,w\right\}\in E^{+}\cap X\times X}\left(\log \deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\right),\] in the worst-case. Using Lemma 6, we can state the NAO-FlipEdge\(\left(u,v\right)\) algorithm (Algorithm 3) which flips the sign of the edge \(\left\{u,v\right\}\). The correctness and the running-time of this algorithm follows directly from Lemma 6. There are cases where a set of positive edges \(E_{v}^{+}\) are also given for a newly added vertex \(v\). One can first add in \(\mathbb{NAO}\left(G\right)\). ``` 1:procedureNAO-FlipEdge\(\left(u,v\right)\) 2: Let \(\mathcal{A}\leftarrow\mathbb{NAO}\left(u\right)\cup\mathbb{NAO}\left(v\right)\). 3:if Sign of edge \(\left\{u,v\right\}\) is positive then 4: Update the graph \(G^{+}\) by removing edge \(\left\{u,v\right\}\). 5: Remove the vertices \(u\) and \(v\) from \(\mathbb{NAO}\left(v\right)\) and \(\mathbb{NAO}\left(u\right)\), respectively. 6:else 7: Update the graph \(G^{+}\) by adding edge \(\left\{u,v\right\}\). 8: Compute NonAgreement__\({}_{G^{+}}\left(u,v\right)\). 9: Add the vertices \(u\) and \(v\) to \(\mathbb{NAO}\left(v\right)\) and \(\mathbb{NAO}\left(u\right)\), respectively. 10:endif 11:for all\(w\in\mathcal{A}\)do 12: Recompute NonAgreement__\({}_{G^{+}}\left(u,w\right)\) and update \(\mathbb{NAO}\left(u\right)\) and \(\mathbb{NAO}\left(w\right)\). 13: Recompute NonAgreement__\({}_{G^{+}}\left(v,w\right)\) and update \(\mathbb{NAO}\left(v\right)\) and \(\mathbb{NAO}\left(w\right)\). 14:endfor 15:endprocedure ``` **Algorithm 3** The NAO-FlipEdge\(\left(u,v\right)\) algorithm to apply the effect of flipping the sign of the edge \(\left(u,v\right)\) in \(\mathbb{NAO}\left(G\right)\). the vertex with all negative edges, and afterwards, flip all the edges in \(E_{v}^{+}\), so they would become positively signed. However, a batch operation would give us higher performance in practice. **Lemma 7**.: _Assume that a vertex \(v\) is added to graph \(G\) with new positive signed edges \(E_{v}^{+}\). Then,_ 1. _If_ \(E_{v}^{+}=\emptyset\)_, then_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\) _does not need any updates and we just add a new_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) _to_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\) _with_ \(v\) _as its only element in constant-time._ 2. _Otherwise, let_ \(X=N_{G^{+}}(v)\cup\left(\cup_{x\in N_{G^{+}}(v)}N_{G^{+}}(x)\right)\)_. We need to compute the_ \(\textsc{NonAgreement}_{G^{+}}\left(x,w\right)\) _for each positive edges_ \(\left\{x,w\right\}\) _with_ \(x,w\in X\) _after adding all the new positive edges in_ \(E_{v}^{+}\)_, and possibly update_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(x\right)\) _and_ \(\mathbb{N}\mathbb{A}\mathbb{O}\left(w\right)\)_. 
This would require_ \[\mathcal{O}\left(\sum_{\left\{x,w\right\}\in(E^{+}\cup E_{v}^{+})\cap X\times X }\left(\log\deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\right),\] (7) _operations in the worst case._ Proof.: The correctness would follow immediately from the Lemma 6. Again, the first case can be accomplished in \(\mathcal{O}\left(1\right)\) time. For the second case, we need to calculate \(\left|X\right|\) values of NonAgreement, and add them or update them in their corresponding \(\mathbb{N}\mathbb{A}\mathbb{O}\)s. This would require \[\mathcal{O}\left(\sum_{\left\{x,w\right\}\in(E^{+}\cup E_{v}^{+})\cap X\times X }\left(\log\deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\right), \tag{8}\] by exactly the same discussion as in Lemma 6. _Remark 1_.: In comparing the time required for Lemmas 6 and 7, please note that the size of the set \(X\) can be much larger in Lemma 7 than the one in Lemma 6, depending on the size of \(E_{v}^{+}\) for the new vertex \(v\). The \(\textsc{NAO-AddVertex}(x,N_{x})\) algorithm (Algorithm 4) adds a new vertex \(x\) to \(G^{+}\) and all its neighboring positively signed edges \(N_{x}\), and updates the \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\). The correctness and the running-time of this algorithm follows directly from Lemma 7. ``` 1:procedureNAO-AddVertex\((x,N_{x})\) 2: Update graph \(G^{+}\) by adding the new vertex \(x\). 3: Construct a new \(\mathbb{N}\mathbb{A}\mathbb{O}\left(x\right)\) and add it to \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\). 4: Let \(X\gets N_{x}\cup\left(\cup_{z\in N_{x}}N_{G^{+}}(z)\right)\). 5:for all\(y\in X\)do 6:for all\(z\in X\setminus y\)do 7: Update \(\mathbb{N}\mathbb{A}\mathbb{O}\left(y\right)\) and \(\mathbb{N}\mathbb{A}\mathbb{O}\left(z\right)\) if the \(\textsc{NonAgreement}_{G^{+}}\left(y,z\right)\) changes. 8:endfor 9:endfor 10:endprocedure ``` **Algorithm 4** The \(\textsc{NAO-AddVertex}(x,N_{x})\) algorithm to add a new vertex \(x\) and all its positive neighbors \(N_{x}\) to \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\). For the last operation, removing an existing vertex \(v\) with a set of positive edges \(E_{v}^{+}\), can be accomplished by first flipping the sign of all of its adjacent edges in \(E_{v}^{+}\) from \(+\) to \(-\), and afterwards, removing its \(\mathbb{N}\mathbb{A}\mathbb{O}\left(v\right)\) from \(\mathbb{N}\mathbb{A}\mathbb{O}\left(G\right)\). Similar to vertex addition, we can do it also in batch-mode, hoping for better performance in practice. The algorithm for the \(\textsc{NAO-RemoveVertex}(x)\) is similar to the \(\textsc{NAO-AddVertex}(x,N_{x})\) algorithm (Algorithm 4). **Lemma 8**.: _Assume that an existing vertex \(v\) is removed from the graph \(G\) with the set of adjacent positive signed edges \(E_{v}^{+}\). Then,_ 1. _If_ \(E_{v}^{+}=\emptyset\)_, then_ \(\mathbb{NAO}\left(v\right)\) _has a single element, itself, and can be easily removed from_ \(\mathbb{NAO}\left(G\right)\)_. Nothing else would require a change, so it is of constant-time complexity._ 2. _Otherwise, let_ \(X=N_{G^{+}}(v)\cup\left(\cup_{x\in N_{G^{+}}(v)}N_{G^{+}}(x)\right)\)_. We need to compute the_ \(\textsc{NonAgreement}_{G^{+}}\left(x,w\right)\) _for each positive edges_ \(\{x,w\}\) _with_ \(x,w\in X\) _after removing all the edges in_ \(E_{v}^{+}\)_, and possibly update_ \(\mathbb{NAO}\left(x\right)\) _and_ \(\mathbb{NAO}\left(w\right)\)_. 
In the worst-case, this would require_ \[\mathcal{O}\left(\sum_{\{x,w\}\in(E^{+}\cup E_{v}^{+})\cap X\times X}\left( \log\deg_{G^{+}}(x)+\log\deg_{G^{+}}(w)\right)\right),\] (9) _operations._ Proof.: The correctness of this lemma is implied by the Lemma 6. The discussion of the running-time is exactly the same as in the proof of Lemma 7. ## 4 Experiments To evaluate the proposed method, we used 7 graphs which are formed by user-user interactions. These datasets are described in Table 1 and are accessible through the SNAP [8]. The smallest dataset consists of \(22\,470\) nodes with \(171\,002\) edges and the largest one consists of \(317\,080\) nodes with \(1\,049\,866\) edges. In all these graphs, we consider the existing edges as positively signed and non-existing edges as negatively signed. The distribution of the NonAgreements for each dataset is illustrated in Figures 1(a) to 1(a). The distribution of NonAgreements of the edges in almost all the datasets obeys the normal distribution, except small imperfections in Arxiv ASTRO-PH (Figure 1(a)), Arxiv COND-MAT (Figure 3(a)), and DBLP (Figure 7(a)). Moreover, more detailed statistics on these distributions are given in Tables 2. One single observation is that the most frequent value of the NonAgreement in all the sample datasets is 1. Why is that? Without loss of generality, assume that \(\deg_{G}(u)\geq\deg_{G}(v)\). Then, \(\left|N_{G}(v)\right|=2\left|N_{G}(u)\cap N_{G}(v)\right|\). By the assumption \(\left|N_{G}(u)\right|\geq 2\left|N_{G}(u)\cap N_{G}(v)\right|\), i.e. the intersection of the neighborhood of vertices \(u\) and \(v\) consists of at most half of the neighborhood of \(u\). Also, exactly half of the neighborhood of \(v\) falls at the intersection of the neighborhood of \(u\) and \(v\). Intuitively, the vertex \(u\) has clustering preference with extra vertices at least the number of vertices which both \(u\) and \(v\) have clustering preference. Similarly, the vertex \(v\) has clustering preference with exactly half extra vertices which both \(u\) and \(v\) have clustering preference. For each dataset, we have used a different set of \(\varepsilon\)-Schedules, depending on the distribution of their NonAgreements. More precisely: (1) we have sorted the values of non-agreements in each dataset in a non-decreasing order, with repetitions. (2) Then, we have selected 21 distinct values equally spaces of these values. (3) The \(\varepsilon\)-Schedule was set to the selected values in the second step, which appended the value of 0 at the beginning and the value of 1.99 to the end if either does not exists. Totally, the number of \(\varepsilon\)-Schedule for each dataset is either 21, 22 or 23. An interesting observation is the number of clusters for \(\varepsilon\approx 1^{-}\) in Figures 1(b) to 1(b). Note that we use \(1^{-}\) to denote the interval \([1-\epsilon,1]\) for some non-zero constant \(\epsilon>0\). When \(\varepsilon\approx 1\) but less than 1, the number of clusters is minimum. As it gets closer to 1, the number of clusters increases with a much greater descent than the decrease in the number of clusters as it gets close to 1 from 0. As in Corollary 1, by increasing the \(\varepsilon\), the number of vertices in \(\varepsilon\)-agreement is a non-decreasing function, which is confirmed by the plots in Figures 1(a) to 1(a) as the number of vertices in \(\varepsilon\)-non-agreement is given by a non-increasing function. 
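One plausible reading of the \(\varepsilon\)-Schedule construction described above can be sketched as follows; the exact handling of repeated values and the rounding are our assumptions, not the paper's code.

```python
# Sketch of the eps-Schedule construction used in the experiments: sort all
# non-agreement values (with repetitions), pick 21 values at equally spaced
# positions, then add 0 and 1.99 if they are not already present.

def build_eps_schedule(non_agreement_values, k=21):
    values = sorted(non_agreement_values)        # non-decreasing, with repetitions
    n = len(values)
    picks = {values[round(i * (n - 1) / (k - 1))] for i in range(k)}
    picks.update((0.0, 1.99))                    # appended only if absent (it is a set)
    return sorted(picks)                         # strictly increasing, as in Eq. (6)
```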
By closer visual inspection of these figures, we can see that the shape of the plot for the number of \(\varepsilon\)-non-agreement vertices in all these graphs is almost the same, with inflection point around the value of \(\varepsilon\approx 1\). This is due to the intrinsic nature of the NonAgreement\({}_{G}\left(u,v\right)\). Similarly, the non-monotonicity result stated in Corollary 1 is observed in the same figures for the number of \(\varepsilon\)-light vertices. By a visual inspection, the trend of the number of \(\varepsilon\)-light vertices for almost all datasets, except for the Arxiv HEP-TH (Figure 4(a)) and the EU-Email (Figure 6(a)), is the same: the number of \(\varepsilon\)-light vertices increases as \(\varepsilon\) increases up to some point (first interval), then decreases slightly (second interval), and finally increases and would be asymptotically equal to the number of vertices in the graphs (last interval). For Arxiv HEP-TH and EU-Email, we have the same trend, however, the second interval is very small. All the algorithms for the naive and the proposed index-based correlation clustering algorithms are implemented in C++1 without any parallelization and the experiments are done using an Ubuntu 22.04.1 TLS with an Intel Core i7-10510U CPU @ 1.80GHz with 12 GB of RAM. The time for running the naive correlation clustering algorithm (Algorithm 1), denoted here as CC, as well as the time for the index-based correlation clustering algorithm denoted as ICC, is given in Figures 2d to 8d). Note that the time reported for the ICC in these figures does not include the required time for constructing the N\(\mathsf{A}\)O, as they are constructed once and used throughout the \(\varepsilon\)-Schedule. The running-time to read the graph as well as constructing the N\(\mathsf{A}\)O is reported in Table 3 in milliseconds. The CC and ICC algorithms are the same, except that in CC, the non-agreement values of the edges and the \(\varepsilon\)-lightness of the vertices are computed for each given value of \(\varepsilon\), however, in ICC these are computed and stored in the proposed N\(\mathsf{A}\)O structure once and used for clustering with respect to different values of \(\varepsilon\). As it can be observed in Figures 2d to 8d, the running time for the ICC, excluding the time to construct the N\(\mathsf{A}\)O for once, is largely smaller than the one for CC. On average, our approach for the described \(\varepsilon\)-Schedule lead to \(\%25\) decrease in clustering time. This enhancement comes at the cost of pre-computing the N\(\mathsf{A}\)O, which costs on average \(\%34\) of the time for a single run of CC, which is quite small and makes the ICC efficient in cases where one requires to have multiple clustering for various values of \(\varepsilon\). Footnote 1: [https://github.com/alishakiba/Correlation-Clustering-Algorithm-for-Dynamic-Complete-Signed-Graphs-An-Index-based-Approach](https://github.com/alishakiba/Correlation-Clustering-Algorithm-for-Dynamic-Complete-Signed-Graphs-An-Index-based-Approach) ## 5 Conclusion In this paper, we proposed a novel indexing structure to decrease the overall running-time of an approximation algorithm for the correlation clustering problem. This structure can be constructed in \(\mathcal{O}\left(m\times\alpha(G)\right)\) time with \(\mathcal{O}\left(m\right)\) memory. 
Then, we can output a correlation clustering for any value of \(\varepsilon\) in \(\mathcal{O}\left(m+n\right)\), compared with \(\mathcal{O}\left(m\times(2+\alpha(G))+n\right)\) time complexity of the ordinary correlation clustering algorithm. Moreover, \begin{table} \begin{tabular}{|l|r|r|} \hline **Dataset** & **Nodes** & **Edges** \\ \hline _Arxiv ASTRO-PH_[7] & 18 772 & 198 110 \\ \hline _MUSAE-Facebook_[11] & 22 470 & 171 002 \\ \hline _Arxiv COND-MAT_[7] & 23 133 & 93 497 \\ \hline _Arxiv HEP-TH_[6] & 27 770 & 352 807 \\ \hline _Enron-Email_[9] & 36 692 & 183 831 \\ \hline _EU-Email_[7] & 265 214 & 420 045 \\ \hline _DBLP_[13] & 317 080 & 1 049 866 \\ \hline \end{tabular} \end{table} Table 1: Description of the datasets. \begin{table} \begin{tabular}{|l|r|r|r||r|r|} \hline **Dataset** & **Distinct** & **Minimum** & **Maximum** & **Top 2 frequent values** \\ \hline _Arxiv ASTRO-PH_ & 16 436 & 0.015 873 0 & 1.967 21 & 1 — 9 664 & 0.5 — 2 992 \\ \hline _MUSAE-Facebook_ & 12 988 & 0.031 746 & 1.978 02 & 1 — 18 534 & 1.25 — 2 346 \\ \hline _Arxiv COND-MAT_ & 3 893 & 0.044 444 & 1.964 91 & 1 — 12 018 & 0.5 — 4 954 \\ \hline _Arxiv HEP-TH_ & 23 285 & 0.086 956 5 & 1.976 19 & 1 — 24 852 & 1.333 33 – 4 910 \\ \hline _Enron-Email_ & 20 273 & 0.090 909 1 & 1.954 55 & 1 — 31 704 & 0.5 — 6 796 \\ \hline _EU-Email_ & 28 612 & 0.117 647 & 1.990 52 & 1 — 441 520 & 0.997 658 — 682 \\ \hline _DBLP_ & 11 611 & 0.013 793 1 & 1.981 13 & 1 — 167 590 & 0.5 — 67 238 \\ \hline \end{tabular} \end{table} Table 2: Statistics of NonAgreement in each dataset. the proposed index can be efficiently maintained during updates to the underlying graph, including edge sign flip, vertex addition and vertex deletion. The theoretical results are accompanied with practical results in the experiments using seven real world graphs. The experimental results show about %34 decrease in the running-time of queries. A future research direction would be studying this algorithm in parallel frameworks such as Map-Reduce and make it scalable to very Big graphs. Another research direction would be enhancing the approximation guarantee of the algorithm, or devising more efficient algorithms in terms of approximation ratio.
2310.09387
Mass Spectrum of Non-Charmed and Charmed Meson States in Extended Linear-Sigma Model
The mass spectrum of different meson particles is generated using an effective Lagrangian of the extended linear-sigma model (eLSM) for scalar and pseudoscalar meson fields and quark flavors, up, down, strange, and charm. Analytical formulas for the masses of scalar, pseudoscalar, vector, and axialvector meson states are derived assuming global chiral symmetry. The various eLSM parameters are analytically deduced and numerically computed. This enables accurate estimations of the masses of sixteen non-charmed and thirteen charmed meson states at vanishing temperature. The comparison of these results to recent compilation of the particle data group (PDG) allows to draw the conclusion that the masses of sixteen non-charmed and thirteen charmed meson states calculated in the eLSM are in good agreement with the PDG. This shows that the eLSM, with its configurations and parameters, is an effective theoretical framework for determining the mass spectra of various non-charmed and charmed meson states, particularly at vanishing temperature.
Azar I. Ahmadov, Azza A. Alshehri, Abdel Nasser Tawfik
2023-10-13T20:12:59Z
http://arxiv.org/abs/2310.09387v2
# Mass Spectrum of Non-Charmed and Charmed Meson States in Extended Linear-Sigma Model ###### Abstract For scalar and pseudoscalar meson fields and the up, down, strange and charm quark flavors, we construct the effective Lagrangian of the extended linear-sigma model (eLSM). Based on its global chiral symmetry, analytical expressions for the masses of the scalar, pseudoscalar, vector and axial-vector meson states are derived. The various eLSM parameters are determined, analytically and estimated, numerically. This allows a precise calculations for the masses of sixteen non-charmed and thirteen charmed meson states. We confront these results with the recent compilation of the particle data group (PDG) and concluded that the masses of the sixteen non-charmed and thirteen charmed meson states calculated with the eLSM are in excellent agreement with the PDG. Thus, we suggest that the eLSM with its parameters is a suitable theoretical approach to determine the mass spectra of the various non-charmed and charmed meson states, especially at vanishing temperature. Chiral Lagrangians, Sigma model, Charmed mesons pacs: 2.39.Fe,12.40.Yx,13.25.Ft + Footnote †: preprint: ECTP-2023-11, WLCAPP-2023-11, FUE-2023-11 ## I Introduction The extended linear-sigma model (eLSM), in which four quark flavors in addition to the scalar and pseudoscalar meson fields are properly integrated in an effective Lagrangian [1; 2], allows the construction of meson states \(\langle\bar{q}q\rangle=\langle\bar{q}_{\ell}q_{r}-\bar{q}_{\ell}q_{r}\rangle\neq 0\) (with at least one charm quark) [3]. With their chiral structures, the various meson states can be classified according to the relevant quantum numbers, orbital angular momenta \(J\), parity \(P\), and charge conjugates \(C\) into scalar \(J^{PC}=0^{++}\) pseudoscalar \(J^{PC}=0^{-+}\), vector \(J^{PC}=1^{--}\), and axialvector mesons \(J^{PC}=1^{++}\)[4; 5]. This work presents a systematic estimation of different non-charmed and charmed meson states. To this end, the effective Lagrangian i) describes various low-lying non-charmed and charmed meson states and ii) determines their mass spectra, at vanishing temperature. We develop and introduce an extended version of eLSM in which four quark flavors (up, down, strange, and charm) are included together with scalar and pseudoscalar glueball fields (dilaton Lagrangian). These are the basic degrees of freedom in the eLSM, which was succeeded in describing the vacuum phenomenology of the non-strange, strange, and charm mesons [6; 7; 8; 9; 10]. In this regard, we recall the quantum choromodynamics (QCD), the a gauge theory for color interactions of quarks and gluons, in which the Lagrangian is invariant under local color transformations, i.e., the physical content remains unchanged if colors of quarks and gluons are transformed, while the interactions themselves are flavor-blind. In eLSM, the global chiral symmetry is explicitly broken by non-vanishing quark masses and quantum effects [11], and spontaneously broken by a non-vanishing expectation value of the quark condensate in the QCD vacuum. 
The new version of eLSM is an effective framework to study the QCD symmetries including i) the isospin symmetry which is a global transformation referring to SU(2) rotation in flavor space of up and down quarks, where the Lagrangian is invariant for identical or vanishing masses [12; 13; 14], ii) the global chiral symmetry, which is exact in the chiral limit of massless QCD, i.e., left- and right-handed pieces of the Lagrangian are separately invariant, \(\Psi=\Psi_{R}+\Psi_{L}\) and \(\Psi_{R,L}=(1/2)(1\pm\gamma^{5})\Psi\)[15], iii) spontaneous chiral symmetry breaking. i.e., the chiral symmetry broken by properties of the QCD vacuum, e.g., Higgs mechanism, where pions are the lightest Goldstone bosons of the broken symmetry [16; 17], iv) discrete \(C\), \(P\), and \(T\) symmetries [18], and v) classical dilatation (scale) symmetry [19]. As for SU(4), we emphasize that good four decades ago, the spin-zero mass spectrum and the leptonic decay constants were determined in the one-loop approximation of SU(4) linear-sigma model [20]. The phenomenology of charmed mesons was investigated in eLSM [18]. Also recently, the quark-hadron phase structure was studied in the mean-field approximation of the SU(4) Polyakov linear-sigma model, at finite temperatures and densities [6] The present script is organized as follows. The whole formalism of the mesonic part of the Lagrangian of the SU(4) extended linear-sigma model is outlined in section II. Our results on the non-charmed and charmed meson states and their mass spectra are discussed in section III. Section IV is devoted to the final conclusions and outlook. Mesonic Lagrangian of extended linear-sigma model The extended linear-sigma model (eLSM) is an effective QCD-like model, in which the chiral symmetry is realized in the hadronic approaches in a linear representation [21]. The nonlinear representation considers only the Goldstone bosons besides the vector mesons [22; 23]. The linear representation, on the other hand, allows to consider the chiral partners of the Goldstone bosons, the scalar mesons. Its extension to the vector sector allows the introduction of vector and axialvector mesons. Therefore, the construction of an extended linear-sigma model (eLSM), with either \(N_{f}=2\) or \(N_{f}=3\) or recently \(N_{f}=4\) attracted the attention of many theoreticians. As shortly discussed in the introduction, eLSM takes into account the chiral symmetry besides many other QCD symmetries [24]. The Lagrangian for \(N_{f}=4\) with a global chiral invariance [6] has an analogous form as that of the corresponding Lagrangian for \(N_{f}=3\)[10]. Only for \(N_{f}=4\) the mass term \(-2\mathtt{Tr}[\epsilon\Phi^{\dagger}\Phi]\) must be added [11]. 
Then, the Lagrangian of the mesonic sector including scalar, pseudo-scalar, vector, and axial-vector meson states in addition to scalar and pseudo-scalar glueballs and the interactions together with their corresponding anomalies becomes [4; 5] \[\mathcal{L}_{\mathtt{m}} = \mathcal{L}_{\mathtt{ps}}+\mathcal{L}_{\mathtt{av}}+\mathcal{L}_ {\mathtt{int}}+\mathcal{L}_{\mathtt{anomaly}}+\mathcal{L}_{\mathtt{dilaton}} +\mathcal{L}_{\mathtt{emass}}, \tag{1}\] where \[\mathcal{L}_{\mathtt{ps}} = \mathtt{Tr}\left[\left(\mathcal{D}^{\mu}\Phi\right)^{\dagger} \left(\mathcal{D}^{\mu}\Phi\right)-m_{0}^{2}\left(\frac{G}{G_{0}}\right)^{2} \mathtt{Tr}\left(\Phi^{\dagger}\Phi\right)\right]-\lambda_{1}\left[\mathtt{Tr }\left(\Phi^{\dagger}\Phi\right)\right]^{2}-\lambda_{2}\mathtt{Tr}\left[ \left(\Phi^{\dagger}\Phi\right)\right]^{2} \tag{2}\] \[+ \mathtt{Tr}\left[H\left(\Phi+\Phi^{\dagger}\right)\right],\] \[\mathcal{L}_{\mathtt{av}} = -\frac{1}{4}\mathtt{Tr}\left[\left(L^{\mu\nu}\right)^{2}+\left(R ^{\mu\nu}\right)^{2}\right]+\mathtt{Tr}\left\{\left[\left(\frac{G}{G_{0}} \right)^{2}\frac{m^{2}}{2}+\Delta\right]\left[\left(L^{\mu}\right)^{2}+\left( R^{\mu}\right)^{2}\right]\right\}\] (3) \[-2ig_{2}\left\{\mathtt{Tr}\left(L_{\mu\nu}\left[L^{\mu},L^{\nu} \right]\right)+\mathtt{Tr}\left(R_{\mu\nu}\left[R^{\mu},R^{\nu}\right]\right) \right\},\] \[\mathcal{L}_{\mathtt{int}} = \frac{h_{1}}{2}\mathtt{Tr}\left(\Phi^{\dagger}\Phi\right)\mathtt{ Tr}\left[\left(L^{\mu}\right)^{2}+\left(R^{\mu}\right)^{2}\right]+h_{2}\mathtt{Tr} \left[\left(R^{\mu}\right)^{2}\Phi+\left(L^{\mu}\Phi\right)^{2}\right]+2h_{3} \mathtt{Tr}\left(L^{\mu}\Phi R_{\mu}\Phi^{\dagger}\right),\] (4) \[\mathcal{L}_{\mathtt{anomaly}} = C\left(\det\Phi+\det\Phi^{\dagger}\right)^{2}+iC_{\tilde{G}} \left(\det\Phi+\det\Phi^{\dagger}\right),\] (5) \[\mathcal{L}_{\mathtt{dilaton}} = \frac{1}{2}\left(\mathcal{D}_{\mu}G\right)^{2}-\frac{1}{4}\frac{m_ {G}^{2}}{\Lambda^{2}}\left(G^{4}\ln\frac{G^{2}}{\Lambda^{2}}-\frac{G^{4}}{4} \right),\] (6) \[\mathcal{L}_{\mathtt{emass}} = -2\mathtt{Tr}\left[\epsilon\left(\Phi^{\dagger}\Phi\right)\right], \tag{7}\] where \(G\) is the scalar glueball and \(\tilde{G}\) is the pseudoscalar glueball. \(C\), \(C_{\tilde{G}}\) are constants introduced for a better fits of the mesons and glueballs, respectively. \(m_{G}\) is the lowest-lying glueball mass, which is determined in the quenched approximation, i.e., no quarks [25]. \(\Lambda\) is the QCD scaling parameter. The dilaton Lagrangian is conjectured to mimic the QCD trace anomaly. 
The field \(\Phi\) is a complex \(N_{f}\times N_{f}\) matrix for scalar \(\sigma_{a}\) with \(J^{PC}=0^{++}\), pseudoscalar \(\pi_{a}\) with \(J^{PC}=0^{-+}\), vector with \(J^{PC}=0^{--}\), and axialvector mesons with \(J^{PC}=0^{++}\), Appendix B \[\Phi = \sum_{a=0}^{N_{f}^{2}-1}T_{a}\left(\sigma_{a}+i\pi_{a}\right), \tag{8}\] where the scalar mesons are given as \[T_{a}\pi_{a} = \frac{1}{\sqrt{2}}\left(\begin{array}{cccc}\frac{\sigma_{0}}{ 2}+\frac{\sigma_{3}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{6}}+\frac{\sigma_{15}} {2\sqrt{3}}&\frac{\sigma_{1}-i\sigma_{2}}{\sqrt{2}}&\frac{\sigma_{4}-i\sigma_ {5}}{\sqrt{2}}&\frac{\sigma_{9}-i\sigma_{10}}{\sqrt{\sqrt{2}}}\\ \frac{\sigma_{1}+i\sigma_{2}}{\sqrt{2}}&\frac{\sigma_{0}}{\sqrt{2}}-\frac{ \sigma_{3}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{6}}+\frac{\sigma_{15}}{2\sqrt{3 }}&\frac{\sigma_{6}-i\sigma_{7}}{\sqrt{2}}&\frac{\sigma_{11}-i\sigma_{12}}{ \sqrt{2}}\\ \frac{\sigma_{4}+i\sigma_{5}}{\sqrt{2}}&\frac{\sigma_{6}+i\sigma_{7}}{\sqrt{2} }&\frac{\sigma_{0}}{2}-\sqrt{\frac{3}{2}}\sigma_{8}+\frac{\sigma_{15}}{2\sqrt {3}}&\frac{\sigma_{13}-i\sigma_{14}}{\sqrt{2}}\\ \frac{\sigma_{9}+i\sigma_{10}}{\sqrt{2}}&\frac{\sigma_{11}+i\sigma_{12}}{\sqrt {2}}&\frac{\sigma_{13}+i\sigma_{14}}{\sqrt{2}}&\frac{\sigma_{0}}{2}-\frac{ \sqrt{3}}{2}\sigma_{15}\end{array}\right). \tag{9}\] Similarly, the pseudo-scalar mesons become \[T_{a}\pi_{a} = \frac{1}{\sqrt{2}}\left(\begin{array}{cccc}\frac{\pi_{0}}{2}+ \frac{\pi_{3}}{\sqrt{2}}+\frac{\pi_{8}}{\sqrt{6}}+\frac{\pi_{15}}{2\sqrt{3}}& \frac{\pi_{1}-i\pi_{2}}{\sqrt{2}}&\frac{\pi_{4}-i\pi_{5}}{\sqrt{2}}&\frac{\pi _{9}-i\pi_{10}}{\sqrt{\sqrt{2}}}\\ \frac{\pi_{1}+i\pi_{2}}{\sqrt{2}}&\frac{\pi_{0}}{\sqrt{2}}-\frac{\pi_{3}}{ \sqrt{2}}+\frac{\pi_{8}}{\sqrt{6}}+\frac{\pi_{15}}{2\sqrt{3}}&\frac{\pi_{6}-i \pi_{7}}{\sqrt{2}}&\frac{\pi_{11}-i\pi_{12}}{\sqrt{2}}\\ \frac{\pi_{4}+i\pi_{5}}{\sqrt{2}}&\frac{\pi_{6}+i\pi_{7}}{\sqrt{2}}&\frac{\pi _{0}}{2}-\sqrt{\frac{3}{2}}\pi_{8}+\frac{\pi_{15}}{2\sqrt{3}}&\frac{\pi_{13}- i\pi_{14}}{\sqrt{2}}\\ \frac{\pi_{9}+i\pi_{10}}{\sqrt{2}}&\frac{\pi_{11}+i\pi_{12}}{\sqrt{2}}&\frac{ \pi_{13}+i\pi_{14}}{\sqrt{2}}&\frac{\pi_{0}}{2}-\frac{\sqrt{3}}{2}\pi_{15} \end{array}\right). \tag{10}\] The chiral symmetry is explicitly broken by nonvanishing external field matrices \(H\), \(\Delta\), and \(\epsilon\), \[H = \sum_{a=0}^{N_{f}^{2}-1}h_{a}T_{a}=h_{0}T_{0}+h_{8}T_{8}+h_{15}T_{ 15}, \tag{11}\] \[\Delta = \sum_{a=0}^{N_{f}^{2}-1}h_{a}\delta_{a}=h_{0}\delta_{0}+h_{8} \delta_{15}+h_{a}\delta_{15},\] (12) \[\epsilon = \epsilon_{c}=m_{c}^{2}=\frac{1}{2}\left[m_{\chi^{c0}}^{2}-m_{0}^{ 2}-\lambda_{1}\left(\sigma_{x}^{2}+\sigma_{y}^{2}\right)-3\sigma_{c}^{2}\left( \lambda_{1}+\lambda_{2}\right)\right], \tag{13}\] where \(T_{a}=\lambda_{a}/2\) are the generators of the group U(\(N_{f}\)) with \(\lambda_{a}\) being the Gell-Mann matrices, Appendix A. In SU(4)\({}_{\ell}\times\)SU(4)\({}_{r}\) model, the quark condensates are given as \[\sigma_{x} = \frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{3}}+\frac{ \sigma_{15}}{\sqrt{6}}, \tag{14}\] \[\sigma_{y} = \frac{\sigma_{0}}{2}-\sqrt{\frac{2}{3}}\sigma_{8}+\frac{1}{2\sqrt {3}}\sigma_{15},\] (15) \[\sigma_{c} = \frac{\sigma_{0}}{2}+\frac{\sqrt{3}}{2}\sigma_{15}, \tag{16}\] where \(\sigma_{x}\) counts for the light quark (up and down) condensate and \(\sigma_{y}\) counts for the strange quark condensate. \(\sigma_{c}\) as the name says is the charm quark condensates. 
On the other hand, the quark masses \(m_{q}\) could be related to the resulting complex \(N_{f}\times N_{f}\) matrix for scalar \(J^{PC}=0^{++}\), pseudoscalar \(J^{PC}=0^{-+}\), vector \(J^{PC}=0^{--}\), and axialvector mesons \(J^{PC}=0^{++}\), i.e., to \(\Phi\), Eq. (B1). With \(m_{0}=(g/2)\Phi\), where \(g\) is the Yukawa coupling, we get \[m_{u} = \frac{g}{2}\left[\frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{ \sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}\right]=\frac{g}{2}\sigma_{x}, \tag{17}\] \[m_{d} = \frac{g}{2}\left[\frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{ \sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}\right]=\frac{g}{2}\sigma_{x},\] (18) \[m_{s} = \frac{g}{2}\left[\frac{\sigma_{0}}{\sqrt{2}}-\frac{2\sigma_{8}}{ \sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}\right]=\frac{g}{\sqrt{2}}\sigma_{y},\] (19) \[m_{c} = \frac{g}{2}\left[\frac{\sigma_{0}}{\sqrt{2}}-\sqrt{\frac{3}{2}} \sigma_{15}\right]=\frac{g}{\sqrt{2}}\sigma_{c}. \tag{20}\] Then, following quantities could be defined \[\texttt{Tr}\left(\Phi^{\dagger}\Phi\right) = \frac{1}{4}\left[(\sigma_{x})^{2}+(\sigma_{y})^{2}+(\sigma_{c})^{ 2}\right], \tag{21}\] \[\left[\texttt{Tr}\left(\Phi^{\dagger}\Phi\right)\right]^{2} = \frac{1}{4}\left[(\sigma_{x})^{4}+(\sigma_{y})^{4}+(\sigma_{c})^ {4}+2\left(\sigma_{x}\right)^{2}(\sigma_{y})^{2}+2\left(\sigma_{y}\right)^{2} (\sigma_{c})^{2}+(\sigma_{x})^{2}\left(\sigma_{c}\right)^{2}\right],\] (22) \[\texttt{Tr}\left[\epsilon\Phi^{\dagger}\Phi\right] = \frac{1}{2}e_{c}\left(\sigma_{c}\right)^{2},\] (23) \[\texttt{Tr}\left[H\left(\Phi^{\dagger}+\Phi\right)\right] = h_{x}\sigma_{x}+h_{y}\sigma_{y}+h_{c}\sigma_{c},\] (24) \[c\left(\det\Phi+\det\Phi^{\dagger}\right) = \frac{c}{4}\left(\sigma_{x}\right)^{2}\sigma_{y}\sigma_{c}, \tag{25}\] The covariant derivative of the scalar mesons is expressed as \[\mathcal{D}^{\mu}\Phi = \delta_{\mu}\Phi-ig_{1}\left(L^{\mu}\Phi-\Phi R^{\mu}\right)-ieA^ {\mu}\left[T_{3},\Phi\right], \tag{26}\] where \(A_{\mu}=gA_{\mu}^{a}\lambda^{a}/2\) is the electromagnetic field. For vector and axialvector meson nonets, we have \[L^{\mu\nu} = \delta_{\mu}L^{\nu}-ieA^{\mu}\left[T_{3},L^{\nu}\right]-\left\{ \delta^{\nu}\mathcal{L}^{\mu}-ieA^{\nu}\left[T_{3},L^{\mu}\right]\right\}, \tag{27}\] \[R^{\mu\nu} = \delta_{\mu}R^{\nu}-ieA^{\mu}\left[T_{3},R^{\nu}\right]-\left\{ \delta^{\nu}R^{\mu}-ieA^{\nu}\left[T_{3},R^{\mu}\right]\right\}, \tag{28}\] where \(L^{\mu}=\sum_{a=0}^{N_{f}^{2}-1}T_{a}(V_{a}^{\mu}+A_{a}^{\mu})\) and \(R^{\mu}=\sum_{a=0}^{N_{f}^{2}-1}T_{a}(V_{a}^{\mu}-A_{a}^{\mu})\). ## III Results ### Meson States From Eqs. 
From Eqs. (9) and (10), we can now determine the various meson states, \[a_{0}^{\pm}\equiv\frac{\sigma_{1}\mp i\sigma_{2}}{\sqrt{2}},\qquad a_{0}^{0}\equiv\sigma_{3}, \tag{29}\] \[k^{\pm}\equiv\frac{\pi_{4}\mp i\pi_{5}}{\sqrt{2}},\qquad k^{0}\equiv\frac{\pi_{6}-i\pi_{7}}{\sqrt{2}},\qquad\bar{k^{0}}\equiv\frac{\pi_{6}+i\pi_{7}}{\sqrt{2}}, \tag{30}\] \[\frac{1}{\sqrt{2}}\left(\sigma_{N}+a_{0}^{0}\right)=\frac{a_{0}^{0}}{\sqrt{2}}+\frac{\sigma_{0}}{2}+\frac{\sigma_{8}}{\sqrt{6}}+\frac{\sigma_{15}}{2\sqrt{3}},\qquad\frac{1}{\sqrt{2}}\left(\sigma_{N}-a_{0}^{0}\right)=-\frac{a_{0}^{0}}{\sqrt{2}}+\frac{\sigma_{0}}{2}+\frac{\sigma_{8}}{\sqrt{6}}+\frac{\sigma_{15}}{2\sqrt{3}}, \tag{31}\] \[\sigma_{s}=\frac{\sigma_{0}}{2}-\sqrt{\frac{2}{3}}\sigma_{8}+\frac{\sigma_{15}}{2\sqrt{3}},\qquad\chi_{c0}=\frac{\sigma_{0}}{2}-\frac{\sqrt{3}}{2}\sigma_{15}, \tag{32}\] \[\frac{1}{\sqrt{2}}\left(\eta_{N}+\pi^{0}\right)=\frac{\pi^{0}}{\sqrt{2}}+\frac{\pi_{0}}{2}+\frac{\pi_{8}}{\sqrt{6}}+\frac{\pi_{15}}{2\sqrt{3}},\qquad\frac{1}{\sqrt{2}}\left(\eta_{N}-\pi^{0}\right)=-\frac{\pi^{0}}{\sqrt{2}}+\frac{\pi_{0}}{2}+\frac{\pi_{8}}{\sqrt{6}}+\frac{\pi_{15}}{2\sqrt{3}}, \tag{33}\] \[\eta_{s}=\frac{\pi_{0}}{2}-\sqrt{\frac{2}{3}}\pi_{8}+\frac{\pi_{15}}{2\sqrt{3}},\qquad\eta_{c}=\frac{\pi_{0}}{2}-\frac{\sqrt{3}}{2}\pi_{15}, \tag{34}\] \[D^{0}\equiv\frac{\pi_{9}+i\pi_{10}}{\sqrt{2}},\qquad\bar{D}^{0}\equiv\frac{\pi_{9}-i\pi_{10}}{\sqrt{2}}, \tag{35}\] \[D^{\pm}\equiv\frac{\pi_{11}\pm i\pi_{12}}{\sqrt{2}},\qquad D^{\pm}_{s}\equiv\frac{\pi_{13}\pm i\pi_{14}}{\sqrt{2}}, \tag{36}\] \[D^{0}_{0}\equiv\frac{\sigma_{9}+i\sigma_{10}}{\sqrt{2}},\qquad\bar{D}^{0}_{0}\equiv\frac{\sigma_{9}-i\sigma_{10}}{\sqrt{2}}, \tag{37}\] \[D^{\pm}_{0}\equiv\frac{\sigma_{11}\pm i\sigma_{12}}{\sqrt{2}},\qquad D^{\pm}_{s,0}\equiv\frac{\sigma_{13}\pm i\sigma_{14}}{\sqrt{2}}. \tag{38}\] The following subsection introduces analytical expressions for the mass spectra of the non-charmed and charmed meson states. ### Mass Spectra of Non-Charmed and Charmed Meson States At vanishing temperature, the mass spectra of the non-charmed meson states can be classified into: * Pseudo-scalar mesons \[m_{\pi}^{2} = Z_{\pi}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2}\lambda_{2}\right)\sigma_{x}^{2}+\lambda_{1}\sigma_{y}^{2}+\lambda_{1}\sigma_{c}^{2}\right], \tag{39}\] \[m_{K}^{2} = Z_{K}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2}\lambda_{2}\right)\sigma_{x}^{2}-\frac{1}{2}\lambda_{2}\sigma_{x}\sigma_{y}+\lambda_{1}\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\lambda_{2}\sigma_{y}^{2}\right], \tag{40}\] \[m_{\eta_{N}}^{2} = Z_{\pi}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2}\lambda_{2}\right)\sigma_{x}^{2}+\lambda_{1}\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\frac{c}{2}\left(\sigma_{x}^{2}+\sigma_{y}^{2}+\sigma_{c}^{2}\right)\right], \tag{41}\] \[m_{\eta_{S}}^{2} = Z_{\eta_{S}}^{2}\left[m_{0}^{2}+\lambda_{1}\left(\sigma_{x}^{2}+\sigma_{y}^{2}+\sigma_{c}^{2}\right)+\lambda_{2}\sigma_{y}^{2}+\frac{c}{8}\left(\sigma_{x}^{2}+\sigma_{y}^{2}+\sigma_{c}^{2}\right)\right], \tag{42}\] where the various wavefunction renormalization factors \(Z_{\pi}\), \(Z_{K}\), and \(Z_{\eta_{S}}\) are listed in Appendix D.
* Scalar mesons \[m_{a_{0}}^{2} = m_{0}^{2}+\lambda_{1}\left[\sigma_{x}^{2}+\sigma_{y}^{2}+\sigma_{ c}^{2}\right]+\frac{3}{2}\lambda_{2}\sigma_{x}^{2},\] (43) \[m_{k_{0}^{*}}^{2} = Z_{k_{0}^{*}}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2} \lambda^{2}\right)\sigma_{x}^{2}+\frac{1}{\sqrt{2}}\lambda_{2}\sigma_{x} \sigma_{y}+\lambda_{1}\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\lambda_{2} \sigma_{c}^{2}\right],\] (44) \[m_{\sigma_{N}}^{2} = m_{0}^{2}+3\left(\lambda_{1}+\frac{1}{2}\lambda_{2}\right) \sigma_{x}^{2}+\lambda_{1}\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right],\] (45) \[m_{\sigma_{S}}^{2} = m_{0}^{2}+\lambda_{1}\left[\sigma_{x}^{2}+\sigma_{c}^{2}\right] +3\left(\lambda_{1}+\lambda_{2}\right)\sigma_{y}^{2},\] (46) where \(Z_{k_{0}^{*}}\) is another wavefunction renormalization factor. * Vector mesons \[m_{\omega_{N}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}\left(h_{1}+h_{2}+h_{3}\right) \sigma_{x}^{2}+\frac{1}{2}h_{1}\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+2 \delta_{x},\] (47) \[m_{\omega_{S}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}h_{1}\left[\sigma_{x}^{2}+\sigma_ {c}^{2}\right]+\frac{1}{2}\left(h_{1}+2h_{2}+2h_{3}\right)\sigma_{y}^{2}+2 \delta_{x},\] (48) \[m_{K^{*}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{4}\sigma_{x}^{2}\left[g_{1}^{2}+2h_ {1}+h_{2}\right]+\frac{1}{\sqrt{2}}\sigma_{x}\sigma_{y}\left[h_{3}-g_{1}^{2}\right]\] (49) \[+ \frac{1}{2}\sigma_{y}^{2}\left[g_{1}^{2}+h_{1}+h_{2}\right]+\frac {1}{2}h_{1}\sigma_{c}^{2}+\delta_{x}+\delta_{y},\] \[m_{\rho}^{2} = m_{\omega_{N}}^{2}.\] (50) * Axial-vector mesons \[m_{a_{1}}^{2} = m_{1}^{2}-m_{0}^{2}+g_{1}^{2}\sigma_{x}^{2}+\frac{1}{2}h_{1} \left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\frac{1}{2}\sigma_{x}^{2}\left[h_{1 }+h_{2}+h_{3}\right]+2\delta_{x},\] (51) \[m_{f_{1s}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}h_{1}\left[\sigma_{x}^{2}+\sigma_ {c}^{2}\right]+2g_{1}^{2}\sigma_{c}^{2}+\frac{1}{2}\left[h_{1}+2h_{2}-2h_{3} \right]\sigma_{y}^{2}+2\delta_{y},\] (52) \[m_{k_{1}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{4}\sigma_{x}^{2}\left[g_{1}^{2}+2h _{1}+h_{2}\right]+\frac{1}{2}\sigma_{y}^{2}\left[g_{1}^{2}+h_{1}+h_{2}\right]\] (53) \[+ \frac{1}{\sqrt{2}}\sigma_{x}\sigma_{y}\left[g_{1}^{2}-h_{3}\right] +\frac{1}{2}h_{1}\sigma_{c}^{2}+\delta_{x}+\delta_{y},\] \[m_{1N}^{2} = m_{a_{1}}^{2}.\] (54) Also, we determine the mass spectra of the charmed meson states. * Pseudo-scalar charmed mesons \[m_{D}^{2} = Z_{D}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2}\lambda_{2} \right)\sigma_{x}^{2}+\lambda_{1}\sigma_{y}^{2}+\left(\lambda_{1}+\lambda_{2} \right)\sigma_{c}^{2}-\frac{1}{\sqrt{2}}\lambda_{2}\sigma_{x}\sigma_{y}+ \epsilon_{c}\right],\] (55) \[m_{\eta_{c}}^{2} = Z_{\eta_{c}}^{2}\left[m_{0}^{2}+\lambda_{1}\left[\sigma_{x}^{2}+ \sigma_{y}^{2}\right]+\left(\lambda_{1}+\lambda_{2}\right)\sigma_{c}^{2}-\frac {c}{8}\left(\sigma_{x}^{2}+\sigma_{y}^{2}\right)+2\epsilon_{c}\right],\] (56) \[m_{D_{s}}^{2} = Z_{D_{s}}^{2}\left[m_{0}^{2}+\lambda_{1}\sigma_{x}^{2}+\left( \lambda_{1}+\lambda_{2}\right)\sigma_{y}^{2}+\left(\lambda_{1}+\lambda_{2} \right)\sigma_{c}^{2}-\lambda_{2}\sigma_{y}\sigma_{c}+\epsilon_{c}\right],\] (57) where \(Z_{D}\), \(Z_{\eta_{c}}\) and \(Z_{D_{s}}\) are wavefunction renormalization factors. 
* Scalar charmed mesons \[m_{\chi_{c0}}^{2} = m_{0}^{2}+\lambda_{1}\left[\sigma_{x}^{2}+\sigma_{y}^{2}\right] +3\left(\lambda_{1}+\lambda_{2}\right)\sigma_{c}^{2}+2\epsilon_{c},\] (58) \[m_{D_{0}^{0}}^{2} = Z_{D_{0}^{*}}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2} \lambda_{2}\right)\sigma_{x}^{2}+\lambda_{1}\sigma_{y}^{2}+\frac{1}{\sqrt{2}} \lambda_{2}\sigma_{x}\sigma_{c}+\left(\lambda_{1}+\lambda_{2}\right)\sigma_{c }^{2}+\epsilon_{c}\right],\] (59) \[m_{D_{0}^{*0}}^{2} = Z_{D_{0}^{*0}}^{2}\left[m_{0}^{2}+\left(\lambda_{1}+\frac{1}{2} \lambda_{2}\right)\sigma_{x}^{2}+\lambda_{1}\sigma_{y}^{2}+\frac{1}{\sqrt{2}} \lambda_{2}\sigma_{x}\sigma_{c}+\left(\lambda_{1}+\lambda_{2}\right)\sigma_{c }^{2}+\epsilon_{c}\right],\] (60) \[m_{D_{s}^{*}}^{2} = Z_{D_{s0}^{*}}^{2}\left[m_{0}^{2}+\lambda_{1}\sigma_{x}^{2}+ \left(\lambda_{1}+\lambda_{2}\right)\sigma_{y}^{2}+\lambda_{2}\sigma_{y}\sigma _{c}+\left(\lambda_{1}+\lambda_{2}\right)\sigma_{c}^{2}+\epsilon_{c}\right],\] (61) where \(Z_{D_{0}^{*}}\), \(Z_{D_{0}^{*0}}\) and \(Z_{D_{s0}^{*}}\) are additional wavefunction renormalization factors. * Vector charmed mesons \[m_{D^{*}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{4}\left(g_{1}^{2}+2h_{1}+h_{2} \right)\sigma_{x}^{2}+\frac{1}{\sqrt{2}}\sigma_{x}\sigma_{y}\left[h_{3}-g_{1} ^{2}\right]\] (62) \[+ \frac{1}{2}\left(g_{1}^{2}+h_{1}+h_{2}\right)\sigma_{c}^{2}+ \frac{1}{2}h_{1}\sigma_{y}^{2}+\delta_{x}+\delta_{c},\] \[m_{J/\psi}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}h_{1}\left[\sigma_{x}^{2}+\sigma_ {y}^{2}\right]+\frac{1}{2}\left(h_{1}+2h_{2}+2h_{3}\right)\sigma_{c}^{2}+2 \delta_{c},\] (63) \[m_{D_{s}^{*}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}\left(g_{1}^{2}+h_{1}+h_{2} \right)\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\frac{1}{2}h_{1}\sigma_{x}^{2}\] (64) \[+ \left(h_{3}-g_{1}^{2}\right)\sigma_{y}\sigma_{c}+\delta_{y}+ \delta_{c}.\] * Axial-vector charmed mesons \[m_{D_{s1}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}\left(g_{1}^{2}+h_{1}+h_{2} \right)\left[\sigma_{y}^{2}+\sigma_{c}^{2}\right]+\sigma_{y}\sigma_{c}\left(g_ {1}^{2}-h_{3}\right)+\frac{1}{2}h_{1}\sigma_{x}^{2}+\delta_{y}+\delta_{c},\] (65) \[m_{D_{1}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{4}\left(g_{1}^{2}+2h_{1}+h_{2} \right)\sigma_{x}^{2}+\frac{1}{2}\left(g_{1}^{2}+h_{1}+h_{2}\right)\sigma_{c}^{2} +\frac{1}{\sqrt{2}}\left(g_{1}^{2}-h_{3}\right)\sigma_{x}\sigma_{c}\] (66) \[+ \frac{1}{2}h_{1}\sigma_{y}^{2}+\delta_{y}+\delta_{c},\] \[m_{\chi_{c1}}^{2} = m_{1}^{2}-m_{0}^{2}+\frac{1}{2}h_{1}\left[\sigma_{x}^{2}+\sigma_ {y}^{2}\right]+2g_{1}^{2}\sigma_{c}^{2}+\frac{1}{2}\left(h_{1}+2h_{2}-2h_{3} \right)\sigma_{c}^{2}+2\delta_{c}.\] (67) The Lagrangian of the extended linear-sigma model, Eq. (1), has various parameters including \(h_{1}\), \(h_{2}\), \(h_{3}\), \(\delta_{u}\), \(\delta_{d}\), \(\delta_{s}\), \(\delta_{c}\), \(\delta_{x}\), \(\delta_{y}\), \(\epsilon_{u}\), \(\epsilon_{d}\), \(\epsilon_{s}\), \(\epsilon_{c}\), \(C\), \(\lambda_{1}\), \(\lambda_{2}\), \(g_{1}\), and \(g_{2}\). All these parameters are explained and determined in the Appendix E. For large \(N_{c}\), both parameters \(h_{1}\) and \(\lambda_{1}\) vanish, especially for SU(3) meson masses. The other set of parameters, \(\sigma_{u}\), \(\sigma_{d}\), \(\sigma_{s}\), \(\sigma_{c}\), \(m_{u}\), \(m_{d}\), \(m_{s}\), \(m_{c}\), \(f_{\pi}\), and \(f_{K}\), can be fixed from recent compilation of particle data group [26]. Our calculations for the masses of various meson states are summarized in Tabs. I and II. 
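The tree-level mass formulas above translate directly into code. As a small illustration (with the parameter values themselves left open; they are fixed as described in Appendix E), the scalar-sector masses of Eqs. (43), (45), (46), and (58) can be evaluated as follows:

```python
def scalar_masses_squared(m0_sq, lam1, lam2, eps_c, sig_x, sig_y, sig_c):
    """Tree-level (mass)^2 of the scalar mesons a_0, sigma_N, sigma_S and chi_c0,
    following Eqs. (43), (45), (46) and (58); all inputs in consistent units."""
    return {
        "a_0":     m0_sq + lam1 * (sig_x**2 + sig_y**2 + sig_c**2) + 1.5 * lam2 * sig_x**2,
        "sigma_N": m0_sq + 3 * (lam1 + 0.5 * lam2) * sig_x**2 + lam1 * (sig_y**2 + sig_c**2),
        "sigma_S": m0_sq + lam1 * (sig_x**2 + sig_c**2) + 3 * (lam1 + lam2) * sig_y**2,
        "chi_c0":  m0_sq + lam1 * (sig_x**2 + sig_y**2) + 3 * (lam1 + lam2) * sig_c**2 + 2 * eps_c,
    }
```

The physical masses follow by taking square roots; the states with derivative mixing additionally require the wavefunction renormalization factors of Appendix D.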
Our calculations, in which \(h_{1}\), \(\lambda_{1}\), \(\lambda_{2}\) and \(g_{1}\) are the only free parameters, are in excellent agreement with the recent compilation of the particle data group (PDG) [26]. The percent error is listed in the last column. The reason why some of these free parameters play a crucial role for particular meson states can be traced back to the corresponding analytical expressions for those states. The section that follows is devoted to the final conclusions and outlook. ## IV Conclusions and Outlook We have constructed the meson states, characterized by the nonvanishing quark condensate \(\langle\bar{q}q\rangle=\langle\bar{q}_{\ell}q_{r}+\bar{q}_{r}q_{\ell}\rangle\neq 0\), with and without charm quarks from the effective Lagrangian of the extended linear-sigma model. With respect to their quantum numbers, total angular momentum \(J\), parity \(P\), and charge conjugation \(C\), the meson states can be classified into pseudoscalar \(J^{PC}=0^{-+}\), scalar \(J^{PC}=0^{++}\), vector \(J^{PC}=1^{--}\), and axialvector \(J^{PC}=1^{++}\) states. We have introduced analytical expressions for the mass spectrum of sixteen non-charmed and thirteen charmed meson states, Section III.2. The Appendices give further details needed for the analytical analysis. The numerical analysis and the corresponding free parameters are summarized in Tabs. I and II, in which the calculations are confronted with the recent compilation of the PDG. We conclude that the masses of the sixteen non-charmed and thirteen charmed meson states are in excellent agreement with the PDG. Therefore, we also conclude that the eLSM with its set of parameters represents a suitable theoretical approach for the mass spectrum of non-charmed and charmed meson states at vanishing temperature. The temperature dependence of these meson states represents a natural outlook to be pursued elsewhere. Also, the in-medium modifications of these meson masses, whether in a dense, magnetic, or electric medium, shall be derived elsewhere. ## Appendix A Gell-Mann Matrices in SU(4) In SU(4), the linearly independent traceless Hermitian Gell-Mann matrices are defined as \[\lambda_{1}=\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{2}=\left(\begin{array}{cccc}0&-i&0&0\\ i&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{3}=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{4}=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&0&0&0\end{array}\right),\] \[\lambda_{5}=\left(\begin{array}{cccc}0&0&-i&0\\ 0&0&0&0\\ i&0&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{6}=\left(\begin{array}{cccc}0&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{7}=\left(\begin{array}{cccc}0&0&0&0\\ 0&0&-i&0\\ 0&i&0&0\\ 0&0&0&0\end{array}\right),\qquad\lambda_{8}=\frac{1}{\sqrt{3}}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&-2&0\\ 0&0&0&0\end{array}\right),\] and \(\lambda_{0}=\mathtt{I}/\sqrt{2}\), with \(\mathtt{I}\) being the \(4\times 4\) identity matrix [21].
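The remaining matrices \(\lambda_{9},\dots,\lambda_{15}\) follow the same pattern for the index pairs that involve the fourth (charm) row and column, with \(\lambda_{15}=\mathrm{diag}(1,1,1,-3)/\sqrt{6}\) in the standard convention. As a compact cross-check (illustrative only), all sixteen matrices can be generated programmatically and tested against the normalization \(\mathtt{Tr}(\lambda_{a}\lambda_{b})=2\delta_{ab}\):

```python
import numpy as np

def su4_lambda_matrices():
    """SU(4) Gell-Mann-type matrices lambda_0..lambda_15 (standard conventions)."""
    lam = {0: np.eye(4, dtype=complex) / np.sqrt(2)}
    # Off-diagonal index pairs in the conventional ordering of lambda_a
    pairs = {1: (0, 1), 4: (0, 2), 6: (1, 2), 9: (0, 3), 11: (1, 3), 13: (2, 3)}
    for a, (i, j) in pairs.items():
        sym = np.zeros((4, 4), dtype=complex); sym[i, j] = sym[j, i] = 1.0
        asym = np.zeros((4, 4), dtype=complex); asym[i, j] = -1j; asym[j, i] = 1j
        lam[a], lam[a + 1] = sym, asym
    lam[3] = np.diag([1.0, -1.0, 0.0, 0.0]).astype(complex)
    lam[8] = np.diag([1.0, 1.0, -2.0, 0.0]).astype(complex) / np.sqrt(3)
    lam[15] = np.diag([1.0, 1.0, 1.0, -3.0]).astype(complex) / np.sqrt(6)
    return lam

lam = su4_lambda_matrices()
for a, la in lam.items():
    for b, lb in lam.items():
        expected = 2.0 if a == b else 0.0
        assert abs(np.trace(la @ lb) - expected) < 1e-12
print("Tr(lambda_a lambda_b) = 2 delta_ab verified for all 16 matrices")
```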
## Appendix B \(4\times 4\)\(\Phi\) fields \[\Phi = T_{0}\sigma_{0}+T_{8}\sigma_{8}+T_{15}\sigma_{15}=\frac{1}{2} \lambda_{0}\sigma_{0}+\frac{1}{2}\lambda_{8}\sigma_{8}+\frac{1}{2}\lambda_{15 }\sigma_{15} \tag{16}\] \[= \frac{1}{2}\sqrt{\frac{1}{2}}\left(\begin{array}{cccc}\sigma_{ 0}&0&0&0\\ 0&\sigma_{0}&0&0\\ 0&0&\sigma_{0}&0\end{array}\right)+\frac{1}{2}\sqrt{\frac{1}{3}}\left( \begin{array}{cccc}\sigma_{8}&0&0&0\\ 0&\sigma_{8}&0&0\\ 0&0&-2\sigma_{8}&0\\ 0&0&0&0\end{array}\right)+\frac{1}{2}\sqrt{\frac{1}{6}}\left(\begin{array}[] {cccc}\sigma_{15}&0&0&0\\ 0&\sigma_{15}&0&0\\ 0&0&-2\sigma_{15}&0\\ 0&0&0&-3\sigma_{15}\end{array}\right)\] \[= \frac{1}{2}\left(\begin{array}{cccc}\frac{\sigma_{0}}{\sqrt{2}} +\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}&0&0&0\\ 0&\frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15}}{ \sqrt{6}}&0&0\\ 0&0&\frac{\sigma_{0}}{\sqrt{2}}-\frac{2\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15} }{\sqrt{6}}&0\\ 0&0&0&\frac{\sigma_{0}}{\sqrt{2}}-\sqrt{\frac{3}{2}}\sigma_{15}\end{array} \right).\] Similarly, we get \[\Phi^{\dagger} = \frac{1}{2}\left(\begin{array}{cccc}\frac{\sigma_{0}}{\sqrt{2} }+\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}&0&0&0\\ 0&\frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15}}{ \sqrt{6}}&0&0\\ 0&0&\sqrt{2}\left(\frac{\sigma_{0}}{2}-\sqrt{\frac{2}{3}}\sigma_{8}+\frac{ \sigma_{15}}{2\sqrt{3}}\right)&0\\ 0&0&0&\sqrt{2}\left(\frac{\sigma_{0}}{2}-\frac{\sqrt{3}}{2}\sigma_{15}\right) \end{array}\right) \tag{17}\] so that \[\Phi\Phi^{\dagger} = \frac{1}{4}\left(\begin{array}{cccc}\left(\frac{\sigma_{0}}{\sqrt{2 }}+\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{2}&0&0&0\\ 0&\left(\frac{\sigma_{0}}{\sqrt{2}}+\frac{\sigma_{8}}{\sqrt{3}}+\frac{\sigma_ {15}}{\sqrt{6}}\right)^{2}&0&0\\ 0&0&2\left(\frac{\sigma_{0}}{2}-\sqrt{\frac{2}{3}}\sigma_{8}+\frac{\sigma_{15 }}{2\sqrt{3}}\right)^{2}&0\\ 0&0&0&2\left(\frac{\sigma_{0}}{2}-\frac{\sqrt{3}}{2}\sigma_{15}\right)^{2} \end{array}\right). 
\tag{10}\] ## Appendix C Explicit Chiral Symmetry Breaking The chiral symmetry in explicitly broken \[H = \sum_{a=0}^{15}h_{a}T_{a}=h_{0}T_{0}+h_{8}T_{8}+h_{15}T_{15} \tag{11}\] \[= \frac{1}{2}\left(\begin{array}{cccc}\frac{h_{0}}{\sqrt{2}}+h_ {3}+\frac{h_{8}}{\sqrt{3}}+\frac{h_{15}}{\sqrt{6}}&h_{1}-ih_{2}&h_{4}-ih_{5}&h _{9}-ih_{10}\\ h_{1}+ih_{2}&\frac{h_{0}}{\sqrt{2}}-h_{3}+\frac{h_{8}}{\sqrt{3}}+\frac{h_{15}} {\sqrt{6}}&h_{6}-ih_{7}&h_{11}-ih_{12}\\ h_{4}+ih_{5}&h_{6}+ih_{7}&\frac{h_{0}}{\sqrt{2}}+2\frac{h_{8}}{\sqrt{3}}+\frac {h_{15}}{\sqrt{6}}&h_{13}-ih_{14}\\ h_{9}-ih_{10}&h_{11}+ih_{12}&h_{13}+ih_{14}&\frac{h_{0}}{\sqrt{2}}+\sqrt{\frac {2}{3}}h_{15}\end{array}\right).\] ## Appendix D Wavefunction renormalization factors Here, we summarize the wavefunction renormalization factors needed for different meson masses, \[Z_{\pi}=Z_{\eta}=\frac{m_{a_{1}}}{\sqrt{m_{a_{1}}^{2}-g_{1}^{2} \sigma_{x}^{2}}}, Z_{k}=\frac{2m_{k_{1}}}{\sqrt{4m_{k_{1}}^{2}-g_{1}^{2}\left[ \sigma_{x}+\sqrt{2}\sigma_{y}\right]^{2}}}, \tag{12}\] \[Z_{k_{s}}=\frac{2m_{K}}{\sqrt{4m_{K}^{2}-g_{1}^{2}\left[\sigma_{ x}+\sqrt{2}\sigma_{y}\right]^{2}}}, Z_{\eta_{s}}=\frac{m_{f_{1s}}}{\sqrt{m_{f_{1s}}^{2}-2g_{1}^{2} \sigma_{y}^{2}}},\] (13) \[Z_{\eta_{c}}=\frac{m_{\chi_{c1}}}{\sqrt{m_{\chi_{c1}}^{2}-2g1^{2 }\sigma_{c}^{2}}}, Z_{D}=\frac{2m_{D_{1}}}{\sqrt{4m_{D_{1}}-g_{1}^{2}\left(\sigma _{c}+\sqrt{2}\sigma_{c}\right)^{2}}},\] (14) \[Z_{\eta_{s}}=\frac{m_{f_{1s}}}{\sqrt{m_{f_{1s}}^{2}-2g_{1}^{2} \sigma_{y}^{2}}}, Z_{D_{s}}=\frac{\sqrt{2}m_{D_{s_{1}}}}{\sqrt{2m_{D_{s_{1}}}^{2} -g_{1}^{2}(\sigma_{y}^{2}+\sigma_{c})^{2}}},\] (15) \[Z_{D_{0}^{*}}=\frac{2m_{D^{*}}}{\sqrt{4m_{D^{*}}-g_{1}^{2}( \sigma_{x}^{2}-\sqrt{2}\sigma_{c})^{2}}}, Z_{D_{0}^{*0}}=\frac{2m_{D^{*0}}}{\sqrt{4m_{D^{*0}}^{2}-g_{1}^{2} (\sigma_{x}-\sqrt{2}\sigma_{c})^{2}}},\] (16) \[Z_{D_{s0}^{*}}=\frac{\sqrt{2}m_{D_{s}^{*}}}{\sqrt{2m_{D_{s}^{*} }^{2}-g_{1}^{2}\left(\sigma_{y}-\sigma_{c}\right)^{2}}}, \tag{17}\] where \[g_{1}^{2} = \frac{m_{a_{1}}^{2}}{f_{\pi}^{2}Z_{\pi}^{2}}\left(1-\frac{1}{Z_{ \pi}^{2}}\right). \tag{18}\] ### Mesonic Potential of SU(4) Linear-Sigma Model We start with the mesonic potential of the SU(3) linear-sigma model, \[U_{m}^{SU(3)}(\Phi) = m^{2}\texttt{Tr}\left(\Phi^{\dagger}\Phi\right)+\lambda_{1}\left[ \texttt{Tr}\left(\Phi^{\dagger}\Phi\right)\right]^{2}+\lambda_{2}\left[ \texttt{Tr}\left(\Phi^{\dagger}\Phi\right)\right]^{2} \tag{10}\] \[- c\left[\det\left(\Phi\right)+\det\left(\Phi^{\dagger}\right) \right]-\texttt{Tr}\left[H\left(\Phi+\Phi^{\dagger}\right)\right].\] The mesonic potential of SU(4) has an analogous form as that of SU(3). For a precise estimation of the mass spectrum, a new term must be added, namely \(-2\texttt{Tr}\left[\epsilon\Phi^{\dagger}\Phi\right]\) so that \[U_{m}^{SU(4)}(\Phi) = \frac{m^{2}}{2}\left(\sigma_{x}^{2}+\sigma_{y}^{2}+\sigma_{15}^{ 2}\right)^{2}+\lambda_{1}\left[4\left(\sigma_{x}+\frac{\sigma_{15}}{\sqrt{6}} \right)^{4}+\left(\sqrt{2}\sigma_{y}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{4}+ \left(\sqrt{\frac{2}{3}}\sigma_{0}-\sqrt{\frac{3}{2}}\sigma_{15}\right)^{4}\right. 
\tag{11}\] \[\left.+4\left(\sigma_{x}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{2} \left(\sqrt{2}\sigma_{y}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{2}+4\left(\sigma _{x}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{2}\left(\sqrt{\frac{3}{2}}\sigma_{0 }-\sqrt{\frac{3}{2}}\sigma_{15}\right)^{2}\right.\] \[\left.+2\left(\sqrt{2}\sigma_{y}+\frac{\sigma_{15}}{\sqrt{6}} \right)^{2}\left(\sqrt{\frac{2}{3}}\sigma_{0}-\sqrt{\frac{3}{2}}\sigma_{15} \right)^{2}\right]\] \[+ \lambda_{2}\left[2\left(\sigma_{x}+\frac{\sigma_{15}}{\sqrt{6}} \right)^{4}+\left(\sqrt{2}\sigma_{y}+\frac{\sigma_{15}}{\sqrt{6}}\right)^{4}+ \left(\sqrt{\frac{2}{3}}\sigma_{0}-\sqrt{\frac{3}{2}}\sigma_{15}\right)^{4}\right]\] \[- \frac{c}{8}\left[\frac{2}{3}\sigma_{x}^{2}\sigma_{y}\sigma_{0}+ \frac{\sigma_{y}\sigma_{15}^{2}\sigma_{0}}{3\sqrt{3}}+\frac{2\sqrt{2}}{3} \sigma_{x}\sigma_{y}\sigma_{15}\sigma_{0}+\frac{1}{3}\sigma_{x}^{2}\sigma_{15} \sigma_{0}+\frac{\sigma_{15}^{3}\sigma_{0}}{8}\right.\] \[\left.+\frac{\sqrt{2}\sigma_{x}\sigma_{15}\sigma_{0}^{2}}{3\sqrt{ 3}}-\sqrt{3}\sigma_{x}^{2}\sigma_{y}\sigma_{15}-\frac{\sigma_{15}^{3}\sigma_{y }}{2\sqrt{3}}-\sqrt{2}\sigma_{x}\sigma_{y}\sigma_{15}^{2}-\frac{\sigma_{x}^{2} \sigma_{15}^{2}}{2}-\frac{\sigma_{15}^{4}}{12}-\frac{\sigma_{x}\sigma_{15}^{3} }{\sqrt{6}}\right].\] ## Appendix E Parameters of the SU(4) Linear-Sigma Model We start with the potential of the SU(4) linear-sigma model, Eq. (11). The global minimum, i.e., vanishing partial derivatives with respect to \(\sigma_{x}\), \(\sigma_{y}\), and \(\sigma_{c}\) lead to \[h_{x} = m^{2}\sigma_{x}-\frac{c}{2}\sigma_{x}\sigma_{y}\sigma_{c}+ \lambda_{1}\sigma_{x}\sigma_{y}^{2}+\lambda_{2}\sigma_{x}\sigma_{c}^{2}+\frac{ 1}{2}\left(2\lambda_{1}+\lambda_{2}\right)\sigma_{x}^{2}, \tag{12}\] \[h_{y} = m^{2}\sigma_{y}-\frac{c}{2}\sigma_{x}^{2}\sigma_{c}+\lambda_{1} \sigma_{x}^{2}\sigma_{y}+\lambda_{2}\sigma_{y}\sigma_{c}^{2}+\left(\lambda_{1} +\lambda_{2}\right)\sigma_{y}^{2},\] (13) \[h_{c} = m^{2}\sigma_{c}-\frac{c}{2}\sigma_{x}^{2}\sigma_{y}+\lambda_{1} \sigma_{x}^{2}\sigma_{c}+\lambda_{2}\sigma_{y}^{2}\sigma_{c}+\left(\lambda_{1} +\lambda_{2}\right)\sigma_{c}^{2}, \tag{14}\] where \(\lambda_{1}\) and \(\lambda_{2}\) are respectively defined as \[\lambda_{1} = \frac{m_{\sigma}^{2}-m_{\pi}^{2}-m_{a_{0}}^{2}+m_{\eta}^{2}}{3f_{ \pi}^{2}}, \tag{15}\] \[\lambda_{2} = \frac{3\left(2f_{K}-f_{\pi}\right)m_{K}^{2}-3\left(2f_{K}-f_{\pi} \right)m_{\pi}^{2}-2\left(f_{K}-f_{\pi}\right)\left(m_{\eta^{\prime}}^{2}+m_{ \eta}^{2}\right)}{\left(f_{K}-f_{\pi}\right)\left(3f_{\pi}^{2}+8f_{K}\left(f_{ K}-f_{\pi}\right)\right)}, \tag{16}\] where \(f_{K}\) and \(f_{\pi}\) are the decay constants of \(K\) and \(\pi\) mesons which can be taken from the recent compilation of the particle data group [27]. The U(1)\({}_{\texttt{A}}\) anomaly breaking term \(C\) is fixed by \(\lambda_{2}\) and the difference of pion and Kaon masses, \[C = \frac{m_{K}^{2}-m_{\pi}^{2}}{f_{K}-f_{\pi}}-\lambda_{2}\left(2f_{K}-f_{ \pi}\right). \tag{100}\] The external field \(\Delta\) could be expressed from the Lagrangian term \(\texttt{Tr}[\Delta(L^{\mu\nu}+L^{\mu\nu})]\) \[\Delta = \left(\begin{array}{cccc}\delta_{u}&0&0&0\\ 0&\delta_{d}&0&0\\ 0&0&\delta_{s}&0\\ 0&0&0&\delta_{c}\end{array}\right), \tag{101}\] from which we deduce that \[\left(\begin{array}{c}\delta_{u}\\ \delta_{d}\\ \delta_{s}\\ \delta_{c}\end{array}\right) = \left(\begin{array}{c}m_{u}^{2}\\ m_{d}^{2}\\ m_{s}^{2}\\ m_{c}^{2}\end{array}\right). 
\tag{102}\] In the isospin-symmetric limit, it turns possible to set \(\delta_{u}=\delta_{d}=0\). In this case, for \(\delta_{x}\), \(\delta_{y}\), and \(\delta_{c}\), we might use the mass equations of vector mesons, for example, \(m_{\omega_{N}}^{2}\), \(m_{\omega_{S}}^{2}\), and \(m_{\chi_{c1}}^{2}\). Then, we get \[\delta_{x} = \frac{1}{2}\left[m_{\omega_{N}}^{2}-m_{1}^{2}+m_{0}^{2}-\frac{ \sigma_{x}^{2}}{2}\left(h_{1}+h_{2}+h_{3}\right)-\frac{h_{1}}{2}\left(\sigma_{ y}^{2}+\sigma_{c}^{2}\right)\right], \tag{103}\] \[\delta_{y} = \frac{1}{2}\left[m_{\omega_{S}}^{2}-m_{1}^{2}+m_{0}^{2}-\frac{ \sigma_{y}^{2}}{2}\left(\frac{h_{1}}{2}+h_{2}+h_{3}\right)-\frac{h_{1}}{2} \left(\sigma_{x}^{2}+\sigma_{c}^{2}\right)\right],\] (104) \[\delta_{c} = \frac{1}{2}\left[m_{\chi_{c1}}^{2}-m_{1}^{2}+m_{0}^{2}-2g_{1}^{2 }\sigma_{c}^{2}-\sigma_{c}^{2}\left(\frac{h_{1}}{2}+h_{2}-h_{3}\right)-\frac{ h_{1}}{2}\left(\sigma_{y}^{2}+\sigma_{y}^{2}\right)\right]. \tag{105}\] For the mass parameters, \[m^{2} = m_{\pi}^{2}-\frac{f_{\pi}^{2}}{2}\lambda_{2}+\frac{c}{2}\left(2f _{K}-f_{\phi}\right)-\lambda_{1}\left(\frac{1}{2}\left(2f_{K}-f_{\pi}\right)^{ 2}\right), \tag{106}\] \[m_{0}^{2} = \frac{1}{2}\left[m_{a_{0}}^{2}+m_{\sigma_{s}}^{2}-\lambda_{2} \left(\frac{3}{2}\sigma_{x}^{2}+3\sigma_{y}^{2}\right)\right],\] (107) \[m_{1}^{2} = m_{\omega_{s}}^{2}-\left(\frac{h_{1}}{2}+h_{2}+h_{3}\right) \sigma_{s}^{2}-\frac{h_{1}}{2}\sigma_{x}^{2}-2\delta_{s}. \tag{108}\] Finally, for the parameters \(h_{1}\), \(h_{2}\). and \(h_{3}\), we recall the potential of the SU(3) linear-sigma model, Eq. (100), where \(\sigma_{u}\), \(\sigma_{d}\), and \(\sigma_{s}\) can be related to \(\sigma_{0}\), \(\sigma_{3}\), and \(\sigma_{8}\), \[\sigma_{u} = \sqrt{2}\sigma_{0}+\sigma_{3}+\sigma_{8}, \tag{101}\] \[\sigma_{d} = \sqrt{2}\sigma_{0}-\sigma_{3}+\sigma_{8},\] (102) \[\sigma_{s} = \sigma_{0}-\sqrt{2}\sigma_{8}. \tag{103}\] Then, \(h_{0}\), \(h_{3}\), and \(h_{8}\) become \[h_{0} = \frac{1}{\sqrt{6}}\left[f_{\pi}m_{\pi}^{2}+2f_{K}m_{k}^{2}\right], \tag{104}\] \[h_{3} = \left[m^{2}+\frac{c}{\sqrt{6}}\sigma_{0}-\frac{c}{\sqrt{6}} \sigma_{8}+\lambda_{1}\left(\sigma_{0}^{2}+\sigma_{3}^{2}+\sigma_{8}^{2} \right)+\lambda_{2}\left(\sigma_{0}^{2}+\frac{\sigma_{3}^{2}}{2}+\frac{\sigma _{8}^{2}}{2}+\sqrt{2}\sigma_{0}\sigma_{8}\right)\right]\sigma_{3},\] (105) \[h_{8} = \frac{2}{3}\left[f_{\pi}m_{\pi}^{2}-2f_{K}m_{k}^{2}\right]. \tag{106}\]
2301.10197
A Practitioner's Guide to MDP Model Checking Algorithms
Model checking undiscounted reachability and expected-reward properties on Markov decision processes (MDPs) is key for the verification of systems that act under uncertainty. Popular algorithms are policy iteration and variants of value iteration; in tool competitions, most participants rely on the latter. These algorithms generally need worst-case exponential time. However the problem can equally be formulated as a linear program, solvable in polynomial time. In this paper, we give a detailed overview of today's state-of-the-art algorithms for MDP model checking with a focus on performance and correctness. We highlight their fundamental differences, and describe various optimisations and implementation variants. We experimentally compare floating-point and exact-arithmetic implementations of all algorithms on three benchmark sets using two probabilistic model checkers. Our results show that (optimistic) value iteration is a sensible default, but other algorithms are preferable in specific settings. This paper thereby provides a guide for MDP verification practitioners -- tool builders and users alike.
Arnd Hartmanns, Sebastian Junges, Tim Quatmann, Maximilian Weininger
2023-01-24T18:13:25Z
http://arxiv.org/abs/2301.10197v1
# A Practitioner's Guide to ###### Abstract Model checking undiscounted reachability and expected-reward properties on Markov decision processes (MDPs) is key for the verification of systems that act under uncertainty. Popular algorithms are policy iteration and variants of value iteration; in tool competitions, most participants rely on the latter. These algorithms generally need worst-case exponential time. However the problem can equally be formulated as a linear program, solvable in polynomial time. In this paper, we give a detailed overview of today's state-of-the-art algorithms for MDP model checking with a focus on performance and correctness. We highlight their fundamental differences, and describe various optimisations and implementation variants. We experimentally compare floating-point and exact-arithmetic implementations of all algorithms on three benchmark sets using two probabilistic model checkers. Our results show that (optimistic) value iteration is a sensible default, but other algorithms are preferable in specific settings. This paper thereby provides a guide for MDP verification practitioners--tool builders and users alike. ## 1 Introduction The verification of MDPs is crucial for the design and evaluation of cyber-physical systems with sensor noise, biological and chemical processes, network protocols, and many other complex systems. MDPs are the standard model for sequential decision making under uncertainty and thus at the heart of reinforcement learning. Many dependability evaluation and safety assurance approaches rely in some form on the verification of MDPs with respect to temporal logic properties. Probabilistic model checking [4, 5] provides powerful tools to support this task. The essential MDP model checking queries are for the _worst-case probability that something bad happens_ (reachability) and the _expected resource consumption until task completion_ (expected rewards). These are _indefinite (undiscounted) horizon_ queries: They ask about the probability of or the expectation of a random variable up until an event--which forms the horizon--but are themselves unbounded. Many more complex properties internally reduce to solving either reachability or expected rewards. 
For example, if the description of _something bad_ is given as a temporal logic formula, the verification task reduces to solving reachability on a product of the MDP with an automaton.
Popular algorithms for these queries are value iteration (VI), policy iteration (PI), and solving a linear program (LP). VI with the commonly used naive stopping criterion can return arbitrarily wrong results [25], an observation that sparked the development of sound VI variants over the last decade [8, 26, 29, 33, 41, 43]. We show that PI faces a similar problem. When using floating-point arithmetic, additional issues may arise [27, 46]. Our use of various LP solvers exhibits concerning results for a variety of benchmarks. We therefore also include results for _exact_ computation using rational arithmetic. _Limitations of this study._ A thorough experimental study of algorithms requires a carefully scoped evaluation. We work with flat representations of MDPs that fit completely into memory (i.e. we ignore the state space exploration process and symbolic methods). We selected algorithms that are tailored to converge to _the_ optimal value. We also exclude approaches that incrementally build and solve (partial or abstract) MDPs using simulation or model checking results to guide exploration: they are an orthogonal improvement and would equally profit from faster algorithms to solve the partial MDPs. Moreover, this study is on algorithms, not on their implementations. To reduce the impact of potential implementation flaws, we use two independent tools where possible. Our experiments ran on a single type of machine--we do not study the effect of different hardware. Contributions. This paper contributes a thorough overview on how to model-check indefinite horizon properties on MDPs, making MDP model checking more accessible, but also pushing the state-of-the-art by clarifying open questions. Our study builds upon a thorough empirical evaluation that uses two independent code bases, sources benchmarks from the standard benchmark suite and recent publications, compares 10 LP solvers, and studies the influence of various prominent preprocessing techniques. The paper provides new insights and reviews folklore statements: particular highlights are a new simple but challenging MDP family that leads to wrong results on all floating-point LP solvers (Section 2.3), a negative result regarding the soundness of PI with epsilon-precise policy evaluators (Section 4), and an evaluation on numerically challenging benchmarks that shows the limitations of value iteration in a practical setting (Section 5.3). ## 2 Background We recall MDPs with reachability and reward objectives, describe solution algorithms and their guarantees, and address commonly used optimisations. ### Markov Decision Processes Let \(\mathsf{D}_{X}\coloneqq\{\mathsf{d}\colon X\to[0,1]\mid\sum_{x\in X}\mathsf{d}(x)=1\}\) be the set of distributions over \(X\).
Definition 1: A Markov decision process (MDP) [42] is a tuple \(\mathcal{M}=(\mathsf{S},\mathsf{A},\delta)\) with finite sets of states \(\mathsf{S}\) and actions \(\mathsf{A}\), and partially defined transition function \(\delta\colon\mathsf{S}\times\mathsf{A}\to\mathsf{D}_{\mathsf{S}}\) such that \(\mathsf{A}(s)\coloneqq\{\,a\mid(s,a)\in\text{domain}(\delta)\,\}\neq\emptyset\) for all \(s\in\mathsf{S}\). \(\mathsf{A}(s)\) is the set of enabled actions at state \(s\). \(\delta\) maps enabled state-action pairs to distributions over successor states. A Markov chain (MC) is an MDP with \(|\mathsf{A}(s)|=1\) for all \(s\). The _semantics_ of an MDP are defined in the usual way, see, e.g. [6, Chapter 10]. A (memoryless deterministic) policy--a.k.a. strategy or scheduler--is a function \(\pi\colon\mathsf{S}\to\mathsf{A}\) that, intuitively, given the current state \(s\) prescribes what action \(a\in\mathsf{A}(s)\) to play. Applying a policy \(\pi\) to an MDP induces an MC \(\mathcal{M}^{\pi}\). A path in this MC is an infinite sequence \(\rho=s_{1}s_{2}\ldots\) with \(\delta(s_{i},\pi(s_{i}))(s_{i+1})>0\). \(\mathsf{Paths}\) denotes the set of all paths and \(\mathbb{P}_{s}^{\pi}\) denotes the unique probability measure of \(\mathcal{M}^{\pi}\) over infinite paths starting in a state \(s\). A _reachability objective_ \(\mathrm{P_{opt}}(\mathsf{T})\) with set of target states \(\mathsf{T}\subseteq\mathsf{S}\) and \(\mathsf{opt}\in\{\max,\min\}\) induces a random variable \(X\colon\mathsf{Paths}\to[0,1]\) over paths by assigning \(1\) to all paths that eventually reach the target and \(0\) to all others. \(\mathrm{E_{opt}}(\mathsf{rew})\) denotes an _expected reward objective_, where \(\mathsf{rew}\colon\mathsf{S}\to\mathbb{Q}_{\geq 0}\) assigns a reward to each state. \(\mathsf{rew}(\rho):=\sum_{i=1}^{\infty}\mathsf{rew}(s_{i})\) is the accumulated reward of a path \(\rho=s_{1}s_{2}\ldots\). This yields a random variable \(X\colon\mathsf{Paths}\to\mathbb{Q}\cup\{\infty\}\) that maps paths to their reward. For a given objective and its random variable \(X\), the _value of a state_ \(s\in\mathsf{S}\) is the expectation of \(X\) under the probability measure \(\mathbb{P}_{s}^{\pi}\) of the MC induced by an optimal policy \(\pi\), formally \(\mathsf{V}(s):=\mathsf{opt}_{\pi\in\Pi}\mathbb{E}_{s}^{\pi}[X]\). ### Solution Algorithms _Value iteration (VI)_, e.g. [14], computes a sequence of value vectors converging to the optimum in the limit. In all variants of the algorithm, we start with a function \(x\colon\ \mathsf{S}\to\mathbb{Q}\) that assigns to every state an estimate of the value. The algorithm repeatedly performs an update operation to improve the estimates. After some preprocessing, this operation has a unique fixpoint, namely \(x=\mathsf{V}\). Thus, value iteration converges to the value in the limit. Variants of VI include interval iteration [26], sound VI [43] and optimistic VI [29]. We do not discuss these in detail, but instead refer to the respective papers. _Linear Programming (LP)_, e.g. [6, Chapter 10], encodes the transition structure of the MDP and the objective as a linear optimization problem. For every state, the LP has a variable representing an estimate of its value. Every state-action pair is encoded as a constraint on these variables, as are the target set or rewards. The unique optimum of the LP is attained if and only if for every state its corresponding variable is set to the value of the state. We provide an in-depth discussion of theoretical and practical aspects of LP in Section 3.
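To make the VI scheme concrete, the following minimal Python sketch (ours, not the implementation of any tool discussed later) performs Bellman updates for a reachability objective; note that it stops as soon as the largest change falls below a threshold, i.e. it uses exactly the naive criterion whose pitfalls are discussed below:

```python
def value_iteration(n_states, actions, delta, targets, maximize=True, tol=1e-6):
    """Plain VI for P_opt(T). delta[(s, a)] is a list of (successor, probability)
    pairs, actions[s] lists the enabled actions of s, targets is a set of states.
    Stops when the largest absolute change drops below tol (unsound in general)."""
    x = [1.0 if s in targets else 0.0 for s in range(n_states)]
    opt = max if maximize else min
    while True:
        diff = 0.0
        for s in range(n_states):
            if s in targets:
                continue
            new = opt(sum(p * x[t] for t, p in delta[(s, a)]) for a in actions[s])
            diff = max(diff, abs(new - x[s]))
            x[s] = new  # in-place (Gauss-Seidel style) update
        if diff < tol:
            return x
```

Interval iteration, sound VI, and optimistic VI replace this stopping criterion with sound stopping rules.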
_Policy iteration (PI)_, e.g. [10, Section 4], computes a sequence of policies. Starting with an initial policy, we evaluate its induced MC, improve the policy by switching suboptimal choices and repeat the process on the new policy. As every policy improves the previous one and there are only finitely many memoryless deterministic policies (a number exponential in the number of states), eventually we obtain an optimal policy. We further discuss PI in Section 4. ### Guarantees Given the stakes in many model checking applications, we require guarantees about the relation between an algorithm's result \(\bar{v}\) and the true probability or expected reward \(v\). First, implementations are subject to floating-point errors and imprecision unless they use exact (rational) arithmetic or safe rounding [27]. This can result in arbitrary differences between result and true value. Second are the inherent properties of the algorithm: VI is an approximating algorithm that converges to the true value only in the limit. In theory, it is possible to obtain the exact result by rounding after an exponential number of iterations [14]; in practice, this results in excessive runtime. Instead, for years, implementations used a naive stopping criterion that could return arbitrarily wrong results [25]. This problem's discovery [25] sparked the development of various sound variants of VI [8, 26, 29, 33, 41, 43], including interval iteration, sound value iteration, and optimistic value iteration. A sound VI algorithm guarantees \(\varepsilon\)-precise results, i.e. \(|v-\bar{v}|\leq\epsilon\) or \(|v-\bar{v}|\leq v\cdot\epsilon\). For LP and PI, the question of guarantees has not yet been thoroughly investigated. Theoretically, both are exact, but implementations are often not. We discuss the problems in detail in Sections 3 and 4. The handcrafted MC of [25, Figure 2] highlights the lack of guarantees of VI: standard implementations return vastly incorrect results. We extended it with action choices to obtain the MDP \(M_{n}\) shown in Figure 1 for \(n\in\mathbb{N}\), \(n\geq 2\). It has \(2n+1\) states; we compute \(\mathrm{P}_{\min}(\{\,n\,\})\) and \(\mathrm{P}_{\max}(\{\,n\,\})\). The policy that chooses action \(\mathtt{m}\) wherever possible induces the MC of [25, Figure 2] with \((\mathrm{P}_{\min}(\{\,n\,\}),\mathrm{P}_{\max}(\{\,n\,\}))=(\frac{1}{2}, \frac{1}{2})\). In every state \(s\) with \(0<s<n\), we added the choice of action \(\mathtt{j}\) that jumps to \(n\) and -\(n\). With that, the (optimal) values over all policies are \((\frac{1}{3},\frac{2}{3})\). In VI, starting from value \(0\) for all states except \(n\), initially taking \(\mathtt{j}\) everywhere looks like the best policy for \(\mathrm{P}_{\max}\). As updated values slowly propagate, at some point, state-by-state, \(\mathtt{m}\) becomes the optimal choice. We thus layered a "deceptive" decision problem on top of the slow convergence of the original MC. Consequently, for \(n=20\), VI with \(\mathsf{Storm}\) and \(\mathsf{mcsta}\) delivers the incorrect results \((0.247,0.500)\). For \(\mathsf{Storm}\)'s PI and various LP solvers, we show in Table 1 the largest \(n\) for which they return a \(\pm\,0.01\)-correct result. For larger \(n\), PI and all LP solvers claim \(\approx(\frac{1}{2},\frac{1}{2})\) as the correct solution except for \(\mathsf{Glop}\) and \(\mathsf{GLPK}\), which return \(\approx(\frac{1}{3},\frac{1}{2})\) until giving up for the minimum at \(n=29\) and \(52\), respectively. 
Sound VI algorithms and \(\mathsf{Storm}\)'s exact-arithmetic engine produce (\(\epsilon\)-)correct results, though the former at excessive runtime for larger \(n\). We used default settings for all tools and solvers. Figure 1: A hard MDP for all algorithms ### Optimizations VI, LP, and PI can all benefit from the following optimizations. Graph-theoretic algorithms can be used for qualitative analysis of the MDP, i.e. finding states with value \(0\) or (only for reachability objectives) \(1\). These qualitative approaches are typically a lot faster than the numerical computations for quantitative analysis. Thus, we always apply them first and only run the numerical algorithms on the remaining states with non-trivial value. Topological methods, e.g. [16], do not consider the whole MDP at once. Instead, we first compute a topological ordering of the strongly connected components (SCCs)5 and then analyze each SCC individually. This can improve the runtime, as we decompose the problem into smaller subproblems. The subproblems can be solved with any of the solution methods. Note that when considering acyclic MDPs, the topological approach does not need to call the solution methods, as the resulting values can immediately be backpropagated. Footnote 5: A set \(\mathsf{S}^{\prime}\subseteq\mathsf{S}\) is strongly connected if every \(s^{\prime}\in\mathsf{S}^{\prime}\) can be reached from every \(s\in\mathsf{S}^{\prime}\). A strongly connected component is a maximal such set, i.e., no proper superset of it is strongly connected. Collapsing of maximal end components (MECs), e.g., [12, 26], transforms the MDP into one with equivalent values but simpler structure. After collapsing MECs, the MDP is contracting, i.e. we almost surely reach a target state or a state with value zero. VI algorithms rely on this property for convergence [26, 43, 29]. For PI and LP, simplifying the graph structure before applying the solution method can speed up the computation. Warm starts, e.g. [21, 34], may adequately initialize an algorithm, i.e., we may provide it with some prior knowledge so that the computation has a good starting point. We implement warm starts by first running VI for a limited number of iterations and using the resulting estimate to guess bounds on the variables in an LP or a good initial policy for PI. See Sections 3 and 4 for more details. ## 3 Practically solving MDPs using Linear Programs This section considers the LP-based approach to solving the optimal policy problem in MDPs. To the best of our knowledge, this is the only polynomial-time approach. We discuss various configurations. These configurations are a combination of the LP formulation, the choice of software, and their parameterization. ### How to encode MDPs as LPs? For objective \(\mathrm{P}_{\max}(\mathsf{T})\) we formulate the following LP over variables \(x_{s}\), \(s\in\mathsf{S}\setminus\mathsf{T}\): minimize \[\sum_{s\in\mathsf{S}\setminus\mathsf{T}}x_{s}\quad\text{s.t. }lb(s)\leq x_{s}\leq ub(s)\quad\text{and}\] \[x_{s}\geq\sum_{s^{\prime}\in\mathsf{S}\setminus\mathsf{T}}\delta(s,a)(s^{\prime})\cdot x_{s^{\prime}}+\sum_{t\in\mathsf{T}}\delta(s,a)(t)\quad\text{ for all }s\in\mathsf{S}\setminus\mathsf{T},a\in\mathsf{A}(s)\] We assume bounds \(lb(s)=0\) and \(ub(s)=1\) for \(s\in\mathsf{S}\setminus\mathsf{T}\). The unique solution \(\eta\colon\{\,x_{s}\mid s\in\mathsf{S}\setminus\mathsf{T}\,\}\to[0,1]\) to this LP coincides with the desired objective values \(\eta(x_{s})=V(s)\).
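As an illustration (ours; the tools evaluated in this paper use the dedicated solvers discussed in the next subsection), the encoding above can be passed to an off-the-shelf LP solver, here the HiGHS backend of SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def lp_max_reachability(n_states, actions, delta, targets):
    """Solve P_max(T) with the LP above: minimize sum_s x_s subject to
    x_s >= sum_{s'} delta(s,a)(s') * x_{s'} + delta(s,a)(T) and 0 <= x_s <= 1."""
    non_target = [s for s in range(n_states) if s not in targets]
    idx = {s: i for i, s in enumerate(non_target)}
    A_ub, b_ub = [], []
    for s in non_target:
        for a in actions[s]:
            row = np.zeros(len(non_target))
            row[idx[s]] -= 1.0               # move x_s to the left-hand side
            const = 0.0
            for succ, p in delta[(s, a)]:
                if succ in targets:
                    const += p               # probability of reaching T directly
                else:
                    row[idx[succ]] += p
            A_ub.append(row)                 # row . x <= -const  <=>  x_s >= ...
            b_ub.append(-const)
    res = linprog(c=np.ones(len(non_target)), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(non_target), method="highs")
    return {s: res.x[idx[s]] for s in non_target}
```

Since the objective minimises the sum of all variables, the optimum of this LP is the least feasible point and thus coincides with the reachability probabilities.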
Objectives \(\mathrm{P}_{\min}(\mathsf{T})\) and \(\mathrm{E}_{\mathsf{opt}}(\mathsf{rew})\) have similar encodings: minimising policies require maximisation in the LP and flipping the constraint relation. Rewards can be added as an additive factor on the right-hand side. For practical purposes, the LP formulation can be tweaked. Footnote 5: [http://www.lcs.org/](http://www.lcs.org/) _The choice of bounds._ Any bounds that respect the unique solution will not change the answer. That is, any \(lb\) and \(ub\) with \(0\leq lb(s)\leq V(s)\leq ub(s)\) yield a sound encoding. While these additional bounds are superfluous, they may significantly prune the search space. We investigate trivial bounds, e.g., knowing that all probabilities are in \([0,1]\), bounds from a structural analysis as discussed by [8], and bounds induced by a warm start of the solver. For the latter, if we have obtained values \(V^{\prime}\leq V\), e.g., induced by a suboptimal policy, then \(V^{\prime}(s)\) is a lower bound on the value \(x_{s}\), which is particularly relevant as the LP minimizes. _Equality for unique actions._ Markov chains, i.e., MDPs where \(|\mathsf{A}|=1\), can be solved using linear equation systems. The LP encoding uses one-sided inequalities and the objective function to incorporate nondeterministic choices. We investigate adding constraints for all states with a unique action. \[x_{s}\leq\sum_{s^{\prime}\in S\setminus T}\delta(s,a)(s^{\prime})\cdot x_{s^{ \prime}}+\sum_{t\in T}\delta(s,a)(t)\quad\text{ for all }s\in S\setminus T\text{ with }\mathsf{A}(s)=\{a\}\] These additional constraints may trigger different optimizations in a solver, e.g., some solvers use Gaussian elimination for variable elimination. _A simpler objective._ The standard objective assures the solution \(\eta\) is optimal for _every_ state, whereas most invocations require only optimality in some specific states - typically the initial state \(s_{0}\) or the entry states of a strongly connected component. In that case, the objective may be simplified to optimize only the value for those states. This potentially allows for multiple optimal solutions: in terms of the MDP, it is no longer necessary to optimize the value for states that are not reached under the optimal policy. _Encoding the dual formulation._ Encoding a dual formulation to the LP is interesting for mixed-integer extensions to the LP, relevant for computing, e.g., policies in POMDPs [35], or when computing minimal counterexamples [45]. For LPs, due to the strong duality, the internal representation in the solvers we investigated is (almost) equivalent and all solvers support both solving the primal and the dual representation. We therefore do not further consider constructing them. ### How to solve LPs with existing solvers? We rely on the performance of state-of-the-art LP solvers. Many solvers have been developed and are still actively advanced, see [2] for a recent comparison on general benchmarks. We list the LP solvers that we consider for this work in Table 2. The columns summarize for each solver the type of license, whether it uses exact or floating point arithmetic, whether it supports multithreading, and what type of algorithms it implements. We also list whether the solver is available from the two model checkers used in this study6. Footnote 6: Support for Gurobi, GLPK, and Z3 was already available in Storm. Support for Glop was already available in mcsta. All other solver interfaces have been added. 
Methods. We briefly explain the available methods and refer to [11] for a thorough treatment. Broadly speaking, the LP solvers use one out of two families of methods. _Simplex_-based methods rely on highly efficient pivot operations to consider vertices of the simplex of feasible solutions. Simplex can be executed either in the _primal_ or _dual_ fashion, which changes the direction of progress made by the algorithm. Our LP formulation has more constraints than variables, which generally means that the dual version is preferable. _Interior methods_, often the subclass of _barrier methods_, do not need to follow the set of vertices. These methods may achieve polynomial-time worst-case behaviour. It is generally claimed that simplex has superior average-case performance but is highly sensitive to perturbations, while interior-point methods have a more robust performance. Warm starts. LP-based model checking can be done using two types of warm starts: either by providing a (feasible) basis point as done in [21], or by presenting bounds. The former, however, comes with various remarks and limitations, such as the requirement to disable preprocessing. We therefore used warm starts only by using bounds as discussed above. Multithreading. We generally see two types of parallelisation in LP solvers. Some solvers support a _portfolio_ approach that runs different algorithms and finishes with the first one that yields a result. Other solvers parallelise the interior-point and/or simplex methods themselves. Guarantees for numerical LP solvers. All LP solvers allow tweaking of various parameters, including _tolerances_ to manage whether a point is considered feasible or optimal, respectively. However, the experiments in Tab. 1 already indicate that these guarantees are _not_ absolute. A limited experiment indicated that reducing these tolerances towards zero did remove some incorrect results, but not all. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline solver & version & license & exact/fp & parallel & algorithms & mcsta & Storm \\ \hline CPLEX7 & 22.10 & academic & fp & yes & \(\mathrm{intr}+\mathrm{simplex}\) & yes & no \\ COPT8 & 5.0.5 & academic & fp & yes & \(\mathrm{intr}+\mathrm{simplex}\) & yes & no \\ Gurobi [24] & 9.5 & academic & fp & yes & \(\mathrm{intr}+\mathrm{simplex}\) & yes & yes \\ GLPK9 & 4.65 & GPL & fp & no & \(\mathrm{intr}+\mathrm{simplex}\) & no & yes \\ Glop10 & 9.4.1874 & Apache & fp & no & simplex only & yes & no \\ HiGHS11 & 1.2.2 & MIT & fp & yes & \(\mathrm{intr}+\mathrm{simplex}\) & yes & no \\ lp\_solve12 & 5.5.2.11 & LGPL & fp & no & simplex only & yes & no \\ Mosek13 & 10.0 & academic & fp & yes & \(\mathrm{intr}+\mathrm{simplex}\) & yes & no \\ SoPlex [23] & 6.0.1 & academic & both & no & simplex only & no & yes \\ Z3 [40] & 4.8.13 & MIT & exact & no & simplex only & no & yes \\ \hline \hline \end{tabular} \end{table} Table 2: Available LP solvers (“intr” = interior point) Exact solving. SoPlex supports exact computations, with a Boost library wrapping GMP rational numbers14, while exploiting floating point arithmetic [22]. While this is beneficial for performance in most settings, it seems to raise errors for the numerically challenging models. Z3 supports only exact arithmetic (also wrapping GMP numbers with their own interface). We observe that the price of converting large rational numbers may be substantial.
Furthermore, SMT solvers like Z3 use a simplex variation [17] tailored towards finding feasible points in an incremental fashion, optimized for problems with a nontrivial Boolean structure. In contrast, our LP formulation is easily feasible and is a pure conjunction. Footnote 14: [https://gmplib.org/](https://gmplib.org/)

## 4 Sound Policy Iteration

Starting with an initial policy, PI-based algorithms iteratively improve the policy based on the values obtained for the induced MC. The algorithm for solving the induced MC crucially affects the performance and accuracy of the overall approach. This section addresses the solvers available in Storm, possible precision issues, and how to utilize a warm start, while Section 5 discusses PI performance15. Footnote 15: [34] addresses performance in the context of PI for stochastic games.

Markov Chain Solvers. To solve the induced MC, Storm can employ all linear equation solvers listed in [31] and all variants of VI implemented in Storm. In our experiments, we consider (i) the generalised minimal residual method (GMRES) [44] implemented in GMM++16, (ii) VI [14] with a standard (relative) termination criterion, (iii) optimistic VI (OVI) [29], and (iv) the sparse LU decomposition implemented in Eigen17 using either floating-point or exact arithmetic (LU\({}^{\mathrm{X}}\)). LU and LU\({}^{\mathrm{X}}\) provide exact results (modulo floating-point errors in LU); OVI yields \(\varepsilon\)-precise results. VI and GMRES do not provide any guarantees. Footnote 16: [https://getfem.org/gmm.html](https://getfem.org/gmm.html) Footnote 17: [https://eigen.tuxfamily.org/index.php](https://eigen.tuxfamily.org/index.php)

Correctness of PI. The accuracy of PI is affected by the MC solver. Firstly, PI cannot be more precise than its underlying solver: the result of PI has the same precision as the result obtained for the final MC. Secondly, inaccuracies by the solver can hide policy improvements, which may lead to premature convergence with a suboptimal policy. The example below shows that PI can return arbitrarily wrong results--_even if the intermediate results are \(\varepsilon\)-precise_.

Consider the MDP in Fig. 2 with objective \(\mathrm{P}_{\max}(\{\,G\,\})\). There is only one nondeterministic choice, namely in state \(s_{0}\). The optimal policy is to pick b, obtaining a value of \(0.5\); picking a only yields \(0.1\). However, when starting from the initial policy \(\pi(s_{0})=\mathsf{a}\), an \(\varepsilon\)-precise MC solver may return \(0.1+\varepsilon\) for both \(s_{0}\) and \(s_{1}\) and \(\nicefrac{{\delta}}{{2}}+(1-\delta)\cdot 0.1\) for \(s_{2}\). This solution is indeed \(\varepsilon\)-precise. However, when evaluating which action to pick in \(s_{0}\), we can choose \(\delta\) such that \(\mathsf{a}\) seems to obtain a higher value. Concretely, we require \(\nicefrac{{\delta}}{{2}}+(1-\delta)\cdot 0.1<0.1+\varepsilon\). For every \(\varepsilon>0\), this can be achieved by setting \(\delta<2.5\cdot\varepsilon\). In this case, PI would terminate with the final policy inducing a severely suboptimal value.

Figure 2: Example MDP

If every Markov chain is solved precisely, PI is correct. Indeed, it suffices to be certain that one action is better than all others. This is the essence of modified policy iteration as described in [42, Chapters 6.5 and 7.2.6].
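A quick numeric check of the inequality in this example, under the stated assumption that the improvement step compares the \(\varepsilon\)-precise value reported behind action a (state \(s_{1}\)) with the one reported behind action b (state \(s_{2}\)):

```python
eps = 1e-6
delta = 2e-6                              # any delta < 2.5 * eps triggers the effect
behind_a = 0.1 + eps                      # epsilon-precise estimate behind action a (s1)
behind_b = delta / 2 + (1 - delta) * 0.1  # epsilon-precise estimate behind action b (s2)
assert behind_b < behind_a                # the greedy step keeps the suboptimal action a
print(behind_a, behind_b)                 # 0.100001 vs. ~0.1000008, yet the true values are 0.1 vs. 0.5
```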
Similarly, [34, Section 4.2] suggests using interval iteration when solving the system induced by the current policy and stopping when the under-approximation of one action is higher than the over-approximation of all other actions.

_Warm starts._ PI profits from a _good_ initial policy. If the initial policy is already optimal, PI terminates after a single iteration. We can inform our choice of the initial policy by providing estimates for all states as computed by VI. For every state, we choose the action that is optimal according to the estimate. This is a good way to leverage VI's ability to quickly deliver good estimates [29], while at the same time providing the exactness guarantees of PI.

## 5 Experimental Evaluation

To understand the practical performance of the different algorithms, we performed an extensive experimental evaluation. We used three sets of benchmarks: all applicable benchmark instances18 from the Quantitative Verification Benchmark Set (QVBS) [30] (the _qvbs_ set), a subset of hard QVBS instances (the _hard_ set), and numerically challenging models from a runtime monitoring application [32] (the _premise_ set, named for the corresponding prototype). We consider two probabilistic model checkers, Storm [31] and the Modest Toolset's [28] mcsta. We used Intel Xeon Platinum 8160 systems running 64-bit CentOS Linux 7.9, allocating 4 CPU cores and 32 GB RAM per experiment unless noted otherwise. Footnote 18: A _benchmark instance_ is a combination of model, parameter valuation, and objective.

We plot algorithm runtimes in seconds in _quantile plots_ as on the left and _scatter plots_ as on the right of Figure 3. The former compares multiple tools or configurations; for each, we sort the instances by runtime and plot the corresponding monotonically increasing line. Here, a point \((x,y)\) on the \(a\)-line means that the \(x\)-th fastest instance solved by \(a\) took \(y\) seconds. The latter compares two tools or configurations. Each point \((x,y)\) is for one benchmark instance: the x-axis tool took \(x\) while the y-axis tool took \(y\) seconds to solve it. The shape of points indicates the model type; the mapping from shapes to types is the same for all scatter plots and is only given explicitly in the first one in Figure 3. Additional plots to support the claims in this section are provided in the appendix. The depicted runtimes are for the respective algorithm and all necessary and/or stated preprocessing, but do not include the time for constructing the MDP state spaces (which is independent of the algorithms). mcsta reports all time measurements rounded to multiples of 0.1 s. We summarise timeouts, out-of-memory, errors, and incorrect results as "n/a". Our timeout is 30 minutes for the algorithm and 45 minutes for total runtime including MDP construction. We consider a result \(\bar{v}\) incorrect if \(|v-\bar{v}|>v\cdot 10^{-3}\) (i.e. relative error \(10^{-3}\)) whenever a reference result \(v\) is available. However, we do not flag a result as incorrect if \(v\) and \(\bar{v}\) are both below \(10^{-8}\) (relevant for the _premise_ set). Nevertheless, we configure the (unsound) convergence threshold for VI as \(10^{-6}\) relative; among the sound VI algorithms, we include OVI, with a (sound) stopping criterion of relative \(10^{-6}\) error. To only achieve the \(10^{-3}\) precision we actually test, OVI could thus be even faster than it appears in our plots.
We make this difference to account for the fact that many algorithms, including the LP solvers, do not have a sound error criterion. We mark exact algorithms/solvers that use rational arithmetic with a superscript X. The other configurations use floating points (fp).

### The QVBS Benchmarks

The _qvbs_ set comprises all QVBS benchmark instances with an MDP, Markov automaton (MA), or probabilistic timed automaton (PTA) model19 and a reachability or expected reward/time objective that is quantitative, i.e. not a query that yields a zero or one probability. We only consider instances where both Storm and mcsta can build the explicit representation of the MDP within 15 minutes. This yields 367 instances. We obtain reference results for 344 of them from either the QVBS database or by using one of Storm's exact methods. Reference results from different methods are always consistent. Footnote 19: MA and PTA are converted to MDP via embedding and digital clocks [36].

For LP, we have various solvers with various parameters each, cf. Section 3. For conciseness, we first compare all available LP solvers on the _qvbs_ set. For the best-performing solver, we then evaluate the benefit of different solver configurations. We do the same for the choice of Markov chain solution method in PI. We then focus on these single, reasonable, setups for LP and PI each.

Figure 3: Comparison of LP solver runtime on the _qvbs_ set

_LP solver comparison._ The left-hand plot of Figure 3 summarises the results of our comparison of the different LP solvers. Subscripts \({}_{\mathsf{s}}\) and \({}_{\mathsf{m}}\) indicate whether the solver is embedded in \(\mathsf{Storm}\) or \(\mathsf{mcsta}\), respectively. We apply no optimisations or reductions to the MDPs except for the precomputation of probability-0 states (and in \(\mathsf{Storm}\) also of probability-1 states), and use the default settings for all solvers, with the trivial variable bounds \([0,1]\) and \([0,\infty)\) for probabilities and expected rewards, respectively. We include VI as a baseline. In Table 3, we summarise the results. In terms of **performance** and scalability, \(\mathsf{Gurobi}\) solves the highest number of benchmarks in any given time budget, closely followed by \(\mathsf{COPT}\). \(\mathsf{CPLEX}\), \(\mathsf{HiGHS}\), and \(\mathsf{Mosek}\) make up a middle-class group. While the exact solver Z3 is very slow, \(\mathsf{SoPlex}\)'s exact mode actually competes with some fp solvers. The quantile plots do not tell the whole story. On the right of Figure 3, we compare \(\mathsf{COPT}\) and \(\mathsf{Gurobi}\): each of them has a large number of instances on which it is (much) better. In terms of **reliability** of results, the exact solvers as expected produce no incorrect results; so does the slowest fp solver, \(\mathsf{lp\_solve}\). \(\mathsf{COPT}\), \(\mathsf{CPLEX}\), \(\mathsf{HiGHS}\), \(\mathsf{Mosek}\), and the fp version of \(\mathsf{SoPlex}\) perform badly in this metric, producing more errors than VI. Interestingly, these are mostly the faster solvers, the exception being \(\mathsf{Gurobi}\). Overall, \(\mathsf{Gurobi}\) achieves the highest performance at decent reliability; in the remainder of this section, we thus use \(\mathsf{Gurobi}_{\mathsf{s}}\) whenever we apply non-exact LP.
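The incorrect-result counts above (and in Table 3 below) use the error criterion from the experimental setup: a result is flagged when its relative error with respect to the reference exceeds \(10^{-3}\), unless both values are below \(10^{-8}\). A minimal sketch of that check:

```python
def is_incorrect(v_ref, v_hat, rel_tol=1e-3, tiny=1e-8):
    """Flag v_hat as incorrect relative to reference v_ref (both assumed non-negative)."""
    if v_ref < tiny and v_hat < tiny:
        return False                      # both essentially zero: not flagged (premise set)
    return abs(v_ref - v_hat) > v_ref * rel_tol
```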
\begin{table} \begin{tabular}{l r r r} \hline \hline solver & correct & incorr. & no result \\ \hline \(\mathsf{VI}_{\mathsf{s}}\) & 359 & 8 & 0 \\ \(\mathsf{VI}_{\mathsf{m}}\) & 357 & 8 & 2 \\ \(\mathsf{COPT}_{\mathsf{m}}\) & 312 & 12 & 43 \\ \(\mathsf{CPLEX}_{\mathsf{m}}\) & 291 & 10 & 66 \\ \(\mathsf{Glop}_{\mathsf{m}}\) & 257 & 4 & 106 \\ \(\mathsf{GLPK}_{\mathsf{s}}\) & 199 & 5 & 163 \\ \(\mathsf{Gurobi}_{\mathsf{s}}\) & 331 & 4 & 32 \\ \(\mathsf{Gurobi}_{\mathsf{m}}\) & 323 & 4 & 40 \\ \(\mathsf{HiGHS}_{\mathsf{m}}\) & 288 & 10 & 69 \\ \(\mathsf{lp\_solve}_{\mathsf{m}}\) & 209 & 0 & 158 \\ \(\mathsf{Mosek}_{\mathsf{m}}\) & 287 & 15 & 65 \\ \(\mathsf{SoPlex}_{\mathsf{s}}\) & 226 & 9 & 132 \\ \(\mathsf{SoPlex}_{\mathsf{s}}^{\mathsf{X}}\) & 218 & 0 & 149 \\ \(\mathsf{Z3}_{\mathsf{s}}^{\mathsf{X}}\) & 148 & 0 & 219 \\ \hline \hline \end{tabular} \end{table} Table 3: LP summary

_LP solver tweaking._ \(\mathsf{Gurobi}\) can be configured to use an "_auto_" portfolio approach, potentially running multiple algorithms concurrently on multiple threads, a primal or a dual simplex algorithm, or a barrier method algorithm. We compared each option with 4 threads and found no significant performance difference. Similarly, running the _auto_ method with 1, 4, and 16 threads (only here, we allocate 16 threads per experiment) also failed to bring out noticeable performance differences. Using more threads results in a few more out-of-memory errors, though. We thus fix \(\mathsf{Gurobi}\) on _auto_ with 4 threads.

Figure 4: Performance impact of LP problem formulation variants (using \(\mathsf{Gurobi}_{\mathsf{s}}\))

Figure 4 shows the performance impact of supplying Gurobi with more precise bounds on the variables for expected reward objectives using methods from [8, 39] ("bounds" instead of "simple"), of optimising only for the initial state ("init") instead of the sum over all states ("all"), and of using equality ("eq") instead of less-/greater-than-or-equal ("ineq") for unique-action states. More precise bounds yield a very small improvement at essentially no cost. Optimising for the initial state only results in slightly better overall performance (in the "pocket" in the quantile plot around \(x=375\) that is also clearly visible in the scatter plot). However, it also results in 2 more incorrect results in the _qvbs_ set. Using equality for unique actions noticeably decreases performance and increases the incorrect result count by 9 instances. For all experiments that follow, we thus use the more precise bounds, but do not enable the other two optimisations.

_PI methods comparison._ The main choice in PI is which algorithm to use to solve the induced Markov chains. On the right, we show the performance of the different algorithms available in Storm (cf. Section 4). \(\mathrm{LU}^{\mathrm{X}}\) yields a fully exact PI. This interestingly performs better than the fp version, potentially because fp errors induce spurious policy changes. The same effect likely also hinders the use of OVI, whereas VI leads to good performance. Nevertheless, GMRES is best overall, and is thus our choice for all following experiments with non-exact PI. VI and GMRES yield 6 and 4 incorrect results, respectively. OVI and the exact methods are always correct on this benchmark set.
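For reference, the following is a minimal sketch (not Storm's implementation) of policy iteration for \(\mathrm{P}_{\max}\) reachability with an exact dense linear solve for the induced Markov chain, plus the optional warm start from VI estimates discussed in Section 4. Probability-0 states are assumed to have been removed beforehand, as the tools do during precomputation.

```python
import numpy as np

def solve_induced_mc(P, policy, targets, n):
    """Reachability values of the Markov chain induced by `policy`, via (I - A) x = b."""
    A, b = np.zeros((n, n)), np.zeros(n)
    for s in range(n):
        if s in targets:
            b[s] = 1.0                                   # value 1 inside the target set
            continue
        for s2, p in P[s][policy[s]]:
            if s2 in targets:
                b[s] += p
            else:
                A[s, s2] += p
    return np.linalg.solve(np.eye(n) - A, b)

def vi_estimates(P, targets, n, sweeps=1000):
    """Plain (unsound) Gauss-Seidel VI; a few sweeps already give a useful warm start."""
    x = np.zeros(n)
    for t in targets:
        x[t] = 1.0
    for _ in range(sweeps):
        for s in range(n):
            if s not in targets:
                x[s] = max(sum(p * x[s2] for s2, p in act) for act in P[s])
    return x

def policy_iteration(P, targets, n, warm_start=True):
    if warm_start:
        est = vi_estimates(P, targets, n)
        policy = [0 if s in targets else
                  int(np.argmax([sum(p * est[s2] for s2, p in act) for act in P[s]]))
                  for s in range(n)]
    else:
        policy = [0] * n
    while True:
        x = solve_induced_mc(P, policy, targets, n)
        improved = False
        for s in range(n):
            if s in targets:
                continue
            q = [sum(p * x[s2] for s2, p in act) for act in P[s]]
            best = int(np.argmax(q))
            if q[best] > q[policy[s]] + 1e-12:           # strict improvement up to a tiny tolerance
                policy[s], improved = best, True
        if not improved:
            return x, policy
```

Replacing the exact solve by an \(\varepsilon\)-precise iterative method is exactly what makes the premature-convergence effect from Section 4 possible.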
Best MDP algorithms for QVBS. We now compare all MDP model checking algorithms on the _qvbs_ set: with floating-point numbers, LP and PI configured as described above, plus unsound VI, sound OVI, and the warm-start variants of PI and LP denoted "VI2PI" and "VI2LP", respectively. Exact results are provided by rational search (RS, essentially an exact version of VI) [38], PI with exact LU, and LP with exact solvers (SoPlex and Z3). All are implemented in Storm. In a first experiment, we evaluated the impact of using the topological approach and of collapsing MECs (cf. Section 2.4). The results, for which we omit plots, are that the topological approach noticeably improves performance and scalability for _all_ algorithms, and we therefore always use it from now on. Collapsing MECs is necessary to guarantee termination of OVI, while for the other algorithms it is a potential optimisation; however, we found it to have only a minimal positive performance impact overall. Since it is required by OVI and does not reduce performance, we also always use it from now on.

Figure 5: Comparison of MDP model checking algorithms on the _qvbs_ set

Figure 5 shows the complete comparison of all the methods on the _qvbs_ set, for fp algorithms on the left and exact solutions on the right. Among the fp algorithms, OVI is clearly the fastest and most scalable. VI is somewhat faster but incurs several incorrect results that diminish its appearance in the quantile plot. OVI is additionally special among these algorithms in that it is sound, i.e. provides guaranteed \(\epsilon\)-correct results--though up to fp rounding errors, which can be eliminated following the approach of [27]. On the exact side, PI with an inexact-VI warm start works best. The scatter plot in Fig. 6 shows the performance impact of computing an exact instead of an approximate solution.

Figure 6: Additional direct performance comparisons

### The Hard QVBS Benchmarks

The QVBS contains many models built for tools that use VI as the default algorithm. The other algorithms may actually be important to solve key challenging instances where VI/OVI perform badly. This contribution could be hidden in the sea of instances trivial for VI. We thus zoom in on a selection of QVBS instances that appear "hard" for VI: those where VI takes longer than the prior MDP state space construction phase in both Storm and mcsta, and additionally both phases together take at least \(1\,\mathrm{s}\). These are \(18\) of the previously considered \(367\) instances.

Figure 7: Comparison of MDP model checking algorithms on the _hard_ subset

In Figure 7, we show the behaviour of all the algorithms on this _hard_ subset. OVI again works better than VI due to the incorrect results that VI returns. We see that the performance and scalability gap between the algorithms has narrowed; although OVI still "wins", LP in particular is much closer than on the full _qvbs_ set. We also investigated the LP outcomes with solvers other than Gurobi: even on this set, Gurobi and COPT remain the fastest and most scalable solvers. With mcsta, in the basic configuration, they solve \(16\) and \(17\) instances, the slowest taking \(835\,\mathrm{s}\) and \(1334\,\mathrm{s}\), respectively; with topo, the numbers become \(17\) and \(15\) instances with the slowest at \(1373\,\mathrm{s}\) and \(1590\,\mathrm{s}\).
We show the detailed comparison of OVI and LP, noting that there are a few instances where LP is much faster (Fig. 6(c)), and repeat the comparison between the best fp and exact algorithms (Fig. 6(b)).

### The Runtime Monitoring Benchmarks

While the QVBS is intentionally diverse, our third set of benchmarks is intentionally focused: We study \(200\) MDPs from a runtime monitoring study [32]. The original problem is to compute the normalised risk of continuing to operate the system being monitored subject to stochastic noise, unobservable and uncontrollable nondeterminism, and partial state observations. This is a query for a conditional probability. It is answered via probabilistic model checking by unrolling an MDP model along an observed history trace of length \(n\in\{\,50,\ldots,1000\,\}\) following the approach of Baier et al. [7]. The MDPs contain many transitions back to the initial state, ultimately resulting in numerically challenging instances (compare the structure of \(M_{n}\) in Section 2.3). We were able to compute a reference result for all instances.

Figure 8: Comparison of MDP model checking algorithms on the _premise_ set

Figure 8 compares the different MDP model checking algorithms on this set. In line with the observations in [32], we see very different behaviour compared to the QVBS. Among the fp solutions on the left, LP with Gurobi terminates very quickly (under \(1\,\mathrm{s}\)), and either produces a correct (\(155\) instances) or a completely incorrect result (mostly \(0\), on \(45\) instances). VI behaves similarly, but is slower. OVI, in contrast, delivers no incorrect result, but instead fails to terminate on all but 116 instances. In the exact setting, warm starts using VI inherit its relative slowness and consequently do not pay off. Exact PI outperforms both exact LP solvers. In the case of exact SoPlex, out of the 112 instances it does not manage to solve, 98 are errors likely related to a confirmed bug in the current version. The _premise_ set highlights that the best MDP model checking algorithm depends on the application. Here, in the fp case, LP appears best but produces unreliable (incorrect) results; the seemingly much worse OVI at least does not do so. Given the numeric challenge, an exact method should be chosen, and we show that these actually perform well here.

## 6 Conclusion

We thoroughly investigated the state of the art in MDP model checking, showing that there is no single best algorithm for this task. For benchmarks which are not numerically challenging, OVI is a sensible default, closely followed by PI and LP with a warm start--although using the latter two means losing soundness, as confirmed by a number of incorrect results in our experiments. For numerically hard benchmarks, PI and LP as well as computing exact solutions are more attractive, and clearly preferable in combination. Overall, although LP has the superior (polynomial) theoretical complexity, in our practical evaluation it almost always performs worse than the other (exponential) approaches. This is even though we use modern commercial solvers and tune both the LP encoding of the problem as well as the solvers' parameters. While we _observed_ the behaviour of the different algorithms and have some intuition into what makes the _premise_ set hard, an entire research question of its own is to identify and quantify the structural properties that make a model hard. Our evaluation also raises the question of how prevalent MDPs that challenge VI are in practice. Aside from the _premise_ benchmarks, we were unable to find further sets of MDPs that are hard for VI.
Notably, several stochastic games (SGs) difficult for VI were found in [34]; the authors noted that using PI for the SGs was better than applying VI to the SGs. However, when we extracted the induced MDPs, we found them all easy for VI. Similarly, [3] used a random generation of SGs of at most 10,000 states, many of which were challenging for the SG algorithms. Yet the same random generation modified to produce MDPs delivered only MDPs easily solved in seconds, even with drastically increased numbers of states. In contrast, Alagoz et al. [1] report that their random generation returned models where LP beat PI. However, their setting is discounted, and their description of the random generation was too superficial for us to be able to replicate it. We note that, in several of our scatter plots, the MA instances from the QVBS (where we check the embedded MDP) appeared more challenging overall than the MDPs. We thus conclude this paper with a call for challenging MDP benchmarks--as separate benchmark sets of unique characteristics like _premise_, or for inclusion in the QVBS.
2306.00323
Thought Cloning: Learning to Think while Acting by Imitating Human Thinking
Language is often considered a key aspect of human thinking, providing us with exceptional abilities to generalize, explore, plan, replan, and adapt to new situations. However, Reinforcement Learning (RL) agents are far from human-level performance in any of these abilities. We hypothesize one reason for such cognitive deficiencies is that they lack the benefits of thinking in language and that we can improve AI agents by training them to think like humans do. We introduce a novel Imitation Learning framework, Thought Cloning, where the idea is to not just clone the behaviors of human demonstrators, but also the thoughts humans have as they perform these behaviors. While we expect Thought Cloning to truly shine at scale on internet-sized datasets of humans thinking out loud while acting (e.g. online videos with transcripts), here we conduct experiments in a domain where the thinking and action data are synthetically generated. Results reveal that Thought Cloning learns much faster than Behavioral Cloning and its performance advantage grows the further out of distribution test tasks are, highlighting its ability to better handle novel situations. Thought Cloning also provides important benefits for AI Safety and Interpretability, and makes it easier to debug and improve AI. Because we can observe the agent's thoughts, we can (1) more easily diagnose why things are going wrong, making it easier to fix the problem, (2) steer the agent by correcting its thinking, or (3) prevent it from doing unsafe things it plans to do. Overall, by training agents how to think as well as behave, Thought Cloning creates safer, more powerful agents.
Shengran Hu, Jeff Clune
2023-06-01T03:43:41Z
http://arxiv.org/abs/2306.00323v3
# Thought Cloning: Learning to Think while Acting ###### Abstract Language is often considered a key aspect of human thinking, providing us with exceptional abilities to generalize, explore, plan, replan, and adapt to new situations. However, Reinforcement Learning (RL) agents are far from human-level performance in any of these abilities. We hypothesize one reason for such cognitive deficiencies is that they lack the benefits of thinking in language and that we can improve AI agents by training them to _think like humans do_. We introduce a novel Imitation Learning framework, Thought Cloning, where the idea is to not just clone the behaviors of human demonstrators, _but also the thoughts humans have as they perform these behaviors_. While we expect Thought Cloning to truly shine at scale on internet-sized datasets of humans thinking out loud while acting (e.g. online videos with transcripts), here we conduct experiments in a domain where the thinking and action data are synthetically generated. Results reveal that Thought Cloning learns much faster than Behavioral Cloning and its performance advantage grows the further out of distribution test tasks are, highlighting its ability to better handle novel situations. Thought Cloning also provides important benefits for AI Safety and Interpretability, and makes it easier to debug and improve AI. Because we can observe the agent's thoughts, we can (1) more easily diagnose why things are going wrong, making it easier to fix the problem, (2) steer the agent by correcting its thinking, or (3) prevent it from doing unsafe things it plans to do. Overall, by training agents _how to think_ as well as behave, Thought Cloning creates safer, more powerful agents.1 Footnote 1: The code and dataset are available in [https://github.com/ShengranHu/Thought-Cloning](https://github.com/ShengranHu/Thought-Cloning). ## 1 Introduction Language may be the key to what separates humans from all other animals, endowing us with an amazing level of general intelligence [1; 2; 3; 4]. Crucially, the benefits of language are not confined to improving our ability to communicate with others: language also helps us _think_ better [2; 3; 4]. We first describe the benefits of agents that can _understand_ language (a common topic in AI) before moving to the benefits of agents that _think_ in language (a topic that has received far less attention). There are many benefits that arise if our agents can understand language. Doing so is crucial for agents to generalize to new tasks we want them to perform. This is because it is drastically more sample efficient if one can tell an agent what the task is, rather than requiring the agent to figure out the task through trial and error [5; 6]. Moreover, agents that can understand language allow us to define new tasks at test time without having to anticipate every wish we might eventually have for our trained agents [7]. That is in contrast to conventional hand-designed task descriptions, which can be vast, but still place constraints on what we can ask an agent to perform [8]. While the benefits of agents that can understand language are commonly discussed, there has been relatively little discussion in AI, especially in Reinforcement Learning (RL), regarding the many benefits of agents that _think in language_. Thinking in language helps humans generalize, extrapolate, adapt to new situations, combine old knowledge in new ways, explore, plan, replan when necessary or beneficial, and the list goes on [2; 3; 4]. 
Despite these benefits, AI agents rarely, if ever, _think_, at least not in human language. While neural networks have internal vector activations that can be considered thinking, many hypothesize that there are specific benefits to thinking in the discrete, symbolic form of language (e.g. combining ideas in an exponential number of ways) [6; 9; 10], meaning that agents that think in language might learn faster, perform better, and generalize better than non-lingual agents. In addition to agents being more capable, there are major benefits regarding AI Safety and Interpretability that arise when agents think in our language. If one can watch an agent think during training, one can recognize deficiencies in skills or values that can be improved, or one could decide the agent is not ready to be deployed. During testing, one can constantly scan the thoughts of the agent and intervene when the agent plans to do something undesirable. For example, if an agent thinks "My goal is to take my passenger to the store as fast as possible so I will run through this red light without stopping" one could intervene to stop that behavior ahead of time. Furthermore, watching agents think enhances the steerability of agents. If an agent is confused when solving challenging tasks, one can inject thoughts into the agent to help it solve the task in a desired way. A final major benefit of agents that think in human language is that it makes it easier to train more capable, safer AI agents. One can spot _why_ things are not working, instead of just seeing that they are not working, and that provides ideas for how to debug and/or improve AI training. For all these reasons, adding the ability of AI agents to think in language could produce many significant advantages, and we suggest that the most effective way to achieve this goal is by _imitating human thinking_. Humans do not acquire thinking skills in isolation; instead, they are learned in part through demonstrations and feedback provided by teachers [2; 11; 12; 13]. As such, a promising method is to have agents learn from demonstrations where humans think out loud while acting. This approach is distinct from existing works that leverage pre-trained Large Language Models (LLMs) for planning [14; 15], because such LLMs are not trained on data where humans think out loud _while_ _acting_. Thought data, such as YouTube videos and transcripts [16; 17], contains millions of hours of people talking out loud while performing tasks, revealing the thinking behind their actions, planning, decisions, and replanning, such as when they play video games [17]. This thought data is greatly valuable and widely available (Section 2), but has not yet been extensively explored, and this work hopes to encourage further research into the utilization of thought data to teach thinking skills to agents. Provided we can solve the real, significant challenges of AI Safety and existential risk [18; 19; 20; 21; 22], there are tremendous gains to be had by creating more powerful AI or even AGI. In this paper, we propose a novel Imitation Learning framework, Thought Cloning, where agents not only learn to act from human demonstrations, as in Behavioral Cloning [23], but also _learn to think_ from demonstrations where humans think out loud while acting. Although we expect Thought Cloning to truly shine when trained on vast online datasets of synchronized human thoughts and actions, this paper validates the concept with synthetic thought data in a challenging domain, BabyAI [24]. 
Our experimental results illustrate that Thought Cloning outperforms Behavioral Cloning, even when Behavioral Cloning agents have the ability to think (in latent vectors), but have to learn that skill without the supervision of thinking provided by Thought Cloning. We also demonstrate that Thought Cloning generalizes better than Behavioral Cloning in out-of-distribution tasks in both zero-shot and fine-tuning settings. Finally, we provide empirical evidence for the previously discussed advantages of Thought Cloning in terms of Safety and Interpretability, where unsafe behavior can be near perfectly stopped before execution. All told, the results are promising and offer a glimpse of the enormous potential of Thought Cloning to not only make AI smarter, but also safer and more interpretable. ## 2 Proposed Method Conventional Imitation Learning methods [25; 26], such as Behavioral Cloning [23], strive to construct a policy that accurately replicates the distribution of behavior in a given dataset of demonstrations. However, our proposed framework, Thought Cloning, diverges from this approach by aiming to teach agents how to also _think_ while acting, utilizing a synchronized dataset of human thinking. The thought dataset, denoted as \(\mathcal{D}=\{D_{i}\}_{i=1}^{N}\), comprises a series of trajectories, \(D_{i}=(m,\{(o_{t},th_{t},a_{t})\}_{t=1}^{T})\). Each trajectory encompasses a mission, \(m\), defined in natural language, along with an observation \(o_{t}\), an action \(a_{t}\), and a corresponding thought \(th_{t}\) at each timestep, \(t\). Such datasets are widely available online. For example, by inferring action labels from YouTube videos with VPT [17] and then retrieving the corresponding transcripts, we can obtain a thought dataset that contains both human thinking and action [17; 16]. In such a dataset for Minecraft, a thought like "I need to gather wood to build a shelter before nightfall" might correspond to the player moving towards a tree and collecting wood. To validate Thought Cloning, we construct a synthetic thought dataset to simulate having internet-scale datasets (see Section 3.1). In the Thought Cloning training framework, agents learn to produce natural language thoughts at each timestep and subsequently condition their actions based on these generated thoughts. This learning process gives rise to a bi-level architecture (Fig. 1). The architecture comprises an Upper-level Component responsible for thought generation, and a Lower-level Component tasked with executing actions based on the thoughts generated by the Upper-level Component. While different choices of what to condition the Upper-level and Lower-level Components on are possible, in this work, for a particular trajectory of length \(T\) in the thought dataset we minimize: \[\min_{\theta_{u},\theta_{l}}\sum_{t=1}^{T}-\alpha\log\pi_{\theta_{u}}(th_{t}|m,\{o_{\tau}\}_{\tau=1}^{t},\{th_{\tau}\}_{\tau=1}^{t-1})-\log\pi_{\theta_{l}}(a_{t}|m,\{o_{\tau}\}_{\tau=1}^{t},th_{t}) \tag{1}\] Here, \(\theta_{u}\) and \(\theta_{l}\) represent the weights for the Upper-level and Lower-level Components; \(\alpha\) represents the coefficient for the Thought Cloning loss; \(th\), \(o\), \(a\), and \(m\) denote thought, observation, action, and mission, as previously described. 
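As a concrete illustration of Eq. 1, the following PyTorch sketch computes the two cross-entropy terms for a single trajectory, assuming the Upper- and Lower-level Components have already produced per-timestep logits over thought tokens and actions. Names and tensor shapes are illustrative, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def thought_cloning_loss(thought_logits, action_logits, thought_tokens, actions, alpha=2.0):
    """
    thought_logits: (T, L, V) Upper-level logits over the thought vocabulary, L tokens per step
    action_logits:  (T, A)    Lower-level logits over the action space
    thought_tokens: (T, L)    ground-truth thought tokens from the demonstration
    actions:        (T,)      ground-truth actions from the demonstration
    """
    # Upper-level term: -log pi_u(th_t | m, o_{1..t}, th_{1..t-1}), averaged over thought tokens.
    upper = F.cross_entropy(thought_logits.reshape(-1, thought_logits.size(-1)),
                            thought_tokens.reshape(-1))
    # Lower-level term: -log pi_l(a_t | m, o_{1..t}, th_t); the conditioning happens inside the model.
    lower = F.cross_entropy(action_logits, actions)
    return alpha * upper + lower   # alpha = 2 in the experiments reported below
```

During training, the thought fed to the Lower-level Component can be the ground-truth one (teacher forcing) or the one the Upper-level Component generated, a choice discussed in the experiments below.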
For more complex or large-scale scenarios, the Upper-level Component can be implemented with pre-trained Vision-Language Models (VLMs) either zero-shot or fine-tuned [27], while the Lower-level Component can be trained from scratch or adapted from existing language-conditioned controllers in the target domain [6; 14]. In this paper, we base both components on the BabyAI 1.1 model architecture [28], which utilizes a memory-augmented architecture (an LSTM [29]) to address the partial observability challenge. The model also employs FiLM [30] for modality fusion, effectively combining visual and text input. The detailed architecture adopted in this paper can be found in Supplementary Material A. While all models in this paper are trained from scratch, we anticipate that the utilization of pre-trained models in complex domains will be beneficial. Figure 1: Overall framework for Thought Cloning (TC). The TC agent has two components: the Upper-Level and Lower-level Components. At each timestep, the TC agent receives an observation, a mission, and a history of thoughts as inputs. The Upper-Level Component generates thoughts, and the Lower-Level Component generates actions conditioned on these thoughts. Generated thoughts and actions are compared to the ground truth from the demonstration dataset to calculate the loss. ## 3 Experimental Results ### Domain and Synthetic Thought Data This paper employs BabyAI [24], a simulated partially observable 2D gridworld domain. We focus on the most challenging environment, _BossLevel_, in BabyAI. An overview of the domain is shown in Fig. 2 (left). Each BabyAI environment consists of a randomly generated room layout, item configuration, and a mission described in natural language, sampled on an environment distribution. Colored items (_balls_, _keys_, _box_, _doors_) and the initial position of the agent are randomly distributed across a \(27\times 27\) grid world containing nine \(3\times 3\) rooms. Missions comprise four possible tasks (_GoTo_, _PickUp_, _OpenDoor_, _PutNextTo_), connected by _then/after_ and _and_ (with or without ordering constraints). _GoTo_ and _PickUp_ require agents to go to or pick up an object; _OpenDoor_ requires agents to open or unlock a door; _PutNextTo_ requires the agent to pick up object A, find object B, and drop A next to B. The mission may implicitly require the agent to open or unlock doors to find the target objects. Relative directional instruction in the mission, e.g., _on your right_, is based on the agent's initial position. An environment is solved when all tasks in the mission are completed. The agent's observation consists of the \(7\times 7\) grid cells in front of the agent, except that the agent cannot see through walls (Fig. 2 yellow square). This work features the state-based observations provided by BabyAI [24]. Each grid cell in the \(7\times 7\) observation is represented by three integers: [the item ID, the color ID, and a status code], resulting in a \(7\times 7\times 3\) observation matrix. The status code is 1 for closed doors and 2 for locked doors, with 0 for open doors and other items. Occluded grid cells are assigned an item ID of 0. The agent's action space includes [left, right, forward, pickup, drop, toggle door (unlock, open, close)]. The key challenges in BabyAI revolve around partial observability, hard-to-explore mazes, complex missions in natural language, and long-horizon planning. 
The \(7\times 7\) observation field is limited compared to the \(27\times 27\) maze, and the agent cannot see through walls and closed doors. The maze containing multiple closed rooms is difficult to navigate and explore as the agent needs to find target items across multiple closed (even locked) rooms. The missions are challenging because (1) they are described in natural language and (2) they can consist of multiple tasks, each requiring complicated navigation and actions. Combining all these factors results in a long horizon, with hundreds or even thousands of steps needed to solve a single environment. One significant advantage of BabyAI is that it provides an Oracle Solver (named BOT in [24]) capable of generating step-by-step solutions for any given environment. This is achieved through hand-coded rules and an internal stack machine to generate plans for solving environments. In our work, we translate the Oracle Solver's internal states into natural language thoughts with pre-defined rules. For example, if the inner logic is to open a red door to explore the room, the translated thought will read "open red door to explore". This translation process is combined with the generated demonstrations to synthesize the thought dataset with 1 million trajectories. To make the dataset more realistic, noise is added, with a 1% chance of adding a random noisy segment at each timestep, consisting of a random thought and several random actions, with a random length sampled from 1-6. A trajectory with example noise is shown in Supplementary Material B.

Figure 2: **Left**: A BabyAI [24] environment example. The environment contains various colored items (_ball_, _key_, _box_, _door_). The agent can pick up, drop, and move objects or open and close doors, while locked doors can only be unlocked with color-matched keys. The agent can observe the \(7\times 7\) grid cells in front of it, which can be blocked by walls and closed doors. **Right**: An example from a trained Thought Cloning agent planning and replanning. The mission requires reaching the purple box (highlighted), but a purple ball blocks the way. The agent's thoughts and actions show replanning when encountering the obstacle, removing it, and resuming the previous goal.

### Experiment Setup To verify the effectiveness of _learning to think_, we compare our Thought Cloning (TC) approach to the conventional Imitation Learning algorithm, Behavioral Cloning (BC). BC shares most of its architecture with the Lower-level Component of TC (Fig. 1), and because it is trained only on action loss, it does not encode thought like the lower-level component of TC. Additionally, since BC has fewer parameters than TC, we introduce an ablation variant called TC w/o Imitating Thought that is trained without the Thought Cloning loss to demonstrate that TC's superiority is not solely due to its larger number of parameters. The architecture of the variant is mostly identical to the TC architecture, except for a minor architectural difference where the latent vector from the upper level is directly input to the lower level as thought. This adjustment is required when training occurs without thought supervision because the discrete sampling to produce a sequence of thoughts (words) is not differentiable. Our training setup is based on BabyAI [24; 28], with BC agents identical to the Imitation Learning baseline from it. 
The training iterates for 8 epochs on the 1 million episode dataset, corresponding to a total of 160 training steps and \(7\times 10^{8}\) training frames. The Thought Cloning loss parameter \(\alpha\) (Eq. 1) is set to 2. During training, we employ teacher-forcing [31] when decoding thoughts: it conditions the Lower-level Component on the ground truth thoughts from the dataset. The teacher-forcing ratio linearly decreases from 100% to 0% from the 10th training step to the end. Producing all the main results in the paper took about ten A40 GPUs for one week. More details on training can be found in Supplementary Material A. In our experiments, the performance of agents is evaluated based on their success rate in held-out test environments. Success for an environment is defined as the completion of all specified tasks in the mission. By controlling random seeds, all test environments are unseen during the training process. All experiment results from Sections 3.3, 3.4 and 3.5 are calculated from five independent runs. The success rate results presented in Section 3.3 are obtained by testing agents on a set of 512 sampled environments. In Section 3.4, agents are tested in a larger set of 1,024 test environments. During the testing phase, the TC agent receives the same observations as the BC agent, i.e. it has no extra information. ### Imitation Learning In this section, we show the main performance results of training TC, BC, and TC w/o Imitating Thought. The results illustrate that TC learns faster than BC, where BC requires orders of magnitude more time to achieve a performance similar to TC's early-stage results, and TC ultimately outperforms BC at the end of training (Fig. 3). The outperformance of TC compared to BC at 25%, 50%, 75%, and 100% of the way through training is statistically significant, as confirmed by the Mann-Whitney U test, with \(p=[0.012,0.008,0.021,0.008]<0.05\). These results support our hypothesis that natural language can help the agent learn to explore and plan. Another comparison is between TC and an ablation variant TC w/o Imitating Thought that shares the same architecture with TC, but without the Thought Cloning loss in training. The results show that TC also substantially outperforms TC w/o Imitating Thought (Fig. 3). As in the previous comparison, the results are statistically significant (\(p=[0.008,0.012,0.008,0.008]<0.05\)). The results reveal that TC's superior performance is not solely due to a larger number of parameters than BC, and also support our argument that learning from human thought boosts an agent's ability to think. An example of a TC agent planning and replanning is shown in Fig. 2 (right). After opening the blue door, the agent discovers the target (a purple box) within its observation and thinks about going to it to complete the task. However, the agent realizes that a purple ball is blocking its path. A smart replan emerges here, with the agent inserting a new plan to remove the ball. The agent achieves this subgoal by picking up the ball in its way, finding an empty space, and then dropping it. After completing this new, necessary, intermediate task, the agent resumes its original mission to go to the purple box and successfully solves the environment. From this example, we can see that by thinking like humans in natural language, the agent demonstrates successful planning and replanning abilities. We also see the interpretability benefits, as it is easy to follow along and understand _why_ the agent executes certain actions. 
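Two of the training details above are easy to make concrete. The sketches below show assumed forms (not the authors' code) of the 1%-probability noise segments added to the synthetic dataset and of the teacher-forcing ratio that decays linearly from 100% to 0% between the 10th and the final (160th) training step; the observations attached to noise steps are a simplification here.

```python
import random

def inject_noise(trajectory, random_thoughts, action_space, p=0.01):
    """trajectory: list of (obs, thought, action) tuples; returns a noisier copy."""
    noisy = []
    for obs, thought, action in trajectory:
        if random.random() < p:                      # 1% chance of a noisy segment per timestep
            noisy_thought = random.choice(random_thoughts)
            for _ in range(random.randint(1, 6)):    # segment length sampled from 1-6
                noisy.append((obs, noisy_thought, random.choice(action_space)))
        noisy.append((obs, thought, action))
    return noisy

def teacher_forcing_ratio(step, start=10, end=160):
    """1.0 up to `start`, then linear decay to 0.0 at `end`."""
    if step <= start:
        return 1.0
    return max(0.0, 1.0 - (step - start) / (end - start))
```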
### Generalization to Out-of-Distribution Environments This section compares the generalization abilities of the TC and BC agents by testing them on environments that are increasingly out of distribution. We define the distribution of environments with two difficulty dimensions: Behavioral Difficulty and Cognitive Difficulty. Behavioral Difficulty is based on the length of the action sequence required to solve the environment (provided by the Oracle Solver, see Section 3.1). The simplest environments require approximately 20 steps, while the most challenging environments require more than 500 steps. Cognitive Difficulty reflects the complexity of the mission, with more difficult environments requiring stronger planning abilities to complete complex tasks. The calculation formula for Cognitive Difficulty is adapted from the maxStep parameter calculation in BabyAI environments [24] and is given by (\(\#\) of {_GoTo_, _PickUp_, _OpenDoor_} \(+\,2\times\#\) of {_PutNextTo_} \(+\,\#\) of ordering constraints). The PutNextTo task is assigned a higher weight because it involves a combination of picking up, navigating, and dropping, making it the most challenging task among the four. The range of Cognitive Difficulty spans from 1 (simplest) to 9 (most difficult). In the training distribution, the environments exhibit means and standard deviations of Behavioral and Cognitive Difficulties of \(84.2\pm 68.8\) and \(2.7\pm 1.6\), respectively. In this paper, we define out-of-distribution (OOD) environments as those with a Behavioral Difficulty \(>175\) or a Cognitive Difficulty \(\geq\) 4, each being approximately more than one standard deviation away from the mean. The furthest OOD environments, with a Behavioral Difficulty greater than 425 or a Cognitive Difficulty of 9, had less than \(5.7\times 10^{-5}\) and \(1.6\times 10^{-4}\) probability of being sampled during training (calculated with rejection sampling). For testing both in-distribution and out-of-distribution environments, we sample various sets of environments that extend away from the distribution in terms of both Behavioral and Cognitive Difficulty, and then evaluate agents on these sets. For Cognitive Difficulty, we sample sets of environments across the full range of Cognitive Difficulty levels 1-9. For Behavioral Difficulty, we sample sets of environments within intervals of 50 (e.g., 125-175, 175-225, etc.), starting from 25. Environments with a Behavioral Difficulty \(>\) 425 are grouped into one set. First, we test the zero-shot performance of TC and BC agents in OOD environments. The results show that the TC agent substantially outperforms the BC agent as environments become increasingly out of distribution (Fig. 4a), and the results are statistically significant across all testing difficulties (Mann-Whitney U test \(p<0.05\)), thereby supporting our hypothesis that language utilization can enhance agents' generalization capabilities. Moreover, we observe that the Oracle Thoughts + TC Learned Control achieves near-optimal performance even on the most challenging environments. 
This indicates that the current limitation of TC performance lies in high-level thinking. As we scale our approach to leverage internet-sized datasets of human thinking, the high-level thinking capability is expected to improve substantially, thereby enhancing the power of the TC agent.

Figure 3: Training progress comparison of Thought Cloning (TC), Behavioral Cloning (BC), and a TC ablation variant without the Thought Cloning loss. The BC architecture is identical to the Lower-level Component of TC and the TC w/o Imitating Thought has the same architecture as TC, without the TC loss. BC and the ablation variant are trained solely with the action loss (which leads to some minor architectural differences, see Section 3.2). The results indicate that TC learns faster than BC and also outperforms it. Furthermore, the comparison between TC and TC w/o Imitating Thought demonstrates that the superiority of TC is not simply due to having more parameters.

Next, we investigate how well the agents adapt to new situations by fine-tuning them on OOD environments. We fine-tune the TC and BC agents on the corresponding environments for 15 epochs, with the same settings described in Section 3.2. The results demonstrate that the TC agent is better at adapting to OOD environments (Fig. 4(b)). The superiority of TC over BC is statistically significant across all testing difficulties, as supported by the Mann-Whitney U test \(p<0.05\), with the exception of Cognitive Difficulty 4, where both methods already achieve near-perfect performance. The results support our argument that language can better assist agents in adapting to novel situations.

### AI Safety and Interpretability The ability to observe the agent's thought process gives our model a high degree of interpretability. To empirically assess the interpretability of TC, we introduce a metric named the Future Action Declaration Score. This metric quantifies the fraction of times when an agent, preparing to execute an action other than navigation, declares this impending action in its thoughts beforehand. In the training distribution, TC agents performed exceptionally well (green square in Fig. 5(a)). Interestingly, TC agents also scored near-perfectly across all out-of-distribution environments (rest of Fig. 5(a)), demonstrating the robust and consistent interpretability of our model even under novel, out-of-distribution situations, which is an important property for AI safety and interpretability.

Figure 4: The zero-shot and fine-tuning success rate of Thought Cloning (TC) and Behavioral Cloning (BC) agents on environments that are increasingly out of distribution. Behavioral and Cognitive Difficulties are defined by the length of the solutions to environments and the mission complexity of environments respectively (Section 3.4). (**a**): The gray region indicates the training distribution. The Oracle Thought + TC Learned Control refers to the TC agent with oracle high-level thoughts. The results demonstrate TC generalizes much better than BC. They also illustrate that with a more powerful Upper-level Component trained from vast human thought data, the agent should become drastically more capable. (**b**): TC is much better at adapting to novel situations than BC.
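A minimal sketch of how the Future Action Declaration Score could be computed from a rollout of (thought, action) pairs. The exact matching rule between a thought and the action it declares is not spelled out in this excerpt, so a simple, hypothetical keyword lookup stands in for it here; checking a window of preceding thoughts instead of only the current one would be an equally reasonable choice.

```python
# Hypothetical keyword lookup: which phrases in a thought count as declaring each action.
DECLARATION_KEYWORDS = {
    "pickup": ["pick up"],
    "drop": ["drop"],
    "toggle": ["open", "unlock", "close"],
}
NAVIGATION_ACTIONS = {"left", "right", "forward"}

def future_action_declaration_score(rollout):
    """rollout: list of (thought, action) string pairs; returns the fraction of
    non-navigation actions that were declared in the accompanying thought."""
    declared, total = 0, 0
    for thought, action in rollout:
        if action in NAVIGATION_ACTIONS:     # navigation actions are excluded from the metric
            continue
        total += 1
        if any(k in thought.lower() for k in DECLARATION_KEYWORDS.get(action, [])):
            declared += 1
    return declared / total if total else 1.0
```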
The same flexibility allows different definitions of what is allowable and unsafe behavior in different settings (e.g. in the presence of adults vs. children or customized to the preferences of different countries with different regulations). To demonstrate this flexibility and test to what extent Precrime Intervention works, we conducted three separate tests, where we declared three different behaviors as unsafe (1) touching any red item, (2) picking up any ball, and (3) picking up the object the agent is being asked to pick up in its mission. The last one is particularly interesting because the agent has a strong prior to want to perform that action, which Precrime Intervention has to combat. We report the fraction of episodes where such unsafe behaviors occurred with and without Precrime Intervention (Fig. 4(b)). Remarkably, Precrime Intervention almost entirely eliminates all unsafe behaviors, thereby demonstrating the promising potential of TC agents in advancing AI safety. Moreover, the interpretability of the model also greatly aids in diagnosing problems, thus simplifying the development of more capable and safer AI. This feature actually proved beneficial during the development phase of this paper. Initially in our development, the TC agent showed promising performance in training, but frequently failed during testing, repetitively oscillating between incorrect thoughts (plans) without actively exploring new ideas. This observation helped us to recognize that, because we had trained with teacher forcing throughout with oracle (i.e. perfect) thoughts, the agent had never practiced having incorrect thoughts, and thus had never practiced recovering from them by trying alternate ideas. Thus, at inference time when it has to generate its own thoughts, which are sometimes incorrect, it did not know how to recover. We then instead tested an immediate transition from teacher-forcing to 100% auto-regressive sampling and training (i.e. from 100% teacher-forcing on one training step to 0% on the next), but the agent generated too many nonsensical thoughts, which prevented stable training and harmed performance. Thanks to the model's interpretability, we were able to recognize the situation and try an alternative strategy that worked well, and is the one we report results for in this paper: we gradually decay the teacher-forcing rate (fraction) during training, which dramatically improved performance (Section. 3.3). Supplementary Material C contains more details about this example. Figure 5: (**a**): A heatmap illustrating the Future Action Declaration Score, a metric designed to evaluate the interpretability of Thought Cloning agents (Section 3.5). The \(x\) and \(y\) axes denote various levels of difficulty. Each cell represents a region of sampled environments, with the color intensity reflecting the mean score. Brighter cells indicate a higher degree of match between the agent’s declared thoughts and subsequent actions. The green square denotes the training distribution, while the rest of the regions are out of distribution (Section 3.4). The results illustrate the robust and consistent interpretability of Thought Cloning agents. (**b**): A bar chart demonstrating the effectiveness of the Precrime Intervention mechanism, which is to halt the Thought Cloning agents upon detecting dangerous plans in their thoughts and thus prevent unsafe behaviors. We show three tests (\(x\) axis) where (1) touching red items, (2) picking up balls, and (3) picking up requested items were declared unsafe. 
We report the fraction of episodes where unsafe behaviors occurred (\(y\) axis). The results show that Precrime Intervention effectively eliminates almost all unsafe behaviors. Lastly, TC enables steerability. That is because the actions of TC agents are conditioned on their thoughts, and we can manually inject alternate thoughts to have the agents do what we wish. We can also take advantage of this capability to help agents in challenging tasks. The TC agent, when provided with oracle high-level thoughts, is capable of near-perfect performance across almost all environments (Fig. 4a). Additional empirical evidence of this is the high fraction of tasks successfully performed by following these oracle thoughts. This oracle task success rate starts at a median of 96.0 (95% confidence interval: -8.4,+4.0)% for tasks furthest out of distribution (Behavioral Difficulty \(>\) 450 and Cognitive Difficulty \(=\) 9), and rises to a near-perfect median of 99.2 (95% confidence interval: -5.5,+0.8)% when within the training distribution. These findings highlight the promising potential of TC agents in effectively collaborating with humans to accomplish challenging missions. ## 4 Related Works ### Planning in RL with Language Recent work leverages the reasoning capability, the ability to flexibly combine abstractions, and the interpretability offered by natural language to address high-level planning challenges in real-world domains. We augment this approach by enabling agents to _think in language_, facilitating the capability of agents, AI Safety, and Interpretability. There are two major categories of works in the literature that enable language planning. The first involves Hierarchical RL methods, where the language represents the hierarchy [32; 33; 34; 35; 36]. However, the planning space in these works is constrained to a pre-defined subgoal set, limiting their generalization to novel scenarios and preventing them from utilizing the reasoning and powerful commonsense found in pre-trained LLMs [37; 38; 39]. The second category of work involves pre-trained LLMs that generate plans in language for RL systems. Earlier works [14; 15] allow the LLM to predict step-by-step plans for a specific task. However, these works are open-loop methods, as the LLM cannot perceive the environment while acting, and thus cannot adapt and change once things do not go according to plan, which is a crucial capability in complex environments. Some recent approaches have developed closed-loop methods to provide LLMs with dynamic information for planning [40; 15; 41]. While these works show exciting performance in different environments, their closed-loop feedback mechanisms for the LLMs either rely on an oracle from the environment or on complicated captioning models. The work most relevant to our vision is PaLM-E [27], in which a pre-trained Vision-Language Model is adopted as the planner, allowing it to recognize patterns from observations directly. However, PaLM-E was not trained on synchronized videos of humans thinking out loud and acting, meaning it does not benefit from learning from human thought demonstrations how to do things like plan, replan, create high-level goals and the subgoals required to achieve them, and the many other benefits of thinking intelligently during acting. ### Learning from Dataset Aligning Action and Language Several studies have recognized the value of datasets that align action with language. 
DIAL [42] employs such a dataset to train language-conditioned agents with Behavioral Cloning, achieving impressive results in real-world robotic tasks. However, it is limited by a pre-defined instruction set. Another work, (SL)\({}^{3}\)[43], generates a hierarchical dataset for agents to learn from, demonstrating superiority in a challenging 3D simulation domain, but has the drawback discussed in the previous section of being open-loop. Finally, in the study most similar to our own, Hu et al. [44] collected a dataset from two human players collaborating on an RTS game. However, the agent in [44] is not language conditioned, which limits its potential to learn to do any task (e.g. arbitrary tasks requested of it in natural language). Similarly, a work concurrent to ours constructed a dataset with BabyAI oracle plans [45]. However, their architecture, unlike ours, is not compatible with most pre-trained models, making ours more able to harness new, powerful foundation models to tackle increasingly complex challenges. Additionally, although the two previously mentioned methods [44; 45] employ learning frameworks similar to ours, they do not explore the full potential of learning from datasets that align action with language, particularly in terms of the resulting benefits for generalization, AI Safety, and Interpretability. ## 5 Discussion and Conclusion Our research findings are focused on two main areas. First, our Thought Cloning (TC) agent demonstrated superior performance compared to Behavioral Cloning (BC), effectively showcasing its capabilities in generalization, exploration, planning, replanning, and adaptation to various situations. Second, we presented empirical evidence underscoring the benefits Thought Cloning provides in AI Safety and Interpretability. The robust interpretability of the TC agent not only helps developers in diagnosing AI systems, but also contributes to AI safety, as evidenced by mechanisms such as Precrime Intervention. Our empirical results on steerability further spotlight the potential of TC agents in effectively collaborating with humans to tackle complex tasks. We utilized a synthetic dataset and trained a model from scratch as a proof of concept. However, the full vision for the TC framework, and where we expect it to truly shine, will be when Thought Cloning agents are trained on internet-scale datasets of humans thinking out loud while acting, such as YouTube videos and their narration [17, 16] (whether in closed caption transcripts or directly from audio). Consider the prospect of an agent that has both learned to think and act like humans in a huge variety of settings. Much like the thoughts of human children are guided by teachers, our agents could become skilled at planning, replanning, reasoning, and explaining their thinking to us (either via their outputs or because we have the unique ability to observe the thoughts in their minds). The vision for utilizing internet-scale datasets is also supported by the experimental results presented in Section 3.4, which suggest that the current bottleneck in agent capability is its high-level thinking, a skill that could be substantially enhanced by scaling to vast online data. Of course, there are also risks associated with such agents, ranging from the same ones that occur with language models trained on our written thoughts, such as bias [46, 47, 48] and otherwise emulating undesirable human thoughts, to risks in terms of AI Safety and Existential Risk [18, 19, 21].
In conclusion, this paper has introduced Thought Cloning, where agents do not simply learn to act from human demonstrations, as in Behavioral Cloning, but also _learn to think_ from demonstrations where humans think out loud while acting. Through Thought Cloning, we have illustrated how an agent can become more capable, interpretable, and safe by _imitating human thinking_. This work facilitates the training of increasingly powerful agents and opens up numerous avenues for future scientific investigation in Artificial General Intelligence, AI Safety, and Interpretability. ## Acknowledgments and Disclosure of Funding This work was supported by the Vector Institute, a grant from Schmidt Futures, an NSERC Discovery Grant, and a generous donation from Rafael Cosman. We also thank Aaron Dharna, Ben Norman, and Jenny Zhang (sorted alphabetically) in our lab at the University of British Columbia for insightful discussions.
2304.10991
Dilaton photoproduction in a magnetic dipole field of pulsars and magnetars
According to Einstein-Maxwell-Dilaton theory, the dilaton field $\psi$ can be produced by electromagnetic fields with non-zero Maxwell invariant. An electromagnetic wave propagating in an external electromagnetic field is therefore a typical source of dilaton radiation. To study dilaton photoproduction under astrophysical conditions, it is interesting to consider a plane elliptically polarized electromagnetic wave propagating in the electromagnetic field of the magnetic dipole ${\bf m}$ of pulsars and magnetars. The dilaton field equation is solved in the case $|\psi| \ll 1$. The angular distribution of the dilaton radiation is studied at every point of space. It is shown that the spectral composition of the dilatons is similar to the spectral composition of the plane electromagnetic wave. The amount of dilaton energy radiated per unit time in all directions is greatest under the condition $(B_1^2-B_2^2)(m_x^2-m_y^2)\geq 0,$ where $B_1$ and $B_2$ are the electromagnetic wave amplitudes along the axes of the polarization ellipse. This condition is valid for many neutron star systems.
Mikhail Astashenkov
2023-04-21T14:39:47Z
http://arxiv.org/abs/2304.10991v1
# Dilaton photoproduction in a magnetic dipole field of pulsars and magnetars ###### Abstract According to Einstein-Maxwell-Dilaton theory, the dilaton field \(\psi\) can be produced by electromagnetic fields with non-zero Maxwell invariant. An electromagnetic wave propagating in an external electromagnetic field is therefore a typical source of dilaton radiation. To study dilaton photoproduction under astrophysical conditions, it is interesting to consider a plane elliptically polarized electromagnetic wave propagating in the electromagnetic field of the magnetic dipole \({\bf m}\) of pulsars and magnetars. The dilaton field equation is solved in the case \(|\psi|\ll 1\). The angular distribution of the dilaton radiation is studied at every point of space. It is shown that the spectral composition of the dilatons is similar to the spectral composition of the plane electromagnetic wave. The amount of dilaton energy radiated per unit time in all directions is greatest under the condition \((B_{1}^{2}-B_{2}^{2})(m_{x}^{2}-m_{y}^{2})\geq 0\), where \(B_{1}\) and \(B_{2}\) are the electromagnetic wave amplitudes along the axes of the polarization ellipse. This condition is valid for many neutron star systems. ## I Introduction Nowadays there are several theories of Goldstone bosons in the scientific literature: arions [1; 2; 3; 4; 5], axions [6; 7; 8] and dilatons [9; 10; 11; 12; 13; 14]. One of the main sources of these particles is electromagnetic fields and waves. The equations for these bosons have a similar form in the classical limit. In particular, the system of interacting Maxwell (U(1) gauge) and scalar (dilaton) fields that arises in string theory constitutes the so-called Einstein-Maxwell-Dilaton theory. According to [15], the action of Einstein-Maxwell-Dilaton theory can be written as \[S=\int d^{4}x\Big{\{}a_{0}(\partial\Psi)^{2}+a_{1}e^{-2{\cal K}\Psi}F^{nm}F_{nm}\Big{\}}, \tag{1}\] where \(\Psi\) is the dilaton field, \(a_{0},a_{1}\) and \({\cal K}\) are gauge constants and \(F_{nm}\) is the Maxwell tensor. In string theory \({\cal K}=1\), while the five-dimensional Kaluza-Klein theory results in the value \({\cal K}=\sqrt{3}\); in this work the constant \({\cal K}\) is arbitrary. The field equations in Minkowski spacetime obtained from the action (1) have the form: \[\Box\Psi=\frac{a_{1}{\cal K}}{a_{0}}e^{-2{\cal K}\Psi}F_{nm}F^{nm}=\frac{2a_{1}{\cal K}}{a_{0}}e^{-2{\cal K}\Psi}\big{[}{\bf B^{2}}-{\bf E^{2}}\big{]}, \tag{2}\] \[\frac{\partial}{\partial x^{n}}\Big{[}e^{-2{\cal K}\Psi}F^{nm}\Big{]}=0\;, \tag{3}\] where \({\bf E}\) is the electric field strength and \({\bf B}\) is the magnetic induction. As the dilaton field has not been discovered, one can assume that the dilaton field is weak in the Solar system and \(|\psi|\ll 1\). In this case equation (2) can be expressed as \[\Box\Psi=\frac{2a_{1}{\cal K}}{a_{0}}\big{[}{\bf B^{2}}-{\bf E^{2}}\big{]}\;. \tag{4}\] According to equations (2) and (4), the invariant \({\bf B^{2}}-{\bf E^{2}}\) is the source of the dilaton field. Since this invariant is equal to zero in the wave zone of every electromagnetic wave, dilaton photoproduction is possible only from the near zone or in a region where there is a superposition of electromagnetic fields with non-zero invariant \({\bf B^{2}}-{\bf E^{2}}\).
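As a small numerical illustration of this point (a sketch that is not part of the original paper), the snippet below evaluates the invariant \({\bf B^{2}}-{\bf E^{2}}\) for the plane wave (5) alone and for its superposition with the static dipole field (6), both written out in the next section; the amplitudes and the observation point are arbitrary placeholder numbers.

```python
import numpy as np

def plane_wave_fields(z, t, B1=1.0, B2=0.4, k=1.0, c=1.0):
    """E and B of the elliptically polarized plane wave of Eq. (5)."""
    phase = k * z - k * c * t
    B = np.array([B1 * np.cos(phase), -B2 * np.sin(phase), 0.0])
    E = np.array([-B2 * np.sin(phase), -B1 * np.cos(phase), 0.0])
    return E, B

def dipole_field(r_vec, m_vec):
    """Static magnetic dipole field of Eq. (6)."""
    r = np.linalg.norm(r_vec)
    return (3.0 * np.dot(m_vec, r_vec) * r_vec - m_vec * r**2) / r**5

r_vec = np.array([4.0, 2.0, 7.0])    # observation point (placeholder)
m_vec = np.array([1.0, 0.3, 0.8])    # dipole moment components (placeholder)
E_w, B_w = plane_wave_fields(z=r_vec[2], t=0.3)
B_tot = B_w + dipole_field(r_vec, m_vec)

print(np.dot(B_w, B_w) - np.dot(E_w, E_w))      # ~0: a plane wave alone does not source the dilaton
print(np.dot(B_tot, B_tot) - np.dot(E_w, E_w))  # non-zero: the superposition does
```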
## II Basic equation and its solution Consider a plane elliptically polarized electromagnetic wave with frequency \(\omega\) propagating along the axis \(z\). The fields \(\mathbf{E}\) and \(\mathbf{B}\) of the electromagnetic wave have the following form: \[\mathbf{B}=\mathbf{e_{x}}B_{1}\cos(kz-\omega t)-\mathbf{e_{y}}B_{2}\sin(kz-\omega t), \tag{5}\] \[\mathbf{E}=-\mathbf{e_{x}}B_{2}\sin(kz-\omega t)-\mathbf{e_{y}}B_{1}\cos(kz-\omega t),\] where \(k=\omega/c\), and \(B_{1}\) and \(B_{2}\) are the electromagnetic wave amplitudes along the principal axes of the polarization ellipse. For example, if \(B_{1}=B_{2}\) then the wave (5) has circular polarization; if \(B_{1}=0\) or \(B_{2}=0\) then the wave (5) is linearly polarized. In all other cases the wave has elliptical polarization. Assume that there is a neutron star with radius \(R_{S}\), located at the origin of the coordinate system, that rotates about the axis of its magnetic dipole moment \(\mathbf{m}\). The magnetic induction \(\mathbf{B}\) of the neutron star for \(r>R_{S}\) has the form: \[\mathbf{B}=\frac{3(\mathbf{m}\cdot\mathbf{r})\mathbf{r}-\mathbf{m}r^{2}}{r^{5}}\;, \tag{6}\] By now a few hundred neutron stars have been discovered [16; 17] whose rotation axis does not coincide with the axis of the magnetic dipole moment. Such stars radiate electromagnetic waves in the magnetic dipole approximation (pulsars and magnetars). But there must also be neutron stars whose dipole moment axis coincides with the rotation axis. In this case there is no electromagnetic radiation and their magnetic field (6) must be static. Substituting the superposition of the electromagnetic fields (5) and (6) into expression (4) and discarding the time-independent terms, one can obtain the dilaton field \(\Psi\) generated by this superposition, given by expression (8).
\tag{10}\] Thus, dilaton field has finite value everywhere out of neutron star. Using the expression (9) one can study angular distribution of the dilaton radiation. ## III Angular distribution of the dilaton radiation By definition [7; 18], the amount of energy \(dI\) emitted by the source per unit time through the solid angle \(d\Omega\) is given by the formula: \[\frac{dI}{d\Omega}=\lim_{r\rightarrow\infty}r(\mathbf{W}\cdot\mathbf{r})\;, \tag{11}\] where \(\mathbf{W}\) is the energy flux density vector associated with the components of the stress-energy tensor \(T^{ik}\) by the relations: \(W^{\alpha}=T^{0\alpha}\). For free dilatonic field the stress-energy tensor \(T^{ik}\) has the form: \[T^{ik}=2a_{0}g^{in}g^{km}\Big{\{}\frac{\partial\Psi}{\partial x^{n}}\frac{ \partial\Psi}{\partial x^{m}}-\frac{1}{2}g^{ik}\frac{\partial\Psi}{\partial x ^{n}}\frac{\partial\Psi}{\partial x^{m}}g^{nm}\Big{\}}. \tag{12}\] It follows that for \(a_{0}>0\) dilaton energy density is positive for every distribution of electromagnetic fields. \[T^{00}=a_{0}\Big{\{}\frac{1}{c^{2}}\left(\frac{\partial\Psi}{\partial t} \right)^{2}+\left(\frac{\partial\Psi}{\partial x}\right)^{2}+\left(\frac{ \partial\Psi}{\partial y}\right)^{2}+\left(\frac{\partial\Psi}{\partial z} \right)^{2}\Big{\}}\geq 0. \tag{13}\] Angular distribution of dilaton radiation produced by elliptically polarized electromagnetic wave (5) propagating in magnetic field of neutron star (6) can be calculated by formula: \[\frac{dI}{d\Omega}=r\big{(}{\bf r\ W}\big{)}=-2a_{0}r(\vec{r}\ \vec{\nabla}\Psi) \frac{\partial\Psi}{\partial t}, \tag{14}\] In the expression above it is taken into account that the dilaton field \(\psi\) has a special point in \(\theta=0\). Substituting (8) to (14) one can obtain angular distribution averaged over the period of electromagnetic wave \(T=2\pi/\omega\): \[\frac{\overline{dI}}{d\Omega}=\frac{4a_{1}^{2}{\cal K}^{2}c}{a_{0}r^{2}}\Big{\{} \ 4k^{2}r^{2}(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)(m_{x}\cos\phi+m_{y}\sin \phi)^{2}+2kr\big{(} \tag{15}\] \[B_{1}B_{2}m_{y}\cos\phi-B_{1}B_{2}m_{x}\sin\phi-2krB_{1}^{2}m_{z}\sin\theta \cos^{2}\phi-2krB_{2}^{2}m_{z}\sin\theta\] \[\times\sin^{2}\phi\ \big{)}(m_{x}\cos\phi+m_{y}\sin\phi)+krB_{1}B_{2}m_{z}\sin \theta(m_{x}\sin\phi-m_{y}\cos\phi)\] \[+\ (1-\cos\theta)\times\Big{[}2k^{2}r^{2}m_{z}^{2}(B_{1}^{2}\cos^{2}\phi+B_{2 }^{2}\sin^{2}\phi)-4k^{2}r^{2}(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)\] \[\times(m_{x}\cos\phi+m_{y}\sin\phi)^{2}+kr\big{(}2krB_{1}^{2}m_{z}\sin\theta \cos^{2}\phi+2krB_{2}^{2}m_{z}\sin\theta\sin^{2}\phi\] \[+B_{1}B_{2}m_{x}\sin\phi-B_{1}B_{2}m_{y}\cos\phi\big{)}(m_{x}\cos\phi+m_{y} \sin\phi)\Big{]}\] \[+(1-\cos\theta)^{2}\Big{[}k^{2}r^{2}(m_{x}\cos\phi+m_{y}\sin\phi)^{2}-k^{2}r^ {2}m_{z}^{2}+\big{(}\ 3(m_{x}\cos\phi+m_{y}\sin\phi)^{2}\] \[-3m_{z}^{2}-2m_{z}\sin\theta(m_{x}\cos\phi+m_{y}\sin\phi)\ \big{)}[1-\cos kr(1-\cos \theta)]+\big{(}\ 4krmz_{z}^{2}+2kr\] \[\times m_{z}\sin\theta(m_{x}\cos\phi+m_{y}\sin\phi)-5kr(m_{x}\cos\phi+m_{y} \sin\phi)^{2}\ \big{)}\sin kr(1-\cos\theta)\Big{]}\] \[\times(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)\ +\ (1-\cos\theta)^{3} \times\Big{[}\big{(}m_{z}^{2}-(m_{x}\cos\phi+m_{y}\sin\phi)^{2}\big{)}\] \[\times[1-\cos kr(1-\cos\theta)]+kr(\ (m_{x}\cos\phi+m_{y}\sin\phi)^{2}-m_{z}^{2}\ )\sin kr(1-\cos \theta)\Big{]}\] \[\times\Big{(}B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi\Big{)}+\ \frac{[1-\cos kr(1-\cos \theta)]}{(1-\cos\theta)^{2}}\times\Big{[}8(B_{1}^{2}\cos^{2}\phi+B_{2}^{2} \sin^{2}\phi)\] 
\[\times(m_{x}\cos\phi+m_{y}\sin\phi)^{2}+2(B_{1}^{2}m_{x}^{2}+B_{2}^{2}m_{y}^{2} )-8(B_{1}^{2}m_{x}\cos\phi+B_{2}^{2}m_{y}\sin\phi)\] \[\times(m_{x}\cos\phi+m_{y}\sin\phi)\Big{]}+\ \frac{[1-\cos kr(1-\cos\theta)]}{(1-\cos \theta)}\times\Big{[}2\big{(}B_{1}^{2}m_{x}\cos\phi+B_{2}^{2}m_{y}\sin\phi\] \[-3B_{1}^{2}m_{z}\sin\theta\cos^{2}\phi-3B_{2}^{2}m_{z}\sin\theta\sin^{2}\phi+ 2krB_{1}B_{2}m_{y}\cos\phi-2krB_{1}B_{2}m_{x}\sin\phi\big{)}\] \[\times(m_{x}\cos\phi+m_{y}\sin\phi)+3B_{1}^{2}m_{x}m_{z}\sin\theta\cos\phi+3B_{ 2}^{2}m_{y}m_{z}\sin\phi\sin\theta\] \[-2krB_{1}B_{2}m_{y}m_{z}\sin\theta\cos\phi+2krB_{1}B_{2}m_{x}m_{z}\sin\phi \sin\theta-B_{1}^{2}m_{x}^{2}-B_{2}^{2}m_{y}^{2}\Big{]}\] \[+\frac{\sin(kr(1-\cos\theta))}{(1-\cos\theta)}\times\Big{[}m_{z}\sin\theta\big{(} B_{1}B_{2}m_{y}\cos\phi-B_{1}B_{2}m_{x}\sin\phi-2krB_{1}^{2}m_{x}\cos\phi\] \[-2krB_{2}^{2}m_{y}\sin\phi\bigr{)}-8kr(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2} \phi)(m_{x}\cos\phi+m_{y}\sin\phi)^{2}\] \[+2\bigl{(}2krB_{1}^{2}m_{x}\cos\phi+2krB_{2}^{2}m_{y}\sin\phi-B_{1}B_{2}m_{y} \cos\phi+B_{1}B_{2}m_{x}\sin\phi\] \[+2krB_{1}^{2}m_{z}\sin\theta\cos^{2}\phi+2krB_{2}^{2}m_{z}\sin\theta\sin^{2} \phi\bigr{)}(m_{x}\cos\phi+m_{y}\sin\phi)\Bigr{]}\] \[+\bigl{[}1-\cos kr(1-\cos\theta)\bigr{]}\times\Bigl{[}\bigl{(}5B_{1}^{2}m_{x} \cos\phi+5B_{2}^{2}m_{y}\sin\phi-4krB_{1}B_{2}m_{y}\cos\phi\] \[+4krB_{1}B_{2}m_{x}\sin\phi+3B_{1}^{2}m_{z}\sin\theta\cos^{2}\phi+3B_{2}^{2}m_ {z}\sin\theta\sin^{2}\phi\bigr{)}(m_{x}\cos\phi\] \[+m_{y}\sin\phi)-10(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)(m_{x}\cos \phi+m_{y}\sin\phi)^{2}+m_{z}\sin\theta\] \[\times\bigl{(}krB_{1}B_{2}m_{y}\cos\phi-krB_{1}B_{2}m_{x}\sin\phi-2B_{1}^{2}m _{x}\cos\phi-2B_{2}^{2}m_{y}\sin\phi\bigr{)}\Bigr{]}\] \[+\,\sin kr(1-\cos\theta)\times\Bigl{[}4kr(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^ {2}\phi)(m_{x}\cos\phi+m_{y}\sin\phi)^{2}\] \[+\bigl{(}4krB_{1}^{2}m_{z}\sin\theta\cos^{2}\phi+4krB_{2}^{2}m_{z}\sin\theta \sin^{2}\phi-4krB_{1}^{2}m_{x}\cos\phi\] \[-4krB_{2}^{2}m_{y}\sin\phi+B_{1}B_{2}m_{y}\cos\phi-B_{1}B_{2}m_{x}\sin\phi \bigr{)}(m_{x}\cos\phi\] \[+m_{y}\sin\phi)+krm_{z}\sin\theta(B_{1}^{2}m_{x}\cos\phi+B_{2}^{2}m_{y}\sin \phi)\Bigr{]}\] \[+\,(1-\cos\theta)[1-\cos kr(1-\cos\theta)]\times\Bigl{[}2m_{z}^{2}(B_{1}^{2} \cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)\] \[+2(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)(m_{x}\cos\phi+m_{y}\sin\phi)^ {2}+\bigl{(}4B_{1}^{2}m_{z}\sin\theta\cos^{2}\phi\] \[+4B_{2}^{2}m_{z}\sin\theta\sin^{2}\phi-2B_{1}^{2}m_{x}\cos\phi-2B_{2}^{2}m_{y} \sin\phi+krB_{1}B_{2}m_{y}\cos\phi\] \[-krB_{1}B_{2}m_{x}\sin\phi)(m_{x}\cos\phi+m_{y}\sin\phi)\Bigr{]}+\,(1-\cos \theta)\sin kr(1-\cos\theta)\] \[\times\Bigl{[}6kr(B_{1}^{2}\cos^{2}\phi+B_{2}^{2}\sin^{2}\phi)(m_{x}\cos\phi+ m_{y}\sin\phi)^{2}-4krm_{z}^{2}(B_{1}^{2}\cos^{2}\phi\] \[+B_{2}^{2}\sin^{2}\phi)+kr(B_{1}^{2}m_{x}\cos\phi+B_{2}^{2}m_{y}\sin\phi-7B_{1 }^{2}m_{z}\sin\theta\cos^{2}\phi\] \[-7B_{2}^{2}m_{z}\sin\theta\sin^{2}\phi)(m_{x}\cos\phi+m_{y}\sin\phi)\Bigr{]} \Bigr{\}}.\] For dilaton radiation forward (\(\theta=0\)) one can obtain: \[\overline{\frac{dI}{d\Omega}}(\theta=0)=\frac{4a_{1}^{2}\mathcal{K}^{2}\omega ^{2}}{a_{0}c}\Bigl{(}B_{1}^{2}m_{x}^{2}+B_{2}^{2}m_{y}^{2}\Bigr{)}. \tag{16}\] There is no dilaton radiation backward \(dI/d\Omega(\theta=\pi)=0\). 
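As a quick numerical illustration of these closed-form results (a sketch only: the gauge constants \(a_{0}\), \(a_{1}\), \(\mathcal{K}\) and the wave and dipole parameters below are placeholder numbers, since the paper does not fix them), one can evaluate the forward intensity (16) and test the orientation condition \((B_{1}^{2}-B_{2}^{2})(m_{x}^{2}-m_{y}^{2})\geq 0\) quoted in the abstract:

```python
import numpy as np

# Placeholder values; the paper leaves a0, a1, K and the astrophysical parameters unspecified.
a0, a1, K, c = 1.0, 1.0, 1.0, 1.0
omega = 2.0 * np.pi
B1, B2 = 1.0, 0.4             # wave amplitudes along the polarization axes
mx, my, mz = 0.8, 0.2, 0.5    # dipole moment components

def forward_intensity():
    """Time-averaged forward dilaton intensity, Eq. (16)."""
    return 4.0 * a1**2 * K**2 * omega**2 / (a0 * c) * (B1**2 * mx**2 + B2**2 * my**2)

def favourable_orientation():
    """Condition (B1^2 - B2^2)(m_x^2 - m_y^2) >= 0 under which the total radiated energy is largest."""
    return (B1**2 - B2**2) * (mx**2 - my**2) >= 0.0

print(forward_intensity(), favourable_orientation())
```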
Integrating over the angles \(\theta\), \(\phi\), one can obtain the amount of dilaton energy \(\overline{I}\), averaged over the period of the electromagnetic wave and taken in the main asymptotic approximation (\(r\rightarrow\infty\)), radiated per unit time in all directions: \[\overline{I}=\frac{8\pi a_{1}ck^{2}\mathcal{K}}{3a_{0}}\Bigl{\{}3(B_{1}^{2}m_{x}^{2}+B_{2}^{2}m_{y}^{2})+2(B_{1}^{2}+B_{2}^{2})m_{z}^{2}+B_{1}^{2}m_{y}^{2}+B_{2}^{2}m_{x}^{2}\Bigr{\}}\;. \tag{17}\] It follows from expression (17) that the amount of dilaton energy \(\overline{I}\) per unit time is greatest under the condition \((B_{1}^{2}-B_{2}^{2})(m_{x}^{2}-m_{y}^{2})>0\). ## IV Conclusion As was shown, a plane elliptically polarized electromagnetic wave propagating in the electromagnetic field of a magnetic dipole produces a dilaton wave whose amplitude has a special point at \(\theta=0\). But the dilaton field \(\psi\) has a finite value at this point. That is why the dilaton field has a finite value everywhere in the region \(r>R_{S}\). The dilaton radiation in the forward direction (\(\theta=0\)) has the form: \[\overline{\frac{dI}{d\Omega}}(\theta=0)=\frac{4a_{1}^{2}\mathcal{K}^{2}\omega^{2}}{a_{0}c}\Big{(}B_{1}^{2}m_{x}^{2}+B_{2}^{2}m_{y}^{2}\Big{)}. \tag{18}\] There is no dilaton radiation backward: \(dI/d\Omega(\theta=\pi)=0\). Simple analysis shows that the amount of dilaton energy \(\overline{I}\) per unit time, averaged over the period of the electromagnetic wave, is greatest under the condition \((B_{1}^{2}-B_{2}^{2})(m_{x}^{2}-m_{y}^{2})>0\). This condition is valid for many superpositions of electromagnetic waves and magnetic dipole moments of neutron stars. ## V Acknowledgements The research was carried out within the framework of the scientific program of the National Center for Physics and Mathematics, the project "Particle Physics and Cosmology".
2305.16011
Coördinate transformations, metrics and black hole features in the collapsed phase of EDT
This is a companion article to `Using massless fields for observing black hole features in the collapsed phase of Euclidean dynamical triangulations' [1]. It clarifies a singular co\"{o}rdinate transformation of an $SO(4)$ invariant metric to the usual spherical co\"{o}rdinates in which, at an instant of time called zero, the metric takes the form of a black hole with an interior. Regular transformations are also studied and found to lead in the zero time limit to the same spatial components of the metric as with the singular one, whereas the time component ends up differently. Components of the Einstein tensor also end up the same. A regular black hole metric is inversely transformed and compared with simulation results in [1].
Jan Smit
2023-05-25T12:50:55Z
http://arxiv.org/abs/2305.16011v1
# Coordinate transformations, metrics and black hole features in the collapsed phase of EDT ###### Abstract This is a companion article to _Using massless fields for observing black hole features in the collapsed phase of Euclidean dynamical triangulations_[1]. It clarifies a singular coordinate transformation of an \(SO(4)\) invariant metric to the usual spherical coordinates in which, at an instant of time called zero, the metric takes the form of a black hole with an interior. Regular transformations are also studied and found to lead in the zero time limit to the same spatial components of the metric as with the singular one, whereas the time component ends up differently. Components of the Einstein tensor also end up the same. A regular black hole metric is inversely transformed and compared with simulation results in [1]. ## I Introduction In Euclidean quantum field theory, configurations contributing to a lattice-regulated path integral are typically wildly varying on the lattice scale, whereas average propagators vary typically slowly. Likewise, in the Euclidean dynamical triangulation (EDT) approach to quantum gravity the simplicial configurations are wildly varying, but average scalar field propagators still vary slowly as a function of the geodesic lattice distance [2; 3]. In [1] we proposed using'measured' (numerically computed quantum-averaged) massless propagators for defining an average metric. An intuitive idea supporting this is the fact that our experimental understanding of distance is essentially based on QED with its massless photon field. In the collapsed phase of EDT, measurement of massless scalar field propagators led to the determination -- apart from an integration constant -- of the scale factor \(a(\eta)\) in an \(SO(4)\) rotation invariant metric with line element \[ds^{2}=d\eta^{2}+a(\eta)^{2}\,d\Omega_{3}^{2}\,,\quad 0<\eta<\eta_{\rm max}\,. \tag{1}\] Here \(\eta\) is a radial coordinate in four dimensions and \(d\Omega_{3}\) the line element on the 3-sphere \(S^{3}\) with unit radius; \(a(\eta)\) is defined to be positive. The scale factor was obtained in terms of a rational fit function \(f_{\rm rat}(\eta)\): \[a(\eta)=c_{G}\,f_{\rm rat}(\eta)=c_{G}\,\frac{p_{0}+p_{1}\,\eta^{2}}{1+q_{1}\, \eta^{2}}\;, \tag{2}\] where \(c_{G}>0\) is the (dimensionless) integration constant. Figure 1 shows \(f_{\rm rat}(\eta)\) fitted to an example of measured data. The fit ignored data in \(0<\eta<5\) suspected to be too much influenced by lattice artefacts, and also data in \(\eta>24\) suspected of being susceptible to uncontrolled statistics, or finite size effects or 'polymer hair'. The scale factor is taken to represent a smooth continuum geometry with a three-ball boundary at the origin \(\eta=0\), of finite radius \(a(0)\). The constant \(c_{G}\) gets determined when fitting to a continuum formula. The slope \(a^{\prime}(\eta)=da(\eta)/d\eta\) vanishes at the origin, \(a^{\prime}(0)=0\), which led to black-hole features shown in [1]. It may well be that more refined simulations on larger lattices suggest vanishing scale factors with non-vanishing slope at the origin, but which still have a (possibly nearly) vanishing slope somewhere away from the origin, which would then lead again to features of (possibly remnants of) black holes. This will be illustrated near the end of this article by the Hayward model [4]. 
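As an illustration of this fitting step (a sketch only: the data below are synthetic stand-ins, since the measured propagator data of [1] are not reproduced here), the rational form (2) can be fitted over the window \(5<\eta<24\) mentioned above with a standard least-squares routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def f_rat(eta, p0, p1, q1):
    """Rational fit function of Eq. (2); the scale factor is a(eta) = c_G * f_rat(eta)."""
    return (p0 + p1 * eta**2) / (1.0 + q1 * eta**2)

# Synthetic placeholder data mimicking a slowly rising scale factor (not the data of Fig. 1).
rng = np.random.default_rng(0)
eta = np.linspace(5.0, 24.0, 40)          # fit window used in the text
data = f_rat(eta, 2.2, 0.9, 0.05) + 0.02 * rng.normal(size=eta.size)

popt, _ = curve_fit(f_rat, eta, data, p0=[1.0, 1.0, 0.1])
print("fitted p0, p1, q1:", popt)
```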
After fitting numerical data for the metric in (1) a transformation to _spherical coordinates_ was presented in which the metrics are invariant under \(SO(3)\) spatial rotations, \[ds^{2}=g_{tt}(r,t)dt^{2}+2g_{rt}(r,t)dr\,dt+g_{rr}(r,t)dr^{2}+r^{2}\,d\Omega_ {2}^{2}. \tag{3}\] Here \(r\) is a radial coordinate in three dimensions, \(t\) is a Euclidean time variable and \(d\Omega_{2}\) is the line element on \(S^{2}\) with unit radius. (In [1] imaginary time was denoted by \(\tau\); here \(t\) is used to avoid reading-confusion with \(r\).) The transformation was constructed for taking the limit \(t\to 0\) where \(g_{\mu\nu}\) became diagonal with \(g_{tt}=1/g_{rr}\). Examples for several \(c_{G}\) were given in [1]. Figure 2 shows \(g_{tt}(r,0)\) for two cases of \(c_{G}\). One recognizes features of a Euclidean black hole [5; 6] with horizon radius \(h=a(0)=c_{G}\,p_{0}\) where \(g_{tt}(r,0)=0\), and with an interior metric similar to'regular black holes' ([4; 7; 8; 9] and references therein). The Einstein equations were assumed to hold in the effective sense and in the presence of a condensate of geometrical degrees of freedom. The energy-momentum tensor of the condensate is determined by the Einstein tensor, \[G^{\mu}_{\ \nu}=8\pi G_{\rm N}\,T^{\mu}_{\ \nu}\,,\quad G^{\mu}_{\ \nu}=R^{\mu}_{\ \nu}-\frac{1}{2}\,R\,\delta^{\mu}_{\ \nu}\,. \tag{4}\] where \(G_{\rm N}\) is to be a Newton constant at the scale of \(h\). In the limit \(t\to 0\), \(G^{\mu}_{\ \nu}\) became also diagonal and examples were given for the various \(c_{G}\). In the present article we present calculations that led the construction of the coordinate transformation and the computation of the Einstein tensor in [1]. In section II we introduce differential equations for a function \(y(r,t)\) that serve to determine the coordinate transformation and its effect on the form of the new metric. In section II.1, examples of \(y(r,t)\) that produce diagonal metrics are calculated by numerically solving a differential equation for \(y(r,t)\). When \(t\to 0\) the inverse of the spatial component of the metric, \(1/g_{rr}(r,t)\), is found to approach the curves shown in figure 2. The time component \(g_{tt}(r,t)\) behaves rather differently and its magnitude depends furthermore on time transformations \(t\to\bar{t}(t)\) which are induced naturally by differences in the boundary condition for the differential equation. Its value at \(t=0\) is ambiguous in this sense. The Einstein tensor is also calculated using results in [10] and it is found to become diagonal at time zero with unambiguous components \(G^{\mu}_{\ \mu}(r,0)\) (no summation) matching those in [1]. In section II.2 the work is done analytically. The diagonal-metric condition is given up but recovered in the limit \(t\to 0\) together with \(g_{tt}=1/g_{rr}\). It is shown how this leads to the function \(y(r,t)\) and the ensuing metric in [1]; the Einstein tensor is recalled. An inverse transformation is applied to the Hayward model in section III, where properties of the result are compared with the EDT data in figure 1. Our conclusions are in section IV. Appendix A contains details of the Einstein tensor with a diagonal metric, appendix B investigates an important cancellation in these formulas which prohibits a conjectured shell distribution at \(r=h\) in [1]. The diagonality of \(G^{\mu}_{\ \nu}\) as a distribution is also investigated. 
## II Transformation to spherical coordinates For convenience metrics of the form (1) and (3) are shown again more explicitly: \[ds^{2} = d\eta^{2}+a(\eta)^{2}\,d\psi^{2}+a(\eta)^{2}\,\sin(\psi)^{2}\,d \Omega_{2}^{2}\,, \tag{5}\] \[= g_{tt}(r,t)\,dt^{2}+2g_{rt}(r,t)\,dr\,dt+g_{rr}(r,t)\,dr^{2}\] (6) \[+\,r^{2}\,d\Omega_{2}^{2}\,,\] \[d\Omega_{2}^{2} = d\theta^{2}+\sin(\theta)^{2}\,d\phi^{2}\,. \tag{7}\] The transformation is specified by a function \(y(r,t)\) in \[r=a(\eta)\,\sin(\psi)\,,\quad y(r,t)=a(\eta)\,\cos(\psi)\,, \tag{8}\] to be determined. In principle \[0 \leq \eta<\infty\,,\quad 0\leq\psi\leq\pi\,, \tag{9}\] \[-\infty < t<\infty\,,\quad 0\leq r<\infty\,, \tag{10}\] in practice \(0\leq\eta\leq\eta_{\rm max}\) with corresponding limits on \(r\) and \(t\) depending on \(y(r,t)\). Since \(a>0\), \[y > 0\,,\quad 0<\psi<\pi/2\,, \tag{11}\] \[y < 0\,,\quad\pi/2<\psi<\pi\,. \tag{12}\] Note \[r^{2}+y^{2}=a^{2}\geq h^{2}\,,\quad h\equiv a(0)\,, \tag{13}\] which will be used in the following; the case with \(a=h\) will be called _the boundary equation_. In the region where \(a(\eta)\) is a monotonously increasing function of \(\eta\) it has a unique inverse \(\eta(a)\), and a positive function \(F(a)\) can be defined by \[(da/d\eta)^{2}=F(a)=F\left(\sqrt{r^{2}+y^{2}}\right). \tag{14}\] The new metric turns out as \[g_{tt} = \frac{\dot{y}^{2}}{r^{2}+y^{2}}\left(\frac{y^{2}}{F}+r^{2}\right)\,, \tag{15}\] \[g_{rt} = \frac{\dot{y}}{r^{2}+y^{2}}\left(\frac{y(yy^{\prime}+r)}{F}-r(y- ry^{\prime})\right)\,,\] (16) \[g_{rr} = \frac{1}{r^{2}+y^{2}}\left(\frac{(yy^{\prime}+r)^{2}}{F}+(y-ry^{ \prime})^{2}\right) \tag{17}\] (\(\dot{y}=\partial_{t}y\), \(y^{\prime}=\partial_{r}y\)). We would like \(y\) to be such, that the off-diagonal component of the metric vanishes, \[g_{rt}=0\,. \tag{18}\] Equation (18) can be solved for \(F\) to eliminate it from expression (17) for \(g_{rr}\) and obtain \[g_{rr}=1-r\,y^{\prime}/y\,, \tag{19}\] which is useful once \(y\) that satisfies (18) is known. Assuming that generically \(\dot{y}\neq 0\), solving (18) for \(y^{\prime}\) gives \[y^{\prime}=ry\frac{F-1}{y^{2}+r^{2}F}\equiv P\,,\quad P=P(r,y)\,. \tag{20}\] We also would like \(y\) to satisfy the property \[g_{tt}=1/g_{rr}\,, \tag{21}\] as for the Schwarzschild Euclidean (Anti) de Sitter (SE(A)dS) metrics [6]. Requiring (21) in addition to (18) leads to an equation for \(\dot{y}^{2}\), or \[\dot{y}=\pm\sqrt{F}\,. \tag{22}\] The case with a minus-sign is the time-reversed version of the case with a plus-sign. When using this equation in the following we choose the plus-sign to avoid double covering. The differential \(dy=P\,dr+\sqrt{F}\,dt\) is in general _imperfect_, i.e. loosely \(\partial_{t}P\neq\partial_{r}\sqrt{F}\), meaning \[\partial_{y}P(r,y)\,\sqrt{F(\sqrt{r^{2}+y^{2}})} \neq \partial_{r}\sqrt{F(\sqrt{r^{2}+y^{2}})}\] \[+\partial_{y}\sqrt{F(\sqrt{r^{2}+y^{2}})}\,P(r,y)\;.\] Requiring the differential to be perfect leads to a differential equation for \(F\) which is easy to solve, \[dF/da=2(F-1)/a\;\Rightarrow\;F=1+c_{F}\,a^{2}=1+c_{F}(r^{2}+y^{2})\,, \tag{24}\] where \(c_{F}\) is an integration constant. Writing \(c_{F}=\pm 1/r_{0}^{2}\) with \(r_{0}>0\), the solution of (22\({}^{+}\)) with initial condition \(y(r,0)=0\) is, \[y(r,t) = \sqrt{r_{0}^{2}+r^{2}}\,\sinh\frac{t}{r_{0}}\,,\qquad\text{(EAdS)} \tag{25}\] \[y(r,t) = \sqrt{r_{0}^{2}-r^{2}}\,\sin\frac{t}{r_{0}}\,,\qquad\quad\text{ (EdS)} \tag{26}\] respectively for \(+\) and \(-\). In the latter case \(r\) is to be limited to \(0<r<r_{0}\). 
Upon substitution in (15) - (17) the time-dependence drops out and the diagonal EAdS and EdS metrics emerge, \[g_{tt}(r)=1\pm r^{2}/r_{0}^{2}\,. \tag{27}\] Furthermore, treating (14) as a differential equation for the scale factor \(a(\eta)\) leads for the \(+\) case to the solution \(a(\eta)=r_{0}\sinh[(\eta-s_{0})/r_{0}]\) with integration constant \(s_{0}\) (or its \(\eta\)-reversed version), corresponding to hyperbolic space. In the \(-\) case it leads to the spherical scale factor \(a(\eta)=r_{0}\sin[(\eta-s_{0})/r_{0}]\). Integrating the imperfect differential \(dy\) along a path in the \((r,\,t)\) plane gives a path-dependent result. To avoid this imperfection we shall in section II.1 release the condition \(g_{tt}=1/g_{rr}\). ### Regular transformation to diagonal metric In this section the condition of diagonality, \(g_{rt}=0\) is kept and the differential equation (20) will be integrated at fixed times. Equation (22) will be used only at one reference \(r\) to obtain boundary conditions depending on time for the integration of (20) along \(r\). Then there is no imperfectness of the coordinate transformation. We concentrate on the fit function (2). It is convenient to rewrite its \(a(\eta)\) in the form \[a(\eta)=h\,\frac{1+p\,\eta^{2}}{1+q\,\eta^{2}}\,,\quad h=c_{G}\,p_{0}\,,\;p= \frac{p_{1}}{p_{0}}\,,\;q=q_{1}\,. \tag{28}\] The function \(F(a)\) introduced in (14) can be determined via the inverse function \(\eta(a)\) of \(a(\eta)\); one finds \[\eta^{2} = \frac{a-h}{ph-qa},\quad a^{\prime}(\eta)^{2}=\frac{4h^{2}(p-q)^{2 }\eta^{2}}{(1+q\,\eta^{2})^{4}} \tag{29}\] and \[F(a)=\frac{4q^{3}(a-h)(hp/q-a)^{3}}{h^{2}(p-q)^{2}}\,. \tag{30}\] The function \(F(a)\) is positive between its zeros at \(a=h\) and \(a=h\,p/q\) with a maximum at \(a_{\rm max}\) somewhere in between, and only its monotonous branch in \(h<a<a_{\rm max}\) is to be used. In the numerical simulation [1]\(a_{\rm max}\) was only slightly larger than \(2h\). In this section results will be shown for the generic case \(c_{G}=1.5\) (\(h=3.3\)) (cf. figure 2). Since an analytical treatment is already awkward in this case and might be prohibitive with more general fit functions, the following is a numerical exploration. We choose a reference distance \(r_{\rm p}\) and erect a 'time pole' \(y_{\rm p}(t;r_{\rm p})\) by solving \(\dot{y}=\sqrt{F}\) numerically at \(r=r_{\rm p}\): \[\dot{y}_{\rm p}(t;r_{\rm p})=\sqrt{F\left(\sqrt{r_{\rm p}^{2}+y_{\rm p}(t;r_{ \rm p})^{2}}\right)}\,, \tag{31}\] with initial conditions \(y=y_{0}\) at \(t=t_{0}\) chosen as follows. Guided by (25) we assume that \(y>0\) (\(y<0\)) when \(t>0\) (\(t<0\)). Compatible with this, the initial condition can be taken a minimal \(|y|\) at \(t=0\): for \(r_{\rm p}>h\), \(y_{0}=0\), for \(r_{\rm p}<h\) the minimal \(|y|\) has to comply with the boundary equation in (13): \[y_{0} = 0\,,\qquad\qquad h\leq r_{\rm p}\leq r_{\rm max}\,,\quad t_{0}=0\,, \tag{32}\] \[= \pm\sqrt{h^{2}-r_{\rm p}^{2}}\,,\quad 0\leq r_{\rm p}\leq h\,,\quad t_{0} \to 0^{\pm}\,.\] The resulting \(y_{\rm p}\) is a monotonically rising function of \(t\). Next, equation (20) implementing \(g_{rt}=0\) is solved (numerically) with a boundary condition attaching \(y\) to the pole: \[y^{\prime}(r,t;r_{\rm p})=P(r,y(r,t;r_{\rm p}))\,,\quad y(r_{\rm p},t;r_{\rm p}) =y_{\rm p}(t;r_{\rm p})\,. \tag{33}\] The solutions \(y(r,t;r_{\rm p})\) map the \((r,t)\) plane to the \((r,y)\) plane and the give a foliation of the latter. The foliation lines do not cross. 
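The construction just described can be reproduced numerically in a few lines; the sketch below uses the rat-model expressions (28)-(30), a pole at \(r_{\rm p}=2h\) with \(h=3.3\) as in the text, and placeholder values for \(p\) and \(q\) (the fitted values of [1] are not quoted here), so the numbers are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

h, p, q = 3.3, 0.26, 0.05    # h as in the text; p, q are placeholder fit parameters

def F(a):
    """F(a) of Eq. (30) for the rational scale factor (28)."""
    return 4.0 * q**3 * (a - h) * (h * p / q - a)**3 / (h**2 * (p - q)**2)

def P(r, y):
    """Right-hand side of the diagonality condition y' = P, Eq. (20)."""
    Fa = F(np.sqrt(r**2 + y**2))
    return r * y * (Fa - 1.0) / (y**2 + r**2 * Fa)

# 1. Erect a 'time pole' at r_p = 2h by integrating dy/dt = sqrt(F), Eq. (31), with y(0) = 0.
r_p = 2.0 * h
pole = solve_ivp(lambda t, y: [np.sqrt(F(np.sqrt(r_p**2 + y[0]**2)))],
                 (0.0, 2.0), [0.0], dense_output=True, max_step=0.01)

# 2. For a fixed time, integrate the foliation equation y' = P, Eq. (33), inward from the pole.
t_star = 0.5
fol = solve_ivp(lambda r, y: [P(r, y[0])], (r_p, 1.05 * h), [pole.sol(t_star)[0]],
                dense_output=True, max_step=0.02)

# 3. Spatial metric component along this foliation from Eq. (19): g_rr = 1 - r y'/y.
r_vals = np.linspace(1.05 * h, r_p, 50)
y_vals = fol.sol(r_vals)[0]
print(1.0 - r_vals * P(r_vals, y_vals) / y_vals)
```

For smaller and smaller times the resulting \(g_{rr}\) should approach \(1/F(r)\) in the exterior region, which is the envelope behaviour discussed in the following.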
Consider erecting a second time pole \(\bar{y}_{\rm p}(\bar{t};\bar{r}_{\rm p})\) at a different position \(\bar{r}_{\rm p}\neq r_{\rm p}\). This pole will be crossed by a foliation line \(y(r,t;r_{\rm p})\) of the first pole, \(y(\bar{r}_{\rm p},t;r_{\rm p})=\bar{y}_{\rm p}(\bar{t};\bar{r}_{\rm p})\), which determines \(\bar{t}\) in terms of \(t\). This can be interpreted as a coordinate transformation of the time variable only: \(\bar{t}=\bar{t}(t)\). The second pole leads to a second foliation \(y(r,\bar{t};\bar{r}_{\rm p})\). However, substituting \(\bar{t}=\bar{t}(t)\) will give just the original foliation. In other words: results of different choices of the pole position are related by coordinate time transformations that depend only on time. We shall see evidence of this in tensor components that transform as a scalar field under such transformations, in particular the spatial components of the metric (which is diagonal in this section) and the diagonal up-down components of the Einstein tensor, \(G^{\mu}_{\ \mu}(r,t)\) (no summation). Results will be shown for two choices of \(r_{\rm p}\) & initial conditions for \(y_{\rm p}(t;r_{\rm p})\): \[r_{\rm p} = 2h\ \&\ t_{0}=0\,,\ y_{\rm p}(0;2h)=0\,, \tag{34}\] \[r_{\rm p} = 0\ \&\ y_{\rm p}(t_{0};0)=\pm h\,(1+c_{t}t_{0}^{2})\,\ c_{t}=p-q\,, \tag{35}\] with very small but nonzero \(c_{t}t_{0}^{2}>0\) to get the numerical integration started. (With the indicated choice of \(c_{t}\), the solution of (31) at small times is \(\pm h(1+c_{t}\,t^{2}+{\cal O}(t^{4}))\), cf. section II.2.) The case of \(-h\) in (35) corresponds to negative \(t\), \(y_{\rm p}(t;r_{\rm p})\) and \(y(r,t;r_{\rm p})\). Figure 3 shows foliation curves obtained with the two choices of \(r_{\rm p}\) in (34), (35). The curves run over the whole domain \(0<r<r_{\rm max}\) and approach the same limit (dashed, black) when \(t\to 0\). Figure 4 shows a corresponding logarithmic plot. The calculation of the metric needs derivatives of \(y(r,t)\). Spatial derivatives can be expressed as functions of \(y\) without derivatives using the first equation in (33), repeatedly as needed, for example \[y^{\prime\prime}=\partial_{r}P(r,y)+\partial_{y}P(r,y)\,P(r,y)\,. \tag{36}\] Time derivatives of \(y\) were calculated using nearby foliations at \(t\pm\epsilon\) and \[\dot{y}(r,t) \simeq [y(r,t+\epsilon)-y(r,t-\epsilon)]/(2\epsilon)\,, \tag{37}\] \[\ddot{y}(r,t) \simeq [y(r,t+\epsilon)+y(r,t-\epsilon)-2\,y(r,t)]/\epsilon^{2}\,,\] with small \(\epsilon\) typically of order \(t/10\). (Evaluating \(y\) also at \(t\pm 2\epsilon\) one can approximate \(\ddot{y}\) and also improve the above approximations. The first derivative \(\dot{y}\) is needed for \(g_{tt}\), but it turns out that the time derivatives cancel out of components of the Einstein tensor.) Figure 5 shows \(1/g_{rr}(r,t)\) obtained with the help of (19). As \(t\) approaches zero an envelope develops, which is represented by the dashed black curves. Figure 6 shows a closeup. The envelope is \(F(r)\) in the exterior region \(r>h\) and switches to \(1-r^{2}/h^{2}\) in the interior. In the exterior this can be understood from taking the limit \(y\to 0\) and \(y^{\prime}\to 0\) in the basic form (17) for \(g_{rr}\). In the interior the limit \(y\to\pm\sqrt{h^{2}-r^{2}}\) leads to \(F(h)\) in the denominator in (17) and one has to heed the fact that \(F(h)=0\) (cf. section II.2). 
The time component of the metric, \(g_{tt}\), calculated from (15) differs very much from \(1/g_{rr}\), as shown in figures 7 and 8; note the vertical scale in the latter. The non-invariance of \(g_{tt}\) under transformations of the time variable, \(t\to\bar{t}(t)\), has drastic effects here. For the calculation of the components of the Einstein tensor the expressions given in Exercise 14.16 of [10] can be used; transformed to the Euclidean case they are recorded in appendix A. Figure 9 shows \(G^{t}_{\ t}\) with the same conventions as in figures 3 and following. It illustrates that \(r_{\rm p}=0\) and \(r_{\rm p}=2h\) give indeed the same \(G^{t}{}_{t}\) curves, as expected from the scalar nature of \(G^{t}{}_{t}(r,t)\) under transformations \(t\to\bar{t}(t)\). An envelope develops as \(t\to 0\) which has a discontinuous jump at \(r=h\). Figure 10 shows similar phenomena in \(G^{r}{}_{r}\); for \(G^{\theta}{}_{\theta}\) in figure 11 the envelope is continuous (\(G^{\phi}{}_{\phi}=G^{\theta}{}_{\theta}\)). The Einstein tensor has an off-diagonal component \(G^{t}{}_{r}\), shown in figure 12, which vanishes in the limit \(t\to 0\) for \(r\neq h\) and, as argued in appendix B, also at \(r=h\) when interpreted in the distributional sense. The scalar curvature \(R\) provides a check on the calculations. The trace of the Einstein tensor is \(-R\). Calculating \(R=6(-a^{\prime\prime}/a-a^{\prime 2}/a^{2}+1/a^{2})\) from the metric (1), which is a function of \(\eta^{2}\), using \(\eta^{2}(a)\) in (29) with \(a=\sqrt{r^{2}+y^{2}}\) and the numerically calculated \(y(r,t)\) to transform it to \((r,t)\) variables, we can compare it with \(-G^{\mu}{}_{\mu}(r,t)\). The two \(R\) match accurately with a point-wise precision of order \(10^{-14}\) % (using Mathematica with default working conditions). This is surprising, since one expects from (37) that the time derivatives of \(y\) will have only order percent accuracy. The reason is that \(\dot{y}\) and \(\ddot{y}\) cancel out of the Einstein tensor, cf. appendix A. The shape of \(G^{\mu}{}_{\mu}\) is similar to that of \(G^{\theta}{}_{\theta}\). Figure 4: Logarithmic plot corresponding to the positive \(y\) part of figure 3. Figure 5: Plot of \(1/g_{rr}\). The dashed (black) curves represent \(F(r)\) in the exterior and \(1-r^{2}/h^{2}\) in the interior. Figure 6: Closeup of figure 5. The envelope covers visually the \(1/g_{rr}\) curves in the exterior region; it is zero at \(r=h\). Figure 8: Log-plot of \(g_{tt}(r,t)\). The lower (red) curves correspond again to \(r_{\rm p}=0\), the upper (blue) curves to \(r_{\rm p}=2h\). ### Singular transformation implementing \(g_{tt}=g^{rr}\) at \(t=0\) The numerical integration of \(y^{\prime}=P\) in section II.1 yielded diagonal metrics with spatial inverse components \(1/g_{rr}=g^{rr}\) that approached a robust envelope when \(t\to 0\). On the contrary, the magnitude of the component \(g_{tt}\) was highly sensitive to differing choices of boundary conditions (the two pole choices) and the shape of \(g_{tt}(r,t)\) differed from \(g^{rr}(r,t)\). In this section we seek to obtain \(g_{tt}=g^{rr}\) in the limit \(t\to 0\). We give up diagonality of the metric for general times but seek to recover it at zero time. Consider erecting a pole as in the previous section but here at every \(r\in(0,r_{\rm max})\). We wish to solve \(\dot{y}=\sqrt{F}\) analytically at small times and try \(y(r,t)=y_{\rm p}(t;r)\). It is helpful to take a brief look at the solution of this equation in case of the \(\cosh\)-model, \[a(\eta)=h\cosh(\eta/r_{0})\,, \tag{38}\] which is simpler to solve than the rat-model. (This model fitted the simulation data less accurately and failed at smaller Newton couplings [1]. It approaches the exponential form of hyperbolic space at large \(\eta\), but here \(r_{0}\) is meant to parametrize primarily the small to intermediate distance region. In particular, \(1/r_{0}^{2}\) may be thought to represent \(2(p-q)\) of the rat-model to \({\cal O}(\eta^{2})\).) With \(a^{\prime}(\eta)^{2}=(h^{2}/r_{0}^{2})(\cosh^{2}(\eta/r_{0})-1)\) the function \(F\) in (14) turns out as \[F(\sqrt{r^{2}+y^{2}})=(r^{2}+y^{2}-h^{2})/r_{0}^{2}\,. \tag{39}\] The solution of (22\({}^{+}\)) with initial condition (32)\(|_{r_{p}\to r}\) is \[y(r,t) = \pm\sqrt{h^{2}-r^{2}}\,\cosh(t/r_{0})\,,\;0<r<h\,,\;t\stackrel{{>}}{{<}}0\,, \tag{40}\] \[= \sqrt{r^{2}-h^{2}}\,\sinh(t/r_{0})\,,\qquad r>h\,.\] At \(t=0\) this solution satisfies \(y^{\prime}=P\) (trivially in the exterior as \(0=0\)), and the resulting metric becomes indeed diagonal, \(g_{rt}=0\), with \[g_{tt}(r,0)=1/g_{rr}(r,0) = 1-r^{2}/h^{2}\,,\quad r<h\,, \tag{41}\] \[= (r^{2}-h^{2})/r_{0}^{2}=F(r)\,,\quad r>h\,.\] Generalizing, in the exterior, consider a factorized form as suggested by (40), \[y(r,t)=u(r)\,t\,. \tag{42}\] Equation (16) shows that it leads to a vanishing off-diagonal component at \(t=0\), \(g_{rt}(r,0)=0\), independent of \(u(r)\). (The equation \(y^{\prime}=P\) is again solved trivially as \(0=0\).) Furthermore (17) gives \[g_{rr}(r,t)=\frac{1}{F(r)}+{\cal O}(t^{2})\,, \tag{43}\] also independent of \(u(r)\), whereas (15) gives \[g_{tt}(r,t)=u(r)^{2}+{\cal O}(t^{2})\,. \tag{44}\] Hence, requiring \(g_{tt}(r,0)\,g_{rr}(r,0)=1\) we get \[u(r)=\sqrt{F(r)}\,. \tag{45}\] and the equation \(\dot{y}=\sqrt{F}\) is indeed satisfied at \(t=0\). In the interior the Ansatz is, \[y(r,t)=\pm\sqrt{h^{2}-r^{2}}\,(1+c_{t}\,t^{2})\,,\quad t\stackrel{{>}}{{<}}0\,, \tag{46}\] which has a time-dependence similar to that of the \(\cosh\)-model solution for small \(t\) (cf. first line in (40)). Figure 9: Component \(G^{t}{}_{t}\). Upper curves (blue) in \(r<h\) correspond to \(r_{\rm p}=2h\), the curves occasionally joining (red) correspond to \(r_{\rm p}=0\). The black-dashed lines represent (54), (56), the limit \(t\to 0\). Figure 11: Component \(G^{\theta}{}_{\theta}\). The black-dashed lines represent (54), (57). Figure 12: Component \(G^{t}{}_{r}\). It vanishes for \(r\neq h\) in the limit \(t\to 0\). Since \(F[h]=a^{\prime}(0)^{2}=0\) (cf. (14)), \(F\) vanishes in the interior at \(t=0\): \[F(\sqrt{r^{2}+y(r,0)^{2}})=F[h]=0\,. \tag{47}\] Its expansion in \(t\) starts out as \[F(\sqrt{r^{2}+y(r,t)^{2}})=(h-r^{2}/h)\,F^{\prime}(h)\,c_{t}\,t^{2}+{\cal O}(t^{4})\,. \tag{48}\] The equation \(y^{\prime}=P\) is satisfied at \(t=0\). When \(t\to 0\), the off-diagonal part of the metric \(g_{rt}\) in (16) contains \(1/F\) which blows up, and it contains \(\dot{y}\) which vanishes. Working out the details we find \[g_{rt}=-2r\left(1+\frac{2}{hF^{\prime}(h)}\right)\,c_{t}\,t+{\cal O}(t^{3})\,. \tag{49}\] Hence, the interior metric becomes also diagonal at time zero. Furthermore, after some algebra: \[g_{rr}(r,t) = \frac{h^{2}}{h^{2}-r^{2}}+{\cal O}(t^{2})\,, \tag{50}\] \[g_{tt}(r,t) = \frac{h^{2}-r^{2}}{h^{2}}\,\frac{2h\,c_{t}}{F^{\prime}(h)}+{\cal O}(t^{2})\,. 
\tag{51}\] Requiring \(g_{tt}(r,0)\,g_{rr}(r,0)=1\) determines \(c_{t}\), \[c_{t}=\frac{F^{\prime}(h)}{2h}\,, \tag{52}\] and with this choice the equation \(\dot{y}=\sqrt{F}\) is indeed solved to leading order in \(t\). For the rat-model this becomes \[c_{t}=p-q\,, \tag{53}\] which was used in section II.1 for the case \(r_{\rm p}=0\). Examples of \(g_{tt}(r,0)\) following from (44), (45), (50) - (52) have already been shown in figure 2. The Einstein tensor \(G^{\mu}_{\ \nu}\) contains time derivatives of the metric, which is non-diagonal at non-zero \(t\) and a little complicated. For the daunting task of its calculation the software OGRe [11] is very helpful. The results in the limit \(t\to 0\) are fairly simple and already given in [1]. For completeness we list the components again in the simplified notation (28); \(G^{\mu}_{\ \nu}(r,0)\) is diagonal and in the exterior region, \(r>h\): \[G^{t}_{\ t}=G^{\theta}_{\ \theta}=G^{\phi}_{\ \phi} = -\frac{1}{r^{2}}\,\left[1+\frac{4(hp-qr)^{2}(h^{2}p-2h(p+2q)\,r+5q\,r^{2})}{h^{2}(p-q)^{2}}\right]\,, \tag{54}\] \[G^{r}_{\ r} = -\frac{3}{r^{2}}\left[1+\frac{4(h-r)(hp-qr)^{3}}{h^{2}(p-q)^{2}}\right]\,. \tag{55}\] In the interior region \(0<r<h\): \[G^{t}_{\ t} = -\frac{3}{h^{2}}\,, \tag{56}\] \[G^{r}_{\ r}=G^{\phi}_{\ \phi}=G^{\theta}_{\ \theta} = -\frac{1}{h^{2}}+4(p-q)\,. \tag{57}\] These limit forms are shown in figures 9 - 12 as the black-dashed curves. Note that \(G^{t}_{\ t}\) and \(G^{r}_{\ r}\) are discontinuous at \(r=h\), but not \(G^{\theta}_{\ \theta}\). In [1] it was conjectured that the discontinuity at \(r=h\) in the first derivative of \(g_{tt}(r,0)\) would be accompanied by a delta-shell \(\propto\delta(r-h)\) in the transverse pressure \(\propto G^{\theta}_{\ \theta}\), since the latter contains two derivatives of the metric with respect to \(r\). If so, then one expects some regulated version of the Dirac distribution blowing up at small diminishing \(t\). However, there is no sign of this in figure 11. Indeed, as shown in appendices A and B, such developing singular behavior is canceled exactly by a similar contribution with two time derivatives of the metric. After all cancelations are taken into account the Einstein tensor can be expressed in a form that depends explicitly only on \(r\), \(y\), \(F\) and its first derivative \(F^{\prime}\), but not anymore on derivatives with respect to \(r\) and \(t\), cf. the corresponding expressions in appendix A. These formulas apply equally well to the previous section as to this one, and it seems remarkable that the envelope in figure 11 is described correctly by the consequences of (42) and (46). ## III Transforming the Hayward model The time-independence of the static version of Hayward's model [4] allows a simple transformation to imaginary time, resulting in the Euclidean model: \[ds^{2} = f(r)\,dt^{2}+\frac{1}{f(r)}\,dr^{2}+r^{2}\,d\Omega_{2}^{2}\,,\] \[f(r) = 1-\frac{2mr^{2}}{2ml^{2}+r^{3}}\,. \tag{58}\] The two parameters \(m\) and \(l\) are positive and have the dimension of length. For small \(r\) the metric is like EdS, \(f(r)\simeq 1-r^{2}/l^{2}\), and for large \(r\) it is 'Newton-like' \(f(r)\simeq 1-2m/r\). Figure 13 shows \(f(r)\) for several values of \(m\) at fixed \(l\). With sufficiently large \(m\) there are two real zeros at \(r_{+}>r_{-}>0\), which coincide, \(r_{+}=r_{-}=r_{*}\), when \(m\) is reduced to a critical value \(m_{*}\). Then also \(f^{\prime}(r_{*})=0\) which together with \(f(r_{*})=0\) determines \[m_{*}=\frac{3\sqrt{3}}{4}\,l\,,\quad r_{*}=\sqrt{3}\,l\,. 
\tag{59}\] The aim here is to determine the scale factor \(a(\eta)\) in an \(SO(4)\) invariant metric of the form (1) such that a transformation to spherical coordinates (3) gives the metric (58) in the limit \(t\to 0\). For this purpose the results of the previous sections which led to \(g^{rr}(r,0)=F(r)\) can be used (cf. (42) - (45)). Thus \[F(r)=f(r)\,, \tag{60}\] and we can find \(a(\eta)\) from the definition of \(F\) in (14), by solving the differential equation \[a^{\prime}(\eta)=\sqrt{f(a(\eta))}\,. \tag{61}\] Since the Hayward model describes a regular black hole it is natural to fix the integration constant by requiring regularity at the origin, \[a(0)=0\,. \tag{62}\] To avoid a conical singularity also unit slope \(a^{\prime}(0)=1\) is required, which is satisfied since \(f(0)=1\). The small and large \(\eta\) behaviors of the solution of (61), (62) are given by \[a(\eta) \approx l\,\sin(\eta/l),\qquad\qquad\qquad\quad a(\eta)^{3}\ll 2ml^{2}\,, \tag{63}\] \[\approx \eta-m\ln(\eta/m)+{\rm const},\quad a(\eta)^{3}\gg 2ml^{2}\,, \tag{64}\] When \(m\geq m_{*}\), the solution of (61), (62) has a fixed point at the first zero of \(f(a)\), \[a(\eta)\to r_{-}\,,\quad\eta\to\infty,\quad(m\geq m_{*})\,. \tag{65}\] For \(m<m_{*}\) but close to \(m_{*}\) the solution can have a flat region \(a(\eta)\approx r_{*}\) before the asymptotic behavior (64) sets in. Hence \(r_{*}\) is similar to \(h\) in the rat-model. However, the surface gravities at \(r_{*}\) vanish, \(\kappa_{\pm}=\lim_{r\to r_{+}^{\pm}}f^{\prime}(r)/2=0\). A match (by hand and eye) of \((1/c_{G})\,a(\eta)\) to the data in figure 1 is shown in Figure 14. Starting from the origin the curve reaches the data at much larger distances (by a factor of about 3) than in figure 1. This suggests that such fits may be more appropriate in computations reaching smaller lattice spacings (_in physical units_). ## IV Conclusions We studied the effect of two types of coordinate transformations, a regular one producing a diagonal metric with \(g_{tt}(r,t)\neq g^{rr}(r,t)\) at all times and a singular one [1] producing a metric which becomes diagonal only in the limit \(t\to 0\) with \(g_{tt}(r,0)=g^{rr}(r,0)\). At time zero the component \(g^{rr}(r,0)\) is the same for both transformations. As shown in figure 2, it is zero at \(r=h\), as for a black hole with gravitational radius \(h\). The singular transformation has a singularity at this point, which is akin to the transformation between the Schwarzschild-Droste black hole and its Kruskal-Szekeres extension [12; 13; 14; 15; 16]. The numerical study of the regular transformation showed that the left cuspy curve in figure 2 is the limiting envelope of smooth curves representing \(g^{rr}(r,t)\) (cf. figure 6). The time component \(g_{tt}(r,t)\) differed from \(g^{rr}(r,t)\), in shape and, depending on boundary conditions specifying the implementation of the transformation, also by many orders of magnitude. These changes in boundary conditions correspond to transformations \(t\to\bar{t}(t)\) under which \(y(r,t)\) and \(g^{rr}(r,t)\) behave as scalar fields, but not \(g_{tt}(r,t)\), which may suffer large multiplicative factors. Diagonal mixed components of the Einstein tensor, \(G^{\mu}_{\ \nu}(r,t)\), \(\mu=\nu\), also transform as a scalar field under \(t\to\bar{t}(t)\) and they approach the same zero time limit as with the singular transformation.
Remarkably, the off-diagonal \(G^{t}_{\ \nu}(r,t)\) also does not suffer from the time ambiguity; it was found to vanish when \(t\to 0\) and arguments were given that this holds also in the distributional sense. In [1] a shell-like singular distribution was conjectured in the transverse pressure \(\propto G^{\theta}{}_{\theta}(r,0)\) at \(r=h\) as a consequence of the discontinuous first derivative \(2\kappa_{\pm}=\lim_{r\to h^{\pm}}\partial_{r}g_{tt}(r,0)\). However, with the regular transformation such a shell is absent due to a remarkable cancellation between singular contributions to \(G^{\theta}{}_{\theta}\) (cf. appendix B). One also notes that the local minimum in \(g^{rr}(r,t)\) approaches \(r=h\) when \(t\to 0\) (figure 6), which implies that the derivative at the minimum remains zero in the limit. A gravastar-like shell [8] is not supported in this study. The inversely transformed Hayward model and its qualitative match to the EDT data shed new light on the interpretation of the EDT results. Figure 14: Data of \(a_{G}(\eta)\) (blue) as in figure 1 matched by \((1/c_{G})\,a(\eta)\), with \(1/c_{G}=0.54\) and \(a(\eta)\) the solution of (61), (62) with \(m=0.96\,m_{*}\), \(l=3\). ## Appendix A Einstein tensor for a time-dependent diagonal metric In equation (14.48) of [10], a time-dependent rotation-invariant diagonal Lorentzian metric depending on coordinates \(T\), \(R\), \(\theta\) and \(\phi\) is defined by its line element \[ds^{2}=-e^{2\Phi}\,dT^{2}+e^{2\Lambda}\,dR^{2}+r^{2}(d\theta^{2}+\sin(\theta)^{2}d\phi^{2})\,, \tag{15}\] and its Einstein tensor is given in (14.48), (14.51) and (14.52) of this book. Transforming coordinates \(T=-i\,t\) and choosing \(R=r\) we obtain the Einstein tensor for the Euclidean metric. The transformed functions of [10] are given by \[\Phi=\frac{1}{2}\,\ln g_{tt}\,,\quad\Lambda=\frac{1}{2}\,\ln g_{rr}\,, \tag{16}\] and \[E_{1} = e^{-2\Phi}(\ddot{\Lambda}+\dot{\Lambda}^{2}-\dot{\Lambda}\dot{\Phi})\,, \tag{17}\] \[E_{2} = -e^{-2\Lambda}(\Phi^{\prime\prime}+\Phi^{\prime 2}-\Phi^{\prime}\Lambda^{\prime})\,, \tag{18}\] \[E = E_{1}+E_{2}\,,\quad\bar{E}=-\frac{1}{r}\,e^{-2\Lambda}\;\Phi^{\prime}\,, \tag{19}\] \[F_{\mbox{\tiny MTW}} = \frac{1}{r^{2}}\,(1-e^{-2\Lambda})\,,\quad\bar{F}_{\mbox{\tiny MTW}}=\frac{1}{r}\,e^{-2\Lambda}\;\Lambda^{\prime}\,, \tag{20}\] \[H = -i\frac{1}{r}\,e^{-\Phi-\Lambda}\;\dot{\Lambda}\,, \tag{21}\] in terms of which the Euclidean components of the Einstein tensor are given by \[G^{t}{}_{t} = -F_{\mbox{\tiny MTW}}-2\bar{F}_{\mbox{\tiny MTW}}\,,\quad G^{r}{}_{r}=-2\bar{E}-F_{\mbox{\tiny MTW}}\,,\] \[G^{\theta}{}_{\theta} = G^{\phi}{}_{\phi}=-E-\bar{E}-\bar{F}_{\mbox{\tiny MTW}}\,,\quad G^{t}{}_{r}=2\,iH\,. \tag{22}\] Note that \(E_{1}\) contains two time derivatives of \(g_{rr}\), \(E_{2}\) two spatial derivatives of \(g_{tt}\), and that their sum \(E_{1}+E_{2}\) enters in \(G^{\theta}{}_{\theta}\). It is convenient to treat \(F(a)\) introduced in (14) as a function of \(a^{2}=r^{2}+y^{2}\), writing \[F(\sqrt{r^{2}+y^{2}})=\tilde{F}(r^{2}+y^{2})\,. \tag{23}\] The functions \(E\), \(\ldots\), \(H\) depend on derivatives of the metric components \[f\equiv g_{tt}\,,\quad j\equiv\frac{1}{g_{rr}} \tag{24}\] (cf. (15), (17)). Indicating their explicit dependence on variables these are given by \[j(r,y) = \frac{r^{2}\tilde{F}(r^{2}+y^{2})+y^{2}}{r^{2}+y^{2}}\,, \tag{25}\] \[f(r,y,\dot{y}) = \frac{\dot{y}^{2}}{\tilde{F}(r^{2}+y^{2})}\,j(r,y)\,. 
\tag{26}\] Since \(y\) is a solution of \(\partial_{r}y=P\), \[P(r,y)=ry\frac{-1+\tilde{F}(r^{2}+y^{2})}{y^{2}+r^{2}\tilde{F}(r^{2}+y^{2})}\,, \tag{27}\] we can replace spatial derivatives \(\partial_{r}y\) by \(P\): \[\frac{d}{dr}\,j = \frac{\partial j}{\partial r}+\frac{\partial j}{\partial y}\,P\,, \tag{28}\] \[\frac{d}{dr}\,f = \frac{\partial f}{\partial r}+\frac{\partial f}{\partial y}\,P+ \frac{\partial f}{\partial\ddot{y}}\,\partial_{r}\dot{y}\,,\] (29) \[\partial_{r}\,\dot{y} = \partial_{r}\partial_{t}y=\partial_{t}\partial_{r}y=\frac{d}{dt}\, P=\frac{\partial P}{\partial y}\,\dot{y}\,, \tag{30}\] and recursively for higher derivatives. In principle this gives \(E_{1}\), \(E_{2}\), \(\ldots\), \(H\) as functions depending explicitly only on \(r\), \(y\), \(\ddot{y}\), \(\ddot{y}\) (\(\ddot{y}\) is not needed since double time derivatives occur only on \(\Lambda\) which does not contain \(\dot{y}\)). However the explicit dependence on \(\dot{y}\) and \(\ddot{y}\) cancels out. This should happen in the diagonal components of the Einstein tensor which transform as a scalar field under time transformations \(\tilde{t}(t)\) and \(\dot{y}\) and \(\ddot{y}\) are not such scalars. But it happens already in the individual \(E_{1}\), \(E_{2}\), \(\ldots\), \(H\). Further details of \(E_{1}\), \(\ldots\), \(H\) are in appendix B. There is a near cancellation between \(E_{1}\) and \(E_{2}\), hence also in their sum \(E\) contributing to \(G^{\theta}{}_{\theta}\). This is relevant because \(E_{1}\) and \(E_{2}\) separately are increasingly strongly peaked as \(t\to 0\), whereas \(E\) clearly reaches a finite limiting form. The Dirac distribution \(\delta(r-h)\), which was conjectured in [1] to be present in \(G^{\theta}_{\theta}\) at \(t=0\) as a result of the double spatial derivative in \(E_{2}\), is not present in \(E\). We were not able to prove that \(\delta(r-h)\) emerges in \(E_{1}\) and \(E_{2}\) separately as \(t\to 0\), although there is modest numerical evidence for a finite limit in the distributional sense. The components of \(G^{\mu}{}_{\nu}\) listed in (22) become: \[G^{t}{}_{t} = \frac{(\tilde{F}-1)(3y^{2}+r^{2}\tilde{F})}{(y^{2}+r^{2})(y^{2}+r^ {2}\tilde{F})}+\frac{2r^{2}\tilde{F}\tilde{F}^{\prime}}{y^{2}+r^{2}\tilde{F}}\,, \tag{31}\] \[G^{r}{}_{r} = \frac{(\tilde{F}-1)(y^{2}+3r^{2}\tilde{F})}{(y^{2}+r^{2})(y^{2}+r^{ 2}\tilde{F})}+\frac{2y^{2}\tilde{F}^{\prime}}{y^{2}+r^{2}\tilde{F}}\,,\] (32) \[G^{\theta}{}_{\theta} = G^{\phi}{}_{\phi}=\frac{\tilde{F}-1}{y^{2}+r^{2}}+2\tilde{F}^{ \prime}\,,\] (33) \[G^{t}{}_{r} = 2ry\sqrt{\tilde{F}}\left[\frac{\tilde{F}-1}{(y^{2}+r^{2})(y^{2}+r^ {2}\tilde{F})}-\frac{\tilde{F}^{\prime}}{y^{2}+r^{2}\tilde{F}}\right]\,. \tag{34}\] Here \(\tilde{F}^{\prime}\) is the derivative of \(\tilde{F}\) with respect to its argument: \(\tilde{F}^{\prime}(r^{2}+y^{2})=d\tilde{F}(x)/dx|x\to r^{2}+y^{2}\) (and similar for \(\tilde{F}^{\prime\prime}\) which appears in \(E_{1}\) and \(E_{2}\)). The trace of the Einstein tensor simplifies to \[G^{\mu}_{\ \mu}=\frac{6(-1+\tilde{F})}{y^{2}+r^{2}}+6\,\tilde{F}^{\prime}\,. \tag{10}\] Following the reasoning in section II.2, the limit forms for \(t\to 0\) in that section follow here easily - without encountering singularities - by letting \(y\to 0\) in the exterior region and \(\tilde{F}\to 0\) & \(y\to\sqrt{h^{2}-r^{2}}\) in the interior region. The off-diagonal component \(G^{t}_{\ r}\) vanishes in the limit. The possibility of a remaining finite distribution at \(r=h\) is investigated in appendix B. 
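These expressions can be checked mechanically; the following sketch (treating \(\tilde{F}\) and \(\tilde{F}^{\prime}\) as independent symbols, which is all the identity requires) verifies with SymPy that the components listed above indeed sum to the trace formula quoted at the end of this appendix:

```python
import sympy as sp

r, y, F, Fp = sp.symbols('r y F Fp', positive=True)   # F stands for \tilde F, Fp for \tilde F'

# Diagonal components of the Einstein tensor as listed above (Eqs. (31)-(33)).
G_tt = (F - 1)*(3*y**2 + r**2*F)/((y**2 + r**2)*(y**2 + r**2*F)) + 2*r**2*F*Fp/(y**2 + r**2*F)
G_rr = (F - 1)*(y**2 + 3*r**2*F)/((y**2 + r**2)*(y**2 + r**2*F)) + 2*y**2*Fp/(y**2 + r**2*F)
G_thth = (F - 1)/(y**2 + r**2) + 2*Fp                  # = G^phi_phi

trace = sp.simplify(G_tt + G_rr + 2*G_thth)
expected = 6*(F - 1)/(y**2 + r**2) + 6*Fp              # trace formula quoted above
print(sp.simplify(trace - expected))                   # prints 0
```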
## Appendix B Details of \(E_{1}\), ..., \(H\)

After the cancelling-out of \(\dot{y}\) and \(\ddot{y}\), \(E_{1}\) and \(E_{2}\) are given by
\[E_{1} = e_{10}+e_{11}\,\tilde{F}^{\prime}+e_{12}\,(\tilde{F}^{\prime})^{2}\,,\]
\[E_{2} = e_{20}+e_{21}\,\tilde{F}^{\prime}+e_{22}\,(\tilde{F}^{\prime})^{2}\,,\]
\[e_{10} = \frac{1}{(y^{2}+\tilde{F}r^{2})^{3}}\left[r^{2}(3y^{2}(-1+\tilde{F})\tilde{F}-(-1+\tilde{F})\tilde{F}^{2}r^{2})+\tilde{F}^{\prime\prime}r^{2}(2y^{6}\tilde{F}+2y^{4}\tilde{F}(1+\tilde{F})r^{2}+2y^{2}\tilde{F}^{2}r^{4})\right]\,,\]
\[e_{11} = \frac{1}{(y^{2}+\tilde{F}r^{2})^{3}}\left[r^{2}(y^{4}(1-4\tilde{F})+\tilde{F}^{2}r^{4}+y^{2}\tilde{F}(-6r^{2}+4\tilde{F}r^{2}))\right]\,,\]
\[e_{12} = \frac{1}{(y^{2}+\tilde{F}r^{2})^{3}}\left[r^{2}(y^{6}+y^{4}(1-3\tilde{F})r^{2}-3y^{2}\tilde{F}r^{4})\right]\,,\]
\[e_{21} = \frac{1}{(y^{2}+\tilde{F}r^{2})^{3}}\left[(-y^{6}+y^{4}(-1+\tilde{F})r^{2}+y^{2}\tilde{F}r^{2}(6r^{2}-7\tilde{F}r^{2})-\tilde{F}^{2}r^{4}(r^{2}+\tilde{F}r^{2}))\right]\,,\]
\[e_{20} = -e_{10}\,,\quad e_{22}=-e_{12}\,,\quad e_{11}+e_{21}=-1\,,\]
\[E = E_{1}+E_{2}=-\tilde{F}^{\prime}\,. \tag{11}\]
The expressions for \(\bar{E}\), ..., \(H\) come out as:
\[\bar{E} = -\frac{(-1+\tilde{F})\tilde{F}r^{2}}{(y^{2}+r^{2})(y^{2}+\tilde{F}r^{2})}-\frac{y^{2}\tilde{F}^{\prime}}{y^{2}+\tilde{F}r^{2}}\,,\]
\[F_{\rm MTW} = \frac{1-\tilde{F}}{y^{2}+r^{2}}\,,\]
\[\bar{F}_{\rm MTW} = -\frac{y^{2}(-1+\tilde{F})}{(y^{2}+r^{2})(y^{2}+\tilde{F}r^{2})}-\frac{\tilde{F}\tilde{F}^{\prime}r^{2}}{y^{2}+\tilde{F}r^{2}}\,,\]
\[iH = ry\sqrt{\tilde{F}}\left[\frac{\tilde{F}-1}{(y^{2}+r^{2})(y^{2}+\tilde{F}r^{2})}-\frac{\tilde{F}^{\prime}}{y^{2}+\tilde{F}r^{2}}\right]\,. \tag{12}\]
For the rat-model, \(\tilde{F}\) and \(\tilde{F}^{\prime}\) can be written in the form (cf. (30))
\[\tilde{F} = c(\sqrt{x}-h)(\sqrt{x}-\bar{h})^{3}\,,\quad x=r^{2}+y^{2}\,, \tag{13}\]
\[\tilde{F}^{\prime} = \frac{c(4\sqrt{x}-3h-\bar{h})(\sqrt{x}-\bar{h})^{2}}{2\sqrt{x}}\,. \tag{14}\]
Figure 15 shows a plot of \(E_{1}\) corresponding to five (blue) curves of \(y(r,t)\) in figure 3 with \(r_{\rm p}=2h\). A similar plot for \(-E_{2}(r,t)\) is indistinguishable to the eye, since the sum \(E_{1}+E_{2}\) is down in magnitude by a factor of about \(10^{4}\); note the vertical scale in figure 16, which displays \(E\). The integral \(I=\int_{3.2}^{4.2}dr\,E_{1}(r,t)\) was monitored to check whether a finite distribution (such as a Dirac function \(\delta(r-h)\)) develops in \(E_{1}\) (and also in \(E_{2}\), as follows from the cancellation) in the limit \(t\to 0\). Considered as a function of \(\ln(t)\), \(I\) is very well fitted by the form \(I=\alpha+\beta\ln(t)\), which suggests a logarithmic divergence in the limit. However, the dependence of \(I\) on a time \(t\) introduced at \(r_{\rm p}=2h\), 'far away' from \(r=h\), involves a coordinate peculiarity of this \(t\) (cf. figure 4). Testing as a function of the foliation, as labeled by the value of \(y\) at \(r=h\), i.e., \(y_{t}=y(h,t)\), may be a better idea. The values of \(I\) are well fitted by the rational-function form \(I=(\alpha+\beta\,y_{t})/(1+\gamma\,y_{t})\), which has a built-in finite limit as \(y_{t}\to 0\). We take this as mild support for a finite limit distribution \(E_{1,2}\) at \(r=h\). However, since only the regular sum \(E\) enters in \(G^{\theta}{}_{\theta}\), the finiteness of \(E_{1,2}\) is not of physical interest. The function \(2iH=G^{t}{}_{r}\) has been plotted in figure 12.
It vanishes when \(t\to 0\) for \(r\neq h\) (as also mentioned in appendix A). To investigate the possibility of a finite remaining distribution at \(r=h\), consider \(I_{H}=\int_{0}^{2h}dr\,iH\). It turns out to be a non-linear function of \(\ln(t)\) but an almost linear one as a function of \(y_{t}=y(h,t)\); its four smallest values can be fitted by the form \(\beta\,y_{t}+\gamma\,y_{t}^{2}\), as shown in figure 17. Adding a constant \(\alpha\) to the fit function leads to a rather small value \(\alpha=0.00075\). We assume that \(H\) vanishes also as a distribution when \(t\to 0\).
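The fit forms used above (a rational function with a built-in finite limit for \(I\), and \(\beta\,y_{t}+\gamma\,y_{t}^{2}\) for \(I_{H}\)) are ordinary least-squares fits; the sketch below shows how such fits can be reproduced with scipy. The numerical values are synthetic placeholders, not the data behind figures 15-17, so only the fitting procedure itself is illustrative.

```python
# Illustrative sketch (not the paper's analysis code): fitting I(y_t) with the
# rational form (alpha + beta*y_t)/(1 + gamma*y_t), which has a built-in finite
# limit as y_t -> 0, and I_H(y_t) with the form beta*y_t + gamma*y_t**2.
import numpy as np
from scipy.optimize import curve_fit

def rational(yt, alpha, beta, gamma):
    return (alpha + beta*yt) / (1.0 + gamma*yt)

def quadratic_through_zero(yt, beta, gamma):
    return beta*yt + gamma*yt**2

rng = np.random.default_rng(0)
yt = np.linspace(0.05, 0.6, 8)          # hypothetical foliation labels y_t = y(h,t)
I_data  = rational(yt, 0.8, 2.0, 1.5) + 0.01*rng.normal(size=yt.size)   # synthetic
IH_data = quadratic_through_zero(yt, 0.4, -0.2) + 0.005*rng.normal(size=yt.size)

p_I, _  = curve_fit(rational, yt, I_data, p0=[1.0, 1.0, 1.0])
p_IH, _ = curve_fit(quadratic_through_zero, yt, IH_data, p0=[1.0, 0.0])

print("I fit   (alpha, beta, gamma):", p_I)
print("I(y_t -> 0) limit = alpha   :", p_I[0])    # finite limit as the foliation shrinks
print("I_H fit (beta, gamma)       :", p_IH)      # vanishes as y_t -> 0 by construction
```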
2301.12958
Programmable phase behavior in fluids with designable interactions
We introduce a method for solving the "inverse" phase equilibria problem: How should the interactions among a collection of molecular species be designed in order to achieve a target phase diagram? Using techniques from convex optimization theory, we show how to solve this problem for phase diagrams containing a large number of components and many coexisting phases with prescribed compositions. We apply our approach to commonly used mean-field models of multicomponent fluids and then use molecular simulations to verify that the designed interactions result in the target phase diagrams. Our approach enables the rational design of "programmable" fluids, such as biopolymer and colloidal mixtures, with complex phase behavior.
Fan Chen, William M. Jacobs
2023-01-30T14:58:00Z
http://arxiv.org/abs/2301.12958v2
# Programmable phase behavior in fluids with designable interactions ###### Abstract Intracellular fluids can self-organize into phase-separated condensates as a result of evolved biomolecular interactions. However, the extent to which the equilibrium phase behavior of a complex fluid can be controlled by tuning such interactions remains unclear. Here we apply convex optimization to design fluids that demix into phases with prescribed compositions at thermodynamic equilibrium. We then show how multicomponent phase diagrams can differ qualitatively from those of simple fluids. Our approach enables the rational design of multicomponent fluids with phase diagrams of arbitrary complexity. Intracellular mixtures of biopolymers can demix to form "biomolecular condensates" via the mechanism of liquid-liquid phase separation (LLPS) [1; 2; 3]. Since phase separation occurs spontaneously _in vitro_ using naturally occurring or engineered biopolymers, biomolecular LLPS is widely believed to be governed primarily by equilibrium thermodynamics [4; 5; 6; 7], even though nonequilibrium processes may affect the phase behavior _in vivo_[8; 9]. Although it is not surprising that heteropolymers can phase separate at high concentrations, it is remarkable that LLPS can establish coexisting condensates with distinct molecular compositions required for carrying out specific biological functions [10; 11]. Despite recent progress in modeling multicomponent phase separation [12; 13; 14; 15; 16; 17; 18; 19], the relationship between the "design space" of such specific interactions and the capacity for biological fluids to self-organize into chemically diverse droplets via LLPS remains poorly understood. Exploration of the design space for tunable LLPS is challenging due to the combinatorial complexity associated with multicomponent phase coexistence. In order to identify coexisting phases in a model multicomponent mixture, it is first necessary to locate all the candidate phases in a high-dimensional concentration space. This constitutes a search problem whose complexity scales exponentially with the dimension of the concentration space. As a result, any algorithm for predicting or designing multicomponent phase diagrams that relies on computing phase coexistence from a prescribed set of interactions must either be limited to mixtures with a small number of components or employ additional assumptions in the search problem. Such assumptions can qualitatively affect the predictions of the algorithm. This combinatorial complexity can be avoided by solving the _inverse problem_ of designing interactions to achieve a target phase diagram [20]. By instead specifying the compositions of the coexisting phases as design criteria, this approach eliminates the need to search for candidate phases. Furthermore, if the thermodynamic constraints on bulk phase coexistence can be cast as a _convex optimization problem_, then suitable interactions can be found using efficient algorithms [21]. Solving the inverse problem therefore allows us to associate an equilibrium phase diagram with a set of designed interactions, leading to a more complete picture of multicomponent phase behavior. Here we show that an inverse-problem approach can be applied to design equilibrium phase diagrams with arbitrary condensed-phase compositions. We first explain how this strategy can be applied to generic mean-field models with pairwise intermolecular interactions. 
We then show that this approach reveals unexpected features of multicomponent phase diagrams, which differ qualitatively from the intuitive phase behavior of simple fluids. Finally, we perform molecular simulations and free-energy calculations to demonstrate that our insights apply beyond mean-field theoretical models. We begin by assuming that the intermolecular interactions in a multicomponent solution are pairwise additive and that the elements of the symmetric interaction matrix, \(\mathbf{\epsilon}\), are independently tunable. With these assumptions, the vector of excess chemical potentials for all molecular species can be written in the form \[\vec{\mu}_{\rm ex}(\vec{\phi};\mathbf{\epsilon},\vec{v})=\vec{\mu}_{\rm v}(\vec{ \phi};\vec{v})+\mathbf{\epsilon}\vec{\phi}, \tag{1}\] where \(\vec{\phi}\) and \(\vec{v}\) represent the volume fractions and molecular volumes of the species, respectively, and \(\vec{\mu}_{\rm v}\) is independent of \(\mathbf{\epsilon}\). For simplicity, we assume that the thermal energy \(k_{\rm B}T=1\). Eq. (1) describes the mean-field Flory-Huggins [22] and van der Waals [23] models, as well as approximate field-theoretic treatments of sequence-dependent heteropolymer mixtures [24], with appropriate choices of \(\vec{\mu}_{\rm v}\). The osmotic pressure, \(P(\vec{\phi};\mathbf{\epsilon},\vec{v})\), which can be determined from Eq. (1) via the Gibbs-Duhem relation, is also linear with respect to \(\mathbf{\epsilon}\). Our objective is to find an \(N\times N\) interaction matrix, \(\mathbf{\epsilon}\), and an \(N\)-dimensional chemical potential vector, \(\vec{\mu}\), that lead to equilibrium phase coexistence among a dilute phase and \(K\) condensed phases in a solution with \(N\) solute species ("components"). The inverse design problem is defined by the target volume fractions of each of the condensed phases, \(\vec{\phi}^{(\alpha)}\), indexed by \(\alpha=1,\ldots,K\) (Fig. 1a). In general, each target condensed phase consists of \(M^{(\alpha)}\) "enriched" components, which comprise the majority of the volume fraction of phase \(\alpha\), and \(N-M^{(\alpha)}\) "depleted" components, which are found at negligible concentrations in phase \(\alpha\). Bulk phase coexistence occurs when all \(K+1\) phases have equal osmotic pressures and each molecular species has the same chemical potential in each of the \(K+1\) phases. Furthermore, all \(K+1\) phases must be stable with respect to concentration fluctuations, such that \(\partial\vec{\mu}(\vec{\phi})/\partial\vec{\phi}\) is positive definite. To find an \(\mathbf{\epsilon}\) and \(\vec{\mu}\) that satisfy these thermodynamic constraints, we perform a convex relaxation by making three minor approximations. First, since the depleted components in each condensed phase have an insignificant effect on the phase diagram, we write the volume-fraction constraint for each depleted component \(j\) in every phase as an inequality, such that \(\phi^{(\alpha)}_{j}<\phi^{(\alpha)}_{\rm depl}\equiv\phi^{(\alpha)}_{\rm T}/M^{ (\alpha)}(N-M^{(\alpha)})\), where \(\phi^{(\alpha)}_{\rm T}\equiv\sum_{i=1}^{N}\phi^{(\alpha)}_{i}\). Second, we assume that the contributions of the depleted components to \(\vec{\mu}_{\rm ex}\) in each condensed phase are negligible and can thus be ignored. 
Finally, we assume that the total volume fraction in the dilute phase, \(\phi^{(0)}_{\rm T}\), is very small, so that the osmotic pressure at coexistence is near zero; this approximation is valid far from a critical manifold when every component is enriched in at least one target condensed phase. Taken together, these conditions define a semidefinite program (SDP) that is convex with respect to \(\mathbf{\epsilon}\) and \(\vec{\mu}\) (Fig. 1b): \[\mu_{\rm id,\textit{i}}(\vec{\phi}^{(\alpha)};\vec{v})+\mu_{\rm ex,\textit{i}}(\vec{\phi}^{(\alpha)};\mathbf{\epsilon},\vec{v}) \geq\mu_{i}\;\forall i,\alpha \tag{2a}\] \[P(\vec{\phi}^{(\alpha)};\mathbf{\epsilon},\vec{v}) =0\;\forall\alpha\] (2b) \[\partial[\vec{\mu}_{\rm id}(\vec{\phi}^{(\alpha)};\vec{v})+\vec {\mu}_{\rm ex}(\vec{\phi}^{(\alpha)};\mathbf{\epsilon},\vec{v})]/\partial\vec{\phi} \succ\lambda_{\rm min}I\;\forall\alpha\] (2c) \[\phi^{(0)}_{\rm T}(\vec{\mu};\vec{v}) <\phi^{*}_{\rm T}(\vec{v}), \tag{2d}\] where \(\mu_{\rm id,\textit{i}}=v_{i}^{-1}\log\phi^{(\alpha)}_{i}\) for any component \(i\) that is enriched in the \(\alpha\) phase, \(\mu_{\rm id,\textit{i}}=v_{i}^{-1}\log\phi^{(\alpha)}_{\rm depl}\) for any component \(i\) that is depleted in the \(\alpha\) phase, and the equality(inequality) in Eq. (2a) applies to enriched(depleted) components. In Eq. (2c), the parameter \(\lambda_{\rm min}\geq 0\) places a lower bound on the smallest eigenvalue of the second-derivative matrix to guarantee thermodynamic stability. The final constraint, Eq. (2d), ensures that the volume fraction in the dilute phase, \(\phi^{(0)}_{\rm T}\), is less than the critical volume fraction, \(\phi^{*}_{\rm T}(\vec{v})\); this condition is independent of \(\mathbf{\epsilon}\) due to the zero-osmotic-pressure assumption. This program is straightforward to solve using modern convex optimization tools [25; 26]. Moreover, it is possible to prove whether this convex relaxation is infeasible, meaning that no solution \((\mathbf{\epsilon},\vec{\mu})\) exists for the target condensed-phase volume fractions \(\{\vec{\phi}^{(\alpha)}\}\). Because the approximations required to establish this convex relaxation are well controlled, we expect that there is a close correspondence between the feasible domain of \(\{\vec{\phi}^{(\alpha)}\}\) and the domain on which thermodynamic coexistence can be established. To confirm that the precise thermodynamic conditions for bulk-phase coexistence can be satisfied, we next perform a multicomponent generalization of the common-tangent construction. Starting from the SDP solution \((\mathbf{\epsilon},\vec{\mu})\), we adjust \(\vec{\mu}\) in order to fit a common tangent plane to the local minima of the grand potential, \(\Omega(\vec{\phi};\vec{\mu},\mathbf{\epsilon},\vec{v})\equiv\sum_{i=1}^{N}\int d \phi_{i}[v_{i}^{-1}\log\phi_{i}+\mu_{\rm ex,\textit{i}}(\vec{\phi})-\mu_{i}]\) (Fig. 1c). The conditions specified in Eq. (2) imply that the grand potential evaluated at the SDP solution has local minima close to the prescribed target-phase and dilute-phase volume fractions. Therefore, we can fit a common tangent plane by minimizing the norm of \(\vec{\Delta}\Omega(\vec{\mu})\), where \(\Delta\Omega^{(\alpha)}(\vec{\mu})\) is the difference between \(\Omega(\vec{\phi};\vec{\mu})\) evaluated at the local minimum near the dilute phase and at the local minimum near the \(\alpha\) condensed phase. 
This procedure converges rapidly using standard numerical methods [27], since the convex relaxation is a good approximation of this nonlinear hyperplane-fitting problem. In the extensive numerical tests described below, we indeed find that a solution to the convex relaxation typically implies that the conditions for coexistence can be satisfied for the target phases to numerical precision. This algorithm provides a scalable and highly accurate means for predicting whether prescribed target phases can be in simultaneous thermodynamic coexistence and, if so, for determining a coexistence point \((\mathbf{\epsilon},\vec{\mu})\).

Figure 1: **Inverse design approach to multicomponent phase coexistence.** (a) Schematic of the design problem. Each condensed-phase droplet (gray circle) has a distinct composition of the five molecular components (colors). The enriched-species compositions are indicated by pie charts. (b) Depiction of the solution space for a 3-component, 2-phase problem (see SI for details). Inequality constraints (blue line) and minimum eigenvalue constraints (contours) delineate the feasible region (dashed white line). (c) Illustration of the common-hyperplane construction for two phases, \(\alpha\) and \(\beta\), of the convex relaxation solution shown in panel b. (d) Schematic of the regularization heuristic for eliminating stable (\(\Delta\Omega\leq 0\)) off-target phases \(\gamma\) and \(\delta\).

In general, the convex relaxation specified by Eq. (2) defines a continuous space of interaction matrices that solve the inverse design problem, with a unique \(\vec{\mu}\) corresponding to each point in this space. However, we have not yet considered the possibility that other "off-target" condensed phases may be equally or even more stable than the target phases at the calculated coexistence point, meaning that the target phases are only in marginal or metastable coexistence. This possibility can be addressed by introducing a regularization heuristic that attempts to maximize \(\Omega(\vec{\phi};\vec{\mu},\mathbf{\epsilon},\vec{v})\) away from the target phases (Fig. 1d). Specifically, based on the form of Eq. (1), we seek to minimize both the norm of \(\mathbf{\epsilon}-\mathbf{\bar{\mu}}/\langle\phi_{\rm T}^{(\alpha)}\rangle\), where \(\bar{\mu}_{ij}=(\mu_{i}+\mu_{j})/2\) and \(\langle\phi_{\rm T}^{(\alpha)}\rangle\) is the mean target-phase total volume fraction, and the variance of the elements of \(\vec{\mu}\) (see Supplementary Information for details). Regularizing the SDP in this way tends to destabilize off-target phases while guaranteeing that the solution to our convex relaxation is unique. Our approach reveals unexpected features of multicomponent phase behavior, which we demonstrate in the case of a Flory-Huggins polymer model [22] with degree of polymerization ranging from \(L=1\) to \(100\). (See the Supplementary Information for the model and SDP definitions.) Here we describe results for mixtures with \(N=6\) species, a sufficient number to uncover qualitative differences with simple fluids while still permitting exhaustive searches for off-target phases. For simplicity, we choose the same total volume fraction, \(\phi_{\rm T}^{(\alpha)}=\phi_{\rm T}^{\rm(cond)}\), for each condensed phase. We begin by designing phase diagrams with "equimolar" target phases, in which every enriched component within a target phase has the same volume fraction \(\simeq\phi_{\rm T}^{\rm(cond)}/M^{(\alpha)}\).
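To make the structure of the convex relaxation concrete, the sketch below sets up a toy version of Eq. (2) in cvxpy for \(N=6\) components and three equimolar target phases. It is a structural sketch only, not the authors' code: unit molecular volumes are assumed, the entropic curvature matrix is taken in a simple incompressible Flory-Huggins-like form, the zero-pressure and dilute-phase constraints (2b) and (2d) (also convex in \((\mathbf{\epsilon},\vec{\mu})\)) are omitted for brevity, and the objective is a simplified stand-in for the regularization heuristic.

```python
# Toy convex relaxation in the spirit of Eq. (2): linear (in)equalities for the
# chemical potentials of enriched/depleted components and an LMI stability
# constraint on the curvature at each target phase.  All model-specific pieces
# (unit volumes, the curvature matrix D, the omitted pressure and dilute-phase
# constraints) are simplifying assumptions, not the paper's exact SDP.
import numpy as np
import cvxpy as cp

N = 6                                   # number of solute components
enriched = [[0, 1], [2, 3], [4, 5]]     # hypothetical enriched sets, one per target phase
phi_cond, phi_depl = 0.95, 1e-4         # condensed-phase total and depleted volume fractions
lam_min = 0.1                           # eigenvalue bound in Eq. (2c)

phases = []
for S in enriched:
    p = np.full(N, phi_depl)
    p[S] = phi_cond / len(S)            # equimolar enriched compositions
    phases.append(p)

eps = cp.Variable((N, N), symmetric=True)   # interaction matrix
mu = cp.Variable(N)                         # chemical potentials

cons = []
for S, p in zip(enriched, phases):
    mu_id = np.log(p)                        # ideal part, unit molecular volumes assumed
    mu_ex = eps @ p                          # eps-dependent part of Eq. (1)
    for i in range(N):
        if i in S:
            cons.append(mu_id[i] + mu_ex[i] == mu[i])   # enriched: equality in (2a)
        else:
            cons.append(mu_id[i] + mu_ex[i] >= mu[i])   # depleted: inequality in (2a)
    # Stability LMI (2c): assumed entropic curvature (diagonal + solvent term) plus eps.
    D = np.diag(1.0 / p) + np.ones((N, N)) / (1.0 - p.sum())
    cons.append(D + eps >> lam_min * np.eye(N))

# Simplified stand-in for the regularization heuristic described in the text.
objective = cp.Minimize(cp.norm(eps, 'fro') + 0.1 * cp.sum_squares(mu))
prob = cp.Problem(objective, cons)
prob.solve()
print(prob.status)
print(np.round(eps.value, 2))
```

With disjoint enriched sets this toy program is feasible; the full design problem additionally imposes equal osmotic pressures across phases and a dilute phase below the critical volume fraction.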
Interestingly, we find that the feasibility of the SDP for any particular target phase diagram is independent of \(L\) and \(\phi_{\rm T}^{\rm(cond)}\). However, the probability that a solution to the regularized SDP results in phase coexistence tends to increase with \(L\) and \(\phi_{\rm T}^{\rm(cond)}\gg\phi_{\rm T}^{*}\) (Fig. 2a), since the convex relaxation becomes a more accurate approximation as the coexistence pressure decreases. Intuition based on the phase behavior of simple mixtures suggests that small changes in \(\{\vec{\phi}^{(\alpha)}\}\) should result in small changes in \(\mathbf{\epsilon}\), and vice versa, unless the mixture is near a critical point where two or more of the \(\vec{\phi}^{(\alpha)}\) merge. For example, small changes in the dimensionless interaction parameter in an incompressible binary mixture merely perturb the binodal but do not change the coexistence region qualitatively, as long as \(\phi_{\rm T}\gg\phi_{\rm T}^{*}\) [22]. Furthermore, the Gibbs Phase Rule (GPR) [28], which relates the number of coexisting phases to the number of thermodynamic degrees of freedom, should be expected to limit the number of condensed phases, \(K\), for which our method can find a coexistence point. For the \(N\)-component fluids that we study here, this expected bound is \(K\leq N\). Our inverse approach reveals multiple ways in which such intuition can mislead in multicomponent fluids. Surprisingly, we can design coexistence points where the condensed-phase count, \(K\), is greater than the number of distinct species, \(N\) (Fig. 2b). At first glance, these examples might appear to conflict with the GPR. Furthermore, in terms of bulk phase coexistence, these examples imply that the lever rule, which relates the total concentrations of the various species in a mixture to the mole fractions of the coexisting phases, does not have a unique solution. However, in mixtures with designed interactions, we can end up with coexistence equations that are linearly dependent, allowing us to perform a common tangent plane construction when \(K>N\). This behavior can be understood by realizing that the design problem, with \(N(N+1)/2\) tunable interaction-matrix parameters, is not overdetermined, and that convex optimization identifies interaction matrices that result in linearly dependent coexistence equations at the prescribed \(\{\vec{\phi}^{(\alpha)}\}\).

Figure 2: **Features of multicomponent phase coexistence.** (a) Schematic of \(\mathbf{\epsilon}\)-space and validation that SDP solutions (feas) result in coexistence (coex). This probability, \(p(\mathrm{coex}|\mathrm{feas})\), approaches one as the degree of polymerization, \(L\), increases. Here we consider phase diagrams with \(N=6\) species and equimolar condensed-phase compositions (see text). Each class of isomorphic phase diagrams, which are equivalent under permutation of component and target-phase indices, is considered once in these calculations. (b) Sensitivity of phase coexistence to perturbations in the interaction matrix. We add zero-mean Gaussian noise to the interaction matrices that produce phase coexistence among target phases with equimolar compositions, and then attempt to reestablish coexistence. The probability of success, averaged over many trials, is \(\langle p({\rm coex})\rangle_{\rm equimolar}\). (c) The probability that coexistence can be achieved for condensed phases with arbitrary compositions. Starting from feasible phase diagrams with equimolar condensed phases, we construct target phases by randomly scaling the enriched component compositions in each condensed phase. We also show whether the SDP solution leads to global phase coexistence, meaning no stable off-target phases. In panels b–c, \(\phi_{\rm T}^{\rm(cond)}=0.95\) and \(L=100\).
These unusual coexistence conditions do not occur in mixtures with fewer than \(N=5\) species, but become increasingly common as the number of components increases (see Supplementary Information for further discussion). Consistent with this explanation, we find that small, random perturbations to the designed interaction matrices almost always preclude phase coexistence of the target phases when \(K>N\) (Fig. 2b). Specifically, we add zero-mean Gaussian noise to the designed matrix \(\mathbf{\epsilon}\), and then attempt to perform a common tangent plane construction for phases close to the original target phases by tuning \(\vec{\mu}\). After perturbation, only a subset of the original \(K\) condensed phases can be brought into coexistence with the dilute phase, while the remaining phases become metastable. Similarly, we can make random perturbations to the initially equimolar compositions of the enriched components in each target phase. If we then attempt to solve the convex relaxation for these perturbed target phases, we almost always find that the SDP is infeasible when \(K>N\) (Fig. 2c). This indicates that a particular relationship among the compositions of the enriched components is necessary to establish linearly dependent coexistence equations. Yet surprisingly, such behavior is not limited to phase diagrams with \(K>N\). We also find certain phase diagrams with \(K\leq N\) that are similarly sensitive to random perturbations in \(\mathbf{\epsilon}\) and \(\{\vec{\phi}^{(\alpha)}\}\) (Fig. 2b,c). Regardless, when our inverse approach indicates that coexistence among the prescribed phases is possible, exhaustive sampling of the grand potential landscape confirms that these phases are globally stable with high probability (red curve in Fig. 2c). Taken together, these observations suggest that special coexistence points, which are sensitive to small perturbations in \(\mathbf{\epsilon}\), lie on manifolds of lower dimension than the full \(\mathbf{\epsilon}\)-space. Coexistence is not limited to \(K\leq N\) condensed phases on these manifolds, although some of these phases must become metastable if we move off the manifold by perturbing the interaction matrix. These manifolds represent "interfaces" between volumes of \(\mathbf{\epsilon}\)-space corresponding to condensed phases with different sets of enriched components. In other words, crossing one of these interfaces by changing \(\mathbf{\epsilon}\) entails a discontinuous transition from one set of condensed phases to another, with phases from both sets stable on the interface itself. We emphasize that our calculations are performed far from critical points, since the Euclidean distance between all pairs of target phases \(\{\vec{\phi}^{(\alpha)}\}\) is large. Furthermore, these special coexistence points need not have equimolar condensed-phase compositions; however, the enriched-component volume fractions are not independent on these manifolds (see Supplementary Information for further discussion). How are distinct multicomponent phase diagrams related in \(\mathbf{\epsilon}\)-space?
To address this question, we can use the Frobenius norm to measure the distance between two interaction matrices \(\mathbf{\epsilon}_{r}\) and \(\mathbf{\epsilon}_{s}\), corresponding to different phase diagrams with globally stable condensed phases \(\{\vec{\phi}^{(\alpha)}\}_{r}\) and \(\{\vec{\phi}^{(\alpha)}\}_{s}\), respectively (Fig. 3a and black distributions in Fig. 3b). Yet because the interaction matrix that stabilizes a particular phase diagram is typically not unique, it is more useful to quantify the extent to which an interaction matrix must be changed in order to switch from one phase diagram to another. We can accomplish this within our inverse design framework by modifying the regularization heuristic in one of two ways (see Supplementary Information for details). Starting from the interaction matrix \(\mathbf{\epsilon}_{r}\) that solves the original regularized SDP for phase diagram \(r\), we identify the "closest" matrix \(\mathbf{\epsilon}_{s}\) that solves phase diagram \(s\) by minimizing the Frobenius norm \(||\mathbf{\epsilon}_{s}-\mathbf{\epsilon}_{r}||_{\text{fro}}\) (red distributions in Fig. 3b). This distance can be infinitesimal if \(\mathbf{\epsilon}_{r}\) resides on a low-dimensional manifold and the phase-diagram change \(r\to s\) reduces the phase count. Similarly, the minimal distance between interaction matrices is typically larger when we add phases, such that \(K_{s}>K_{r}\). Alternatively, we can determine the smallest number of distinct matrix elements that must be changed to switch phase diagrams. This minimal number of elementwise changes, \(D_{rs}\), is always greater than zero (Fig. 3c). Our calculations reveal that \(D_{rs}\) is also asymmetric with respect to phase diagram changes \(r\leftrightarrow s\) and tends to increase with the net number of added phases (Fig. 3d).

Figure 3: **Relationships among phase diagrams in \(\mathbf{\epsilon}\)-space.** (a) Low-dimensional representation of the interaction matrices corresponding to the phase diagrams considered in Fig. 1b. Circles with black outlines indicate phase diagrams that are sensitive to random perturbations in \(\mathbf{\epsilon}\) (see text). Multidimensional scaling [29] has been used to preserve distances in \(\mathbf{\epsilon}\)-space, taken here as the Frobenius norm. (b) Distances between pairs of matrices in panel a (black), and the minimum distance required to switch from phase diagram \(r\) to phase diagram \(s\) (red). Box plots indicate the quartiles of the distance distributions as a function of the phase count difference, \(K_{s}-K_{r}\). (c) The minimum number of entries of the symmetric \(\mathbf{\epsilon}\) matrix that must be changed to switch from phase diagram \(r\) to phase diagram \(s\), \(D_{rs}\); \(D_{\text{max}}\equiv N(N+1)/2\). (d) Asymmetry in the minimum number of elements changed when switching between phase diagrams.

Finally, we assess whether the predictions of our inverse design approach apply beyond mean-field models. Specifically, we consider fluids in which the potential energy can be written as a sum of short-ranged pair potentials [23]. Note that Eq. (1) is only accurate at low concentrations for such fluids, since the higher-order virial coefficients generically depend on the species-specific pair potentials [23]. To this end, we perform free-energy calculations using simulations of a multicomponent lattice gas. We first design an interaction matrix, \(\mathbf{\epsilon}\), for a target phase diagram using the \(L=1\) Flory-Huggins SDP.
We then use this matrix to define the well-depths of the pair potentials, \(u_{ij}(1\leq r/a<2)\propto\epsilon_{ij}\), where \(r\) is the distance between particles of types \(i\) and \(j\) and \(a\) is the lattice constant (see Supplementary Information for details). We first identify the free-energy basins in this model by running grand-canonical Monte Carlo simulations [30]. We then sample reversible transitions between the dilute free-energy basin and each of the condensed-phase basins [13]. Finally, we reconstruct the free-energy landscapes in the \(N\)-dimensional \(\vec{\phi}\)-space and adjust the chemical potentials to bring all phases into coexistence [31], at which point the grand potentials of all basins are all equal. Our mean-field design approach results in non-mean-field free-energy landscapes that are consistent with the target phase diagrams. We carry out simulations with five species and condensed-phase counts that are less than, equal to, and greater than the number of components (Fig. 4). In line with the predictions of the mean-field model, we find that coexistence can be achieved within sampling accuracy, even in the \(K>N\) case. Minor quantitative differences in the phase compositions occur in the simulation model due to inaccuracies in the mean-field approximations; however, the enriched components in all simulated phases match the designs. In summary, we have introduced a method to design the phase behavior of multicomponent fluids with pairwise--or approximately pairwise--interactions. Our approach provides insight into the structure of the interaction-matrix solution space, revealing ways in which the behavior of multicomponent fluids can differ qualitatively from that of simple fluids, while also scaling well to mixtures with tens or hundreds of components. In practice, it may not be possible to engineer molecular interactions with the independence and precision necessary to construct all theoretically possible phase diagrams. In this regard, our results indicate that the physically relevant constraints on the phase behavior of multicomponent fluids arise from the properties of the intermolecular interactions, since thermodynamically allowed phase diagrams can be surprisingly complex. Additional constraints on the physicochemical properties of the molecular components should therefore be added as an additional layer of our design framework. Overall, our results highlight the need to understand the extent to which molecular interactions can be tuned independently in phase-separating (bio)chemical fluids. We anticipate that our theoretical approach will play an important role in ongoing efforts to unravel the connections between molecular design and multicomponent phase behavior [32; 33; 34; 35; 36; 37; 38; 39]. This research was partially supported by the NSF through the Princeton University's Materials Research Science and Engineering Center DMR-2011750. Figure 4: **Transferable predictions validate the pairwise approximation.** Simulated free-energy landscapes at phase coexistence (\(|\Delta\Omega|\leq 0.007k_{\text{B}}T\)) in a molecular model with pair potentials derived from designed interaction matrices. Examples are shown for mixtures with \(N=5\) species and (a) \(K=4\), (b) \(5\), and (c) \(6\) condensed phases. \((N+1)\)-dimensional landscapes are projected onto two principal-component coordinates, \(X_{1}\) and \(X_{2}\), for visualization. 
The Pearson correlation coefficient, R, between the target and equilibrium composition is shown for each condensed phase.
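The \(\mathbf{\epsilon}\)-space analysis of Fig. 3a-b can be reproduced in outline with standard tools: pairwise Frobenius distances between interaction matrices, followed by multidimensional scaling for visualization. The sketch below is illustrative only and uses random symmetric matrices as placeholders for the designed SDP solutions.

```python
# Illustrative sketch (not the paper's analysis code): Frobenius distances between
# interaction matrices and a 2D multidimensional-scaling embedding, as in Fig. 3a-b.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
N, n_mats = 6, 12
mats = []
for _ in range(n_mats):
    A = rng.normal(scale=2.0, size=(N, N))
    mats.append(0.5 * (A + A.T))          # random symmetric stand-ins for designed matrices

D = np.zeros((n_mats, n_mats))
for r in range(n_mats):
    for s in range(n_mats):
        D[r, s] = np.linalg.norm(mats[r] - mats[s], ord='fro')

embedding = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
X = embedding.fit_transform(D)            # 2D coordinates approximately preserving distances
print(X.round(2))
```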
2306.10130
Non-Contact Monitoring of Dehydration using RF Data Collected off the Chest and the Hand
We report a novel non-contact method for dehydration monitoring. We utilize a transmit software defined radio (SDR) that impinges a wideband radio frequency (RF) signal (of frequency 5.23 GHz) onto either the chest or the hand of a subject who sits nearby. Further, another SDR in the close vicinity collects the RF signals reflected off the chest (or passed through the hand) of the subject. Note that the two SDRs exchange orthogonal frequency division multiplexing (OFDM) signal, whose individual subcarriers get modulated once it reflects off (passes through) the chest (the hand) of the subject. This way, the signal collected by the receive SDR consists of channel frequency response (CFR) that captures the variation in the blood osmolality due to dehydration. The received raw CFR data is then passed through a handful of machine learning (ML) classifiers which, once trained, output the classification result (i.e., whether a subject is hydrated or dehydrated). For the purpose of training our ML classifiers, we have constructed our custom HCDDM-RF-5 dataset by collecting data from 5 Muslim subjects (before and after sunset) who were fasting during the month of Ramadan. Specifically, we have implemented and tested the following ML classifiers (and their variants): K-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network classifier. Among all the classifiers, the neural network classifier achieved the best classification accuracy, i.e., an accuracy of 93.8% for the proposed CBDM method, and an accuracy of 96.15% for the proposed HBDM method. Compared to prior work where the reported accuracy is 97.83%, our proposed non-contact method is slightly inferior (as we report a maximum accuracy of 96.15%); nevertheless, the advantages of our non-contact dehydration method speak for themselves.
Hasan Mujtaba Buttar, Kawish Pervez, M. Mahboob Ur Rahman, Kashif Riaz, Qammer H. Abbasi
2023-06-16T18:29:59Z
http://arxiv.org/abs/2306.10130v1
# Non-Contact Monitoring of Dehydration using RF Data Collected off the Chest and the Hand

###### Abstract

In this work, we report for the first time a novel non-contact method for dehydration monitoring from a distance. Specifically, the proposed setup consists of a transmit software defined radio (SDR) that impinges a wideband radio frequency (RF) signal (of frequency 5.23 GHz) in the microwave band onto either the chest or the hand of a subject who sits nearby. Further, another SDR in the close vicinity collects the RF signals reflected off the chest (or passed through the hand) of the subject. Note that the two SDRs exchange orthogonal frequency division multiplexing (OFDM) signal, whose individual subcarriers get modulated once it reflects off (passes through) the chest (the hand) of the subject. This way, the signal collected by the receive SDR consists of channel frequency response (CFR) that captures the variation in the blood osmolality due to dehydration. The received raw CFR data is then passed through a handful of machine learning (ML) classifiers which, once trained, output the classification result (i.e., whether a subject is hydrated or dehydrated). For the purpose of training our ML classifiers, we have constructed our custom HCDDM-RF-5 dataset by collecting data from 5 Muslim subjects (before and after sunset) who were fasting during the month of Ramadan. Specifically, we have implemented and tested the following ML classifiers (and their variants): K-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network classifier. Among all the classifiers, the neural network classifier achieved the best classification accuracy, i.e., an accuracy of 93.8% for the proposed chest-based method, and an accuracy of 96.15% for the proposed hand-based method. Compared to the state-of-the-art (i.e., the contact-based dehydration monitoring method) where the reported accuracy is 97.83%, our proposed non-contact method is slightly inferior (as we report a maximum accuracy of 96.15%); nevertheless, the advantages of our non-contact dehydration method speak for themselves. That is, our proposed method is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results. The anticipated beneficiaries of the proposed method include sportsmen, athletes, the elderly, diabetic and diarrhea patients, and laborers working outdoors.

dehydration, non-contact methods, RF-based methods, software-defined radio, covid19, machine learning.

## I Introduction

A good sixty percent of the human body is composed of water, which is essential to many of the body's activities, including maintaining the body's temperature, transporting nutrients and oxygen to cells, lubricating joints, and eliminating waste products. Consuming sufficient water on a daily basis is necessary for preserving one's health and warding off a variety of diseases and adverse conditions [1]. Dehydration occurs when the body does not obtain enough water or when the body loses water through sweating and evaporation. When dehydration occurs, it throws off the natural equilibrium of the minerals and electrolytes found in the body. This could result in a variety of different health issues, ranging from quite harmless to life-threatening, depending on how much fluid is lost and what is causing it in the first place.
Symptoms of mild dehydration include headache, dry mouth, thirst, dizziness, exhaustion, and dry and wrinkled skin [2, 3, 4]. In more extreme circumstances, dehydration can result in consequences such as kidney failure, convulsions, and even death. When the outside weather is hot and humid, dehydration could lead to heat exhaustion, which could induce symptoms such as heavy respiration, nausea, headache, and weakness. Heat exhaustion, if not addressed quickly, could escalate to heatstroke, which is a life-threatening medical emergency that can cause damage to the brain, organ failure, and even death. Last but not least, dehydration could also have long-term adverse effects on the body, e.g., constipation, damage to the kidneys, and infections of the urinary tract [1]. In short, dehydration could have fatal implications if left untreated; thus, timely diagnosis of dehydration followed by prompt medical intervention is of utmost importance. For the elderly, and for diabetic and diarrhea patients, it is especially important to track hydration levels frequently [5]. However, the existing dehydration detection methods have their limitations, as they are either invasive (e.g., blood-sample based) or contact-based (e.g., pulse-oximeter or smart-watch based). Further, the existing methods are expensive, inconvenient, and inconsistent, as discussed below. **Existing dehydration measures and the dilemma:** Some of the most common methods for measuring hydration levels are: body mass change, total body water, serum and urine osmolality, plasma osmolality, urine specific gravity, and urine volume [6, 7, 8, 9, 10]. Another method, sometimes considered the "gold standard", consists of a procedure whereby a subject ingests a known quantity of an isotope, which allows one to calculate its concentration in a bodily fluid in order to determine the body's total water content. Now, the dilemma. Though such "gold standards" of hydration assessment are considered useful for sports science, medicine, or for creating reference standards, they necessitate extensive methodological control and are therefore not useful for tracking one's hydration status on a daily basis during training or competition [11]. In other words, none of the aforementioned hydration measures has been demonstrated to be valid in all dehydration scenarios (i.e., lab and field) [12]. Last but not least, many of the aforementioned hydration measures could be expensive, cumbersome, erroneous, and inconvenient (either invasive or contact-based). This calls for innovative and preferably non-contact methods for dehydration monitoring, which is precisely the agenda of this work. **Contributions.** This paper proposes an RF-based dehydration monitoring method that is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results. Specifically, the key contributions of this work are as follows: 1) We propose a novel non-contact method called the chest-based dehydration monitoring (CBDM) method. Under this method, the subject sits near an RF transceiver that impinges an OFDM signal onto the chest of the subject, while the receiver collects the signal reflected off the chest of the subject. 2) We propose a novel non-contact method called the hand-based dehydration monitoring (HBDM) method.
Under this method, the subject places his/her hand on a table and between two antennas such that the transmitted OFDM signal passes through the hand of the subject, and is subsequently collected by the receiver. The raw data collected by the receiver under both methods (CBDM and HBDM) consists of the channel frequency response (CFR), which is fed to multiple machine learning (ML) classifiers that eventually determine whether a person is hydrated or dehydrated. _To the best of our knowledge, this is the first work that reports a non-contact method for dehydration monitoring._ **Rationale.** The proposed CBDM and HBDM methods rely upon the following to infer dehydration-related information from the data collected off the chest and the hand of the subject: i) Dehydration results in reduced blood volume and increased blood viscosity, which in turn increases the heart rate and lessens the force of the blood against the walls of the arteries. ii) The OFDM signal, being a wideband signal, helps in sensing dehydration. That is, each OFDM subcarrier captures unique signatures of dehydration due to frequency, phase and amplitude modulation of the subcarrier reflected off the human body. Both factors assist our ML classifiers in achieving high classification accuracy. **Outline.** The rest of this paper is organized as follows. Section II discusses the related work. Section III provides a compact discussion of the apparatus/equipment that provides the scaffolding for our proposed non-contact dehydration monitoring method. Section IV provides further details about the software and hardware setup used for data collection, specifics of each of the two proposed experiments (chest-based, and hand-based), as well as the data acquisition protocol implemented in order to construct our custom HCDDM-RF-5 dataset. Section V talks about the training and testing of various ML classifiers on our custom dataset, and provides a detailed performance analysis. Section VI concludes.

## II Related Work

The literature on dehydration monitoring is scarce, but could be broadly classified into three categories: i) invasive methods, ii) non-invasive but contact-based methods, iii) non-contact methods. The first kind of methods (i.e., invasive methods), which examine blood or urine samples in order to determine the plasma and urine osmolality (and are considered the gold standard), have already been discussed in section I. Further, to the best of our knowledge, there exists no work on the third kind of methods (i.e., non-contact methods) for dehydration monitoring in the open literature. Therefore, we summarize the related work on the second kind of methods (i.e., non-invasive methods) only.

### _Non-invasive methods for dehydration monitoring_

The non-invasive methods for dehydration monitoring typically employ wearable sensors (e.g., oximeters, smart watches, smart wrist-bands) that capture the photoplethysmography (PPG) and electrodermal activity (EDA) signals and pass them through various ML algorithms in order to infer the dehydration status of a subject. For example, [13] collects both the EDA and the PPG data from 17 subjects and feeds it to a range of ML algorithms in order to detect mild dehydration by exploiting the autonomic response to cognitive stress (induced by means of the Stroop test).
In [14], the authors collect EDA data from 16 subjects for three different body postures (sitting, standing, and walking), and pass it to a hybrid Bi-LSTM neural network in order to classify the hydration level of a subject into one of three different states (hydrated, moderate dehydration, extreme dehydration). Authors of [15] utilize a miniature pulse oximeter to collect PPG data from 17 dehydrated patients admitted in the emergency department of a tertiary care hospital. They then extract multiple features from the acquired PPG data using the variable frequency complex demodulation algorithm, feed them to a support vector machine classifier, and report an accuracy of \(67.91\%\). [16] collects the EDA data, skin temperature, heart rate and body mass index from 16 participants while they undergo a workout/physical activity known as circuit training. It then feeds this data to an empirically derived formula in order to quantify fluid loss (dehydration) caused by physical activity. In [17], the authors developed a real-time Android-based tool called "monitoring my dehydration" that utilizes the EDA data to learn the dehydration level of a person using machine learning techniques. They experimentally evaluated their tool by feeding it real-world data from five users, obtaining an accuracy of \(84.5\%\). In [18], the authors collect EDA data using the BITalino kit from 5 subjects for three different activities by the subjects (sitting, standing, laying down), feed their data to various ML classifiers to solve the binary classification problem of dehydration detection, and report a best classification accuracy of 91.3% using the random forest ML classifier. In [19], the authors collect EDA data from several subjects under different conditions (sitting, standing), feed it to several ML classifiers to solve the binary classification problem of dehydration detection, and report a maximum accuracy of 87.78% using the simple k-NN classifier. Finally, [20] takes a rather different approach, and utilizes leg skin microbiome data from 63 female subjects in order to accurately predict their skin hydration levels and several other important bio-markers. Before we conclude this section, it is imperative to have a quick discussion about the rise of non-contact methods for remote health sensing in the post-covid19 era.

### _Non-contact methods for health sensing_

The non-contact methods for monitoring of body vitals gained popularity in the post-covid19 era when it was learned that the covid19 pathogen/virus could stay alive on various surfaces for long durations and thus could infect a healthy individual upon contact [21]. This gave rise to non-contact methods which can monitor a person's vital signs from a distance and thus could be used for long-term, real-time monitoring of a subject without inconvenience [22, 23]. Such methods also have the potential to decrease the number of visits to a hospital by a patient, thereby reducing the burden on healthcare systems [24]. Non-contact health sensing methods could be categorized into the following four categories. 1) Camera-based sensing: These methods record the face and chest video of a subject from a distance and use the periodic change in skin colour to calculate the various body vitals [25, 26]. 2) Radar-based sensing: These systems incorporate various kinds of radars (e.g., ultra-wideband pulse radar, frequency-modulated continuous-wave radar)
that utilize the traditional radar principles of range and Doppler in order to estimate various body vitals [27, 28]. 3) Wi-Fi-based sensing: Such methods exploit the extensive existing infrastructure of WiFi routers indoors to run cutting-edge ML and deep learning (DL) algorithms on the data collected off the reflections from the human subjects in order to measure body vitals [29, 28]. 4) Software-defined radio (SDR)-based sensing: Such methods capitalize on the amplitude and phase fluctuations in the signals reflected off the human body to measure vitals [30, 31]. _Note that the proposed non-contact CBDM method and HBDM method both do SDR-based sensing for dehydration monitoring. However, to the best of the authors' knowledge, non-contact monitoring of dehydration has not been reported in the open literature, to date._

## III Proposed Apparatus for Non-Contact Dehydration Monitoring

The proposed non-contact system for dehydration monitoring is basically an RF transceiver that consists of two workstations, each connected with a software-defined radio (SDR) by means of a USB 3.0 port (see Fig. 1). Specifically, the SDR devices used for the experiments are Universal Software Radio Peripheral (USRP) model B210. Each SDR communicates with the other by means of a directional horn antenna. We use MATLAB R2021a to program both the transmit and receive USRP SDRs. Specifically, the transmit SDR sends an orthogonal frequency division multiplexing (OFDM) signal with quadrature phase shift keying (QPSK) modulation on each subcarrier, while the receive SDR receives it and processes it. Footnote 1: The USRP B210 from National Instruments covers a wide frequency range (70 MHz to 6 GHz). It can process a wideband spectrum of up to 56 MHz in real time and sample at a high rate of up to 61.44 MS/s. Next, with the aim of non-contact dehydration monitoring, we design two distinct experiments. During the first experiment, the subject's chest is exposed to the OFDM signals, and thus, the receive SDR collects the signal reflected off the chest of the subject. We name this method the chest-based dehydration monitoring (CBDM) method. During the second experiment, the subject's hand is exposed to the OFDM signals, and thus, the receive SDR collects the signal that passes through the hand of the subject. We name this method the hand-based dehydration monitoring (HBDM) method. Footnote 2: This study was approved by the ethical institutional review board (EIRB) of Information Technology University, Lahore, Pakistan.

## IV The HCDDM-RF-5 dataset

This section provides sufficient details about the hardware and software setup used to construct the custom HCDDM-RF-5 dataset, our data collection methodology (that helped us capture dehydration-related data in a controlled manner), as well as the details of the two experiments performed in order to collect data for the two proposed (CBDM and HBDM) methods. Footnote 3: The acronym HCDDM-RF-5 stands for **H**and and **C**hest **D**ata for **D**ehydration **M**onitoring via **R**adio **F**requency data collected from **5** subjects.

### _USRP SDRs based OFDM transceiver_

OFDM Transmitter: For each OFDM frame, the random bits generator block creates pseudo-random data bits with a chunk size of 128 bits. The QPSK modulator block maps these bits to (frequency domain) symbols which are then transformed into a time-domain signal by means of an inverse fast Fourier transform (IFFT) of size \(N=64\) points.
Further, a cyclic prefix (CP) of size 16 samples is appended to each OFDM frame, making each OFDM frame 80 samples long. The gain of the transmit horn antenna is set to 40 dB. Fig. 2(a) shows the Simulink flowgraph of the USRP-SDR-based OFDM transmitter.

Fig. 1: The proposed non-contact method for dehydration monitoring: The apparatus consists of an SDR-based RF transceiver to collect radio data off the chest and the hand of the subject. The collected data is subsequently passed to various machine learning methods, which ultimately classify a subject either as hydrated or dehydrated.

OFDM Receiver: After removing the CP from each OFDM frame, a fast Fourier transform (FFT) is then used to transform the received time-domain OFDM samples into the equivalent frequency-domain OFDM symbol. Then, keeping in mind that the transmitted QPSK symbols on each sub-carrier are known to the OFDM receiver, the channel coefficient \(h_{i}\) for the \(i\)-th subcarrier could simply be computed as \(h_{i}=\frac{y_{i}}{x_{i}}\), where \(x_{i}\), \(y_{i}\) are the transmitted and received QPSK symbols on the \(i\)-th sub-carrier, respectively. This way, the raw CFR data \(\mathbf{h}=[h_{1},..,h_{N}]^{T}\) is collected at the OFDM receiver, to be utilized later by the ML algorithms in order to classify each subject as either hydrated or dehydrated. Fig. 2(b) shows the Simulink flowgraph of the USRP-SDR-based OFDM receiver. Table I provides a quick summary of the settings of various relevant parameters of the transmit and receive USRP SDRs.

### _Data Acquisition for the HCDDM-RF-5 dataset_

The custom HCDDM-RF-5 dataset was constructed by collecting data from five volunteers during the month of Ramadan (between March 23rd, 2023 and April 21st, 2023). Ramadan is an Islamic holy month during which devout Muslims observe a strict fast from sunrise till sunset. That is, while they are fasting, Muslims refrain from eating and drinking from sunrise till sunset. We took advantage of this unique opportunity in order to collect dehydration-related data from five devout Muslims who had been fasting during this month. Among the five subjects, two were males (aged 28 and 62 years) and three were females (aged 21, 26, and 61 years). For each fasting subject, we collected data twice, once for each class label (hydrated and dehydrated), in order to construct a balanced dataset. Specifically, the first episode of data collection took place about 30 minutes before sunset, when the subject was deemed to be maximally dehydrated (thus, this data belongs to the first/dehydrated class). Subsequently, the second episode of data collection took place an hour after sunset, after the subject had finished eating and drinking upon breaking the fast (thus, this data belongs to the second/hydrated class). For each subject, we conducted two kinds of experiments where we exposed the subject's chest and hand to the RF signals, respectively. Some more pertinent details about data collection for our proposed CBDM and HBDM methods are given below. \begin{table} \begin{tabular}{|c|c|} \hline Parameter & Type/Value \\ \hline Bits per OFDM frame & \(128\) \\ Bits per symbol & \(2\) \\ Coding scheme & Gray coding \\ Modulation scheme & QPSK \\ No.
of OFDM subcarriers (N) & \(64\) \\ Data subcarriers & \(52\) \\ Pilot subcarriers & \(12\) \\ Size of FFT/IFFT & \(64\) points \\ Size of cyclic prefix & \(16\) \\ Sampling rate & \(20,000\) samples/sec \\ Antenna type & directional horn \\ USRP B210 frequency range & \(70\) MHz - \(6\) GHz \\ Centre frequency & \(5.23\) GHz \\ Clock source \& PPS source & Internal \\ Internal clock rate & \(200\) MHz \\ Interpolation factor (at Tx) & \(250\) \\ Decimation factor (at Rx) & \(250\) \\ Transmitter gain (at Tx and Rx) & \(40\) dB \\ \hline \end{tabular} \end{table} TABLE I: Some parameters for the USRP-SDR-based OFDM transceiver used for non-contact monitoring of dehydration. Fig. 2: The Simulink flowcharts of the USRP-SDR based OFDM transmitter and receiver. _Data collection for the proposed CBDM method:_ During data acquisition for the proposed CBDM method, each participant sat on a chair that was about 80 cm away from the pair of directional horn antennas that pointed towards the chest of the subject (see Fig. 3). As described before, the transmit horn antenna impinged an OFDM signal onto the chest of the subject, while the receive horn antenna gathered the signal reflected off the subject's chest. During each experiment session, the subject sat still in order to avoid motion-induced artefacts in the data being gathered. Each single experiment session lasted for 30 seconds. For each subject, we conducted five experiment sessions before the sunset (to capture the raw CFR data for dehydrated class) and five experiment sessions after the sunset (to capture the raw CFR data for the hydrated class). This way, we were able to collect \(30\times 5\) = \(150\) seconds worth of data for each class (for a given subject), and thus, \(150\times 2=300\) seconds worth of data per subject. Ultimately, for 5 subjects, this led to a total dataset size of \(300\times 5\) = \(1500\) seconds (or, 25 minutes) of raw CFR data (that corresponds to a total of \(5\times 5\times 2\) = \(50\) experiment sessions). _Data collection for the proposed HBDM method:_ During data acquisition for the proposed HBDM method, each participant sat on a chair that was about 60 cm away from the pair of directional horn antennas facing each other, and placed his/her hand on the table between the two antennas (see Fig. 4). Again, the transmit horn antenna impinged an OFDM signal onto the hand of the subject, while the receive horn antenna gathered the signal passed through the subject's hand. During each experiment session, the subject sat still in order to avoid motion-induced artefacts in the data being gathered. The rest of the details of data acquisition for the proposed HBDM method are the same as before. That is, for each subject, we conducted five experiment sessions before the sunset (to capture the raw CFR data for dehydrated class) and five experiment sessions after the sunset (to capture the raw CFR data for the hydrated class). This way, for 5 subjects, we acquired a dataset that consisted of \(300\times 5\) = \(1500\) seconds (or, 25 minutes) of raw CFR data (that corresponds to a total of \(5\times 5\times 2\) = \(50\) experiment sessions). In short, combining the two smaller datasets due to CBDM method and HBDM method together, the custom HCDDM-RF-5 dataset consists of a total of 50 minutes of raw CFR data that corresponds to a total of 100 experiment sessions. 
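For concreteness, the per-subcarrier CFR estimation described above (remove the cyclic prefix, take the FFT, and divide by the known transmitted QPSK symbols) can be sketched in a few lines of numpy. This is an illustrative reimplementation, not the Simulink code used by the authors; the three-tap channel at the end is a synthetic placeholder used only to check the round trip.

```python
# Minimal numpy sketch of the CFR estimate h_i = y_i / x_i for one OFDM frame,
# using the paper's frame parameters (N = 64 subcarriers, CP of 16 samples).
import numpy as np

N, CP = 64, 16

def qpsk_symbols(bits):
    """Gray-mapped QPSK: 2 bits -> one unit-energy complex symbol."""
    b = bits.reshape(-1, 2)
    return ((1 - 2*b[:, 0]) + 1j*(1 - 2*b[:, 1])) / np.sqrt(2)

def ofdm_modulate(x_freq):
    x_time = np.fft.ifft(x_freq, n=N)
    return np.concatenate([x_time[-CP:], x_time])    # prepend cyclic prefix -> 80 samples

def estimate_cfr(rx_frame, x_freq):
    y_freq = np.fft.fft(rx_frame[CP:], n=N)          # drop CP, back to frequency domain
    return y_freq / x_freq                           # h_i = y_i / x_i on every subcarrier

# Toy end-to-end check with a synthetic 3-tap channel (placeholder, not real data).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2*N)                  # 128 bits per OFDM frame
x_freq = qpsk_symbols(bits)
tx = ofdm_modulate(x_freq)
channel = np.array([1.0, 0.4 + 0.2j, 0.1])
rx = np.convolve(tx, channel)[:N + CP]               # CP absorbs the channel memory
h_est = estimate_cfr(rx, x_freq)
print(np.allclose(h_est, np.fft.fft(channel, n=N)))  # True
```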
## V Training and Testing of Machine Learning Classifiers

For the binary classification problem (hydrated/dehydrated) under consideration, we train and test the following five ML classifiers and their variants: K-nearest neighbours (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network. Subsequently, we provide a detailed performance analysis and comparison of all the implemented ML classifiers.

### _Data Pre-processing & Training of Machine Learning Classifiers_

_Data Pre-processing:_ We utilised a low-pass filter and a Savitzky-Golay filter to denoise the CFR extracted from the received OFDM signal, for all the experiment sessions (for both the CBDM and HBDM methods). We also inspected the entire dataset manually and removed artifacts where found.

_Training & validation of ML classifiers:_ MATLAB's Classification Learner app was used to train the following ML classifiers: K-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), ensemble classifier, and neural network. All the classifiers were trained on both labelled datasets (corresponding to the CBDM method and the HBDM method). The K-fold cross-validation strategy was used for validation in order to prevent overfitting.

### _Performance metrics_

Each classifier's performance is quantified in terms of accuracy, given as:

\[\mathrm{Accuracy}=\frac{\mathrm{Correct\ predictions}}{\mathrm{Total\ observations}}\times 100 \tag{1}\]

\[\mathrm{Accuracy}=\frac{T_{n}+T_{p}}{T_{n}+T_{p}+F_{n}+F_{p}}\times 100 \tag{2}\]

where \(T_{n}\) represents a true negative, \(T_{p}\) represents a true positive, \(F_{n}\) represents a false negative, and \(F_{p}\) represents a false positive. In addition, we also compare the various ML algorithms by means of their confusion matrices.
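Before turning to the per-classifier results, the pre-processing, cross-validation and accuracy computation described above can be sketched as follows. The paper's models were built with MATLAB's Classification Learner app; this Python equivalent is only an illustration, and the Savitzky-Golay settings and the number of folds are assumptions rather than the reported configuration.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

def denoise_cfr(cfr: np.ndarray) -> np.ndarray:
    """Smooth the magnitude of each CFR vector (one row per experiment session)."""
    return savgol_filter(np.abs(cfr), window_length=11, polyorder=3, axis=1)

def evaluate_classifiers(X: np.ndarray, y: np.ndarray, k_folds: int = 5) -> None:
    """K-fold cross-validated accuracy (Eqs. 1-2) and confusion matrix per classifier."""
    models = {
        "fine k-NN (k=1)": KNeighborsClassifier(n_neighbors=1),
        "linear SVM": SVC(kernel="linear"),
        "cubic SVM": SVC(kernel="poly", degree=3),
        "decision tree": DecisionTreeClassifier(),
    }
    for name, model in models.items():
        y_pred = cross_val_predict(model, X, y, cv=k_folds)
        print(f"{name}: accuracy = {100 * accuracy_score(y, y_pred):.1f}%")
        print(confusion_matrix(y, y_pred))  # rows: true class, columns: predicted class

# X: (num_sessions, num_features) denoised CFR features; y: 0 = hydrated, 1 = dehydrated.
```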
### _Performance of proposed CBDM method_

We begin with a performance analysis of the k-NN classifier for three distinct values of k, i.e., k = 1, k = 10, and k = 100 (where k is the number of neighbours used to calculate the distance to the new data point). We learn that the fine k-NN (k = 1) achieves an accuracy of 79.1\(\%\), the medium k-NN (k = 10) achieves an accuracy of 69.2\(\%\), while the coarse k-NN (k = 100) achieves a very low accuracy of 55.3\(\%\) (see Fig. 5, which displays the detailed confusion matrices).

Fig. 5: Confusion matrix of the k-NN algorithm for the proposed CBDM method.

Next, we focus on Fig. 6 and compare the performance of the remaining four ML classifiers (and their variants). Beginning with the SVM classifier (with linear, quadratic, and cubic kernels), we note that the linear SVM achieves an overall accuracy of 86.5%, the quadratic SVM achieves an overall accuracy of 89.6%, while the cubic SVM achieves an overall accuracy of 90.9%. Next, we focus on the decision tree classifier, and note that it has the lowest accuracy of all. That is, the fine tree (despite its many leaves and its ability to differentiate between classes precisely) achieved an accuracy of only 68.8%, while the coarse tree achieved a very low accuracy of only 58.0%. Next in line is the ensemble classifier (a mixture of many classifiers), which is typically implemented with the aim of boosting classification accuracy. We observe the following: the ensemble boosted tree has an overall accuracy of 70.3%, the ensemble bagged tree has an accuracy of 77.9%, the ensemble subspace KNN has an accuracy of 82.9%, while the ensemble subspace discriminant has an accuracy of 89.6%. Finally, we consider the neural network (NN) classifier. Each variant of the NN classifier is a fully-connected feedforward network; after each fully connected layer, the ReLU activation function is applied, except for the last layer, where the softmax activation function is used. We observe that all the different variants of the NN classifier outperform the other ML classifiers. Specifically, the narrow variant of the neural network achieves an accuracy of 93.8%, the medium neural network achieves an accuracy of 92.5%, the broad neural network achieves an accuracy of 92.9%, the bi-layered variant of the neural network achieves an accuracy of 93%, while the tri-layered variant of the neural network achieves an accuracy of 93.1%.

Fig. 6: Confusion matrix of each of the SVM, DT, Ensemble classifiers, and NN for the proposed CBDM method.

Fig. 7 provides an alternate way of comparing the overall accuracy of all five ML classifiers and their variants. We note that, for the proposed CBDM method, the neural network classifier (with the narrow neural network) achieves the highest accuracy, which is 93.8%.

Fig. 7: Performance comparison of all the classifiers for the proposed CBDM method.

### _Performance of proposed HBDM method_

We begin the performance analysis of our proposed HBDM method with Fig. 8, which provides the confusion matrix of each of the five ML classifiers (and their variants). Beginning with the SVM classifier (with linear, quadratic, and cubic kernels), we note that the linear SVM achieves an overall accuracy of 71.1%, the quadratic SVM achieves an overall accuracy of 89.2%, while the cubic SVM achieves an overall accuracy of 88.2%. Next, for the decision tree classifier, we observe that once again it has the lowest accuracy of all. That is, the fine tree achieved an accuracy of only 72.2%, while the coarse tree achieved a very low accuracy of only 61.4%. As for the ensemble classifier, we observe the following: the ensemble boosted tree has an overall accuracy of 74.8%, while the ensemble bagged tree has an accuracy of 79.7%. Finally, for the neural network (NN) classifier, once again all the different variants outperform the other ML classifiers. Specifically, the narrow variant of the neural network achieves an accuracy of 94.7%, the medium neural network achieves an accuracy of 96.15%, the broad neural network achieves an accuracy of 95.15%, the bi-layered variant of the neural network achieves an accuracy of 92.35%, while the tri-layered variant of the neural network achieves an accuracy of 94.2%.

Fig. 8: Confusion matrix of each of KNN, SVM, DT, Ensemble classifiers, and NN for the proposed HBDM method.

Fig. 9 provides an alternate way of comparing the overall accuracy of all five ML classifiers and their variants. We note that, for the proposed HBDM method, the neural network classifier (with the medium neural network) achieves the highest accuracy, which is 96.15%.

Fig. 9: Performance comparison of all the ML classifiers for the proposed HBDM method.

### _Performance comparison with the state-of-the-art_

Finally, Table II compares the accuracy of the proposed non-contact CBDM and HBDM methods with the state-of-the-art methods, which are all contact-based methods for dehydration monitoring. Compared to the state-of-the-art, where the maximum reported accuracy is 97.83%, our proposed non-contact method is slightly inferior (as we report a maximum accuracy of 96.15%); nevertheless, the advantages of our non-contact dehydration method speak for themselves. That is, our proposed method is non-invasive and contact-less, has high accuracy, allows continuous and seamless monitoring, is easy to use, and provides rapid results.

\begin{table} \begin{tabular}{|c|c|} \hline Work & Accuracy \\ \hline Liapat et al. [14] & 97.83\% \\ \hline Kulkarni et al. [17] & 75.96\% \\ \hline Liapat et al. [18] & 91.53\% \\ \hline Rizwan et al. [19] & 85.63\% \\ \hline Carrieri et al. [20] & 73.91\% \\ \hline Our non-contact CBDM method & 93.8\% \\ \hline Our non-contact HBDM method & 96.15\% \\ \hline \end{tabular} \end{table} TABLE II: Accuracy comparison of our proposed non-contact CBDM and HBDM methods with the state-of-the-art (contact-based methods).
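For completeness, here is a minimal PyTorch sketch of the fully-connected feedforward variants described in the neural-network results above (ReLU after every hidden layer, softmax at the output). The hidden-layer widths below are illustrative assumptions and not the exact MATLAB presets.

```python
import torch.nn as nn

class FeedForwardClassifier(nn.Module):
    """Fully-connected network: ReLU after every hidden layer, softmax at the output."""
    def __init__(self, in_features: int, hidden_sizes: tuple, num_classes: int = 2):
        super().__init__()
        layers, prev = [], in_features
        for width in hidden_sizes:
            layers += [nn.Linear(prev, width), nn.ReLU()]
            prev = width
        # For training with nn.CrossEntropyLoss, the softmax would normally be omitted
        # from the network and applied implicitly by the loss on the raw logits.
        layers += [nn.Linear(prev, num_classes), nn.Softmax(dim=-1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Hypothetical widths for the narrow/medium/wide and bi-/tri-layered variants discussed above:
variants = {
    "narrow": FeedForwardClassifier(64, (10,)),
    "medium": FeedForwardClassifier(64, (25,)),
    "wide": FeedForwardClassifier(64, (100,)),
    "bi-layered": FeedForwardClassifier(64, (10, 10)),
    "tri-layered": FeedForwardClassifier(64, (10, 10, 10)),
}
```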
## VI Conclusion & Future Work

This work proposed, for the first time, a non-contact method to monitor the dehydration of a subject from a distance. Specifically, we utilized a pair of USRP SDRs whereby the transmit SDR impinged OFDM signals onto the chest or the hand of the subject, while the receive SDR collected the modulated signal reflected off the body of the subject. For the purpose of training our ML classifiers, we collected data from 5 Muslim subjects (before and after sunset) who were fasting during the month of Ramadan. We then passed the received raw CFR data through many ML classifiers. Among them, the neural network classifier achieved the best performance: an accuracy of 93.8% for the proposed CBDM method, and an accuracy of 96.15% for the proposed HBDM method. The fact that the proposed HBDM method outperforms the proposed CBDM method is a pleasant result, because it allows us to promote the proposed HBDM method as a non-contact method for dehydration monitoring in which only a hand is exposed to RF radiation, instead of the full chest (albeit the radiation being non-ionizing). Last but not least, the proposed non-contact method (with a maximum accuracy of 96.15%) performs very close to its contact-based counterpart (with a maximum accuracy of 97.83%). Such a minor performance degradation of our proposed non-contact method compared to its contact-based competitor may be acceptable, keeping in mind the convenience (and other benefits) of a non-contact method. One major advantage of the proposed approach is that it may pave the way for the creation of a smart mobile health (m-health) solution that could be deployed in remote areas far away from the mega cities, in order to provide comprehensive health monitoring of the people living there.

This work opens up many exciting directions for future work. For example, one could construct or acquire a more challenging dataset (unlike the current dataset, which was obtained in a very controlled setting), and re-evaluate as well as fine-tune the performance of the proposed method further, in order to make it robust and amenable to unseen data.
2305.09131
Multiple symmetry protected BIC lines in two dimensional synthetic parameter space
Bound states in the continuum (BICs) have attracted significant interest in recent years due to their unique optical properties, such as infinite quality factor and wave localization. In order to improve the optical performance of BICs based devices, more degrees of freedom are required to tune BICs in high-dimension parameter space for practical applications. To effectively tune more BICs, we form a 2D synthetic parameter space based on a nanohole metasurface array. Multiple symmetry protected BIC modes with high Q factors can be achieved at high-order symmetry point. Through manipulating asymmetry parameters, BIC lines formed by a series of BIC modes can be found in the 2D synthetic parameter space. Moreover, the electric field distributions are investigated to demonstrate the generation and evolution of BICs. By measuring the absorption spectra, the tuning of multiple BICs with synthetic asymmetry parameters is experimentally explored, which agrees well with theoretical results. Therefore, our design can provide new insight for a variety of on-chip applications, such as non-linear devices, integrated nanolasing array and high-resolution sensors for infrared molecular detection.
Fengyuan Zhang, Qiongqiong Chu, Qiang Wang, Shining Zhu, Hui Liu
2023-05-16T03:25:02Z
http://arxiv.org/abs/2305.09131v1
# Multiple symmetry protected BIC lines in two dimensional synthetic parameter space

###### Abstract

Bound states in the continuum (BICs) have attracted significant interest in recent years due to their unique optical properties, such as infinite Q factor and wave localization. In order to improve the optical performance of BICs based devices, more degrees of freedom are required to tune BICs in high-dimension parameter space for practical applications. To effectively tune more BICs, we form a 2D synthetic parameter space based on a nanohole metasurface array. Multiple symmetry protected BIC modes with high Q factors can be achieved at the high-order symmetry point. Through manipulating asymmetry parameters, BIC lines formed by a series of BIC modes can be found in the 2D synthetic parameter space. Moreover, the electric field distributions are investigated to demonstrate the generation and evolution of BICs. By measuring the absorption spectra, the tuning of multiple BICs with synthetic asymmetry parameters is experimentally explored, which agrees well with theoretical results. Therefore, our design can provide new insight for a variety of on-chip applications, such as non-linear devices, integrated nanolasing arrays and high-resolution sensors for infrared molecular detection.

Keywords: Bound states in the continuum, BIC lines, nanohole metasurface, synthetic parameter space

Footnote †: **Corresponding author: Hui Liu,** National Laboratory of Solid State Microstructures, School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, Jiangsu 210093, China, E-mail: [email protected].

## 1 Introduction

BICs are eigenmodes embedded in the continuum radiation spectrum, generally characterized by an infinite Q factor and perfect optical mode confinement [1-3]. This intriguing phenomenon was first proposed in quantum mechanics by von Neumann and Wigner [1], and has been found in many physical systems, such as electromagnetic, acoustic and water waves [4-6]. When small perturbations (such as changes in the structural parameters or the incident angle) are introduced into the resonant system, symmetry protected BICs transform into quasi-BICs, which exhibit high-Q resonances with tight optical confinement. The corresponding Q factors of the leaky quasi-BIC modes depend on the degree of asymmetry caused by the introduced perturbations [7]. In recent years, BICs have been extensively studied in various optical designs such as gratings [8-12], photonic crystals [13-16], waveguides [17-21], dielectric metasurfaces [22-25], and plasmonic systems [26-30], enabling numerous applications in nonlinear effect enhancement [31-32], lasers [33-35], sensors [36-37], and filters [9]. A system with multiple BICs provides more flexibility to manipulate the optical responses, so new methods for effectively tuning BICs in a high-dimensional parameter space are strongly needed for the development of various BIC based optical devices. Currently, studies of optical mode modulation in high-dimensional synthetic parameter spaces have become a hot topic due to the greatly increased number of tuning degrees of freedom. Based on 2D or 3D synthetic parameter spaces, our research group has achieved BIC mode modulation [38], a charge-2 Dirac point in topological superlattices [39] and the probing of rotated Weyl physics [40-41].
On the basis of the above research, by introducing the concept of a synthetic parameter space formed by two independent asymmetry parameters, we demonstrate a system with multiple BICs in a 2D synthetic parameter space through a designed nanohole metasurface array. At the high-order symmetry point, there are two BIC modes controlled by the x-direction asymmetry parameter and one BIC mode controlled by the y-direction asymmetry parameter. These BIC modes achieve high Q factors, which satisfies the common properties of symmetry-protected BICs. Away from this point, the corresponding BIC mode transforms into a quasi-BIC mode due to the broken symmetry, manifesting as an absorption peak. Furthermore, BIC lines formed by a series of BIC modes can be achieved by adjusting the asymmetry parameters in the proposed 2D synthetic parameter space. In experiments, the measured absorption spectra under varied asymmetry parameters show the same trend as the theoretical evolution of BICs. Benefiting from the continuously tunable optical responses of BICs, the proposed nanohole metasurface design can further enhance light-matter interaction, opening a new avenue for applications in lasing, sensing and nonlinear optics.

Fengyuan Zhang, Qiongqiong Chu, Qiang Wang and Shining Zhu, National Laboratory of Solid State Microstructures, School of Physics, Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing, Jiangsu 210093, China.

## 2 Results and discussion

### Multiple BICs in 1D parameter space

As shown in Figure 1a, our proposed multiple-BIC system is composed of a nanohole metasurface array, a gold mirror, and a dielectric Si layer sandwiched between them. In particular, each unit cell of the artificially designed metasurface contains two identical nanoholes with fixed length L = 2.2 \(\mu\)m and width W = 0.2 \(\mu\)m, as shown in Figure 1c. For better understanding, we consider the two nanoholes as two individual parts, with the period along the x (y) axis fixed as P\({}_{\mathrm{x}}\) = 1.5 \(\mu\)m (P\({}_{\mathrm{y}}\) = 3.6 \(\mu\)m). If we move one nanohole while the other keeps still, the symmetry of the nanoholes will be broken. According to the moving distances \(\Delta\)x and \(\Delta\)y, we define two asymmetry parameters p = \(\Delta\)x / (P\({}_{\mathrm{x}}\) / 2) and q = \(\Delta\)y / (P\({}_{\mathrm{y}}\) / 2) to form the 2D synthetic parameter space (p-q space), as shown in Figure 1d. The SEM picture of one of the asymmetric nanohole metasurfaces (p = 0.4, q = 0) is displayed in Figure 1b. In p-q space, the point p = 0, q = 0 is the high-order symmetry point where the nanoholes simultaneously possess the \(\alpha\)x and \(\alpha\)y symmetry, which means that the structure is invariant under the mirror reflections \(\alpha\)x and \(\alpha\)y, as illustrated in Figure 1c. With increasing parameters p and q, the electric field distribution of the nanoholes changes according to the increased asymmetry. When the two nanoholes possess two out-of-phase electric dipole field distributions, they can be regarded as a nonradiative electric quadrupole. Specifically, when the electric field of the two nanoholes manifests as two out-of-phase x-polarized (y-polarized) electric dipoles, we call it the x-polarized (y-polarized) quadrupole mode. The mode properties of the proposed multiple-BIC system in 1D parameter space are analyzed first. We calculated the eigenfrequency variations of the nanohole metasurfaces with varied asymmetry parameters p and q, respectively, using COMSOL Multiphysics (COMSOL Inc.), as depicted in Figure 2a, b.
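As a concrete illustration of the synthetic coordinates defined above, the short sketch below maps (p, q) pairs to the physical displacements of the movable nanohole for the stated periods Px = 1.5 \(\mu\)m and Py = 3.6 \(\mu\)m, and enumerates a uniform grid over the p-q space; the grid spacing is an arbitrary choice for illustration.

```python
import numpy as np

P_X, P_Y = 1.5, 3.6  # periods along x and y, in micrometres

def displacement(p: float, q: float) -> tuple:
    """Map synthetic asymmetry parameters (p, q) to nanohole displacements (dx, dy) in um,
    using p = dx / (P_x / 2) and q = dy / (P_y / 2)."""
    return p * P_X / 2, q * P_Y / 2

print(displacement(0.4, 0.0))  # the fabricated sample with p = 0.4, q = 0 -> dx = 0.3 um

# A uniform grid over the 2D synthetic p-q space (p, q in [-1, 1], as in the Methods section).
p_vals = np.linspace(-1.0, 1.0, 11)
q_vals = np.linspace(-1.0, 1.0, 11)
pq_grid = [(p, q) for p in p_vals for q in q_vals]
```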
In the simulation, the gold material is described by the Drude model with the plasma frequency set as \(1.37\times 10^{16}\) rad/s and the collision frequency \(\Gamma\) set as 0 to realize a lossless simulation. Here, we first focus on four modes at the high-order symmetry point, two of which manifest as x-polarized electric quadrupole modes, i.e., the BIC modes protected by the \(\alpha\)x symmetry, denoted as X1 and X2. One mode manifests as a y-polarized electric quadrupole, i.e., the BIC mode protected by the \(\alpha\)y symmetry, denoted as Y1. The last, ordinary mode is denoted as N1. Figure 2c shows the Ex distribution of the four modes. The above three BIC modes and the N1 mode are indicated by red and green circles, respectively. To verify the existence of the above three BIC modes, we further calculate the absorptivity of the designed metasurfaces as a function of the asymmetry parameters p and q, respectively, as shown in Figure 2a, b. Here, the collision frequency \(\Gamma\) of gold is set to \(4.05\times 10^{10}\) rad/s to obtain the absorption results, since the absorption of the lossless system is nearly 0. It should be noted that this introduced metal loss is only one thousandth of the actual metal loss of gold (\(\Gamma_{\text{normal}}\approx 4.05\times 10^{13}\) rad/s). The corresponding mirror symmetries \(\alpha\)x and \(\alpha\)y are those shown in Figure 1(c). When the \(\alpha\) symmetry is broken (q\(\neq\)0), the symmetry of the Ex distribution changes from odd to even. However, the Ex of mode N1 always exhibits an even symmetry distribution, representing a radiative mode. In order to reveal the underlying physical mechanism of the three BIC modes mentioned above, we calculated the variation of their Q factors as the asymmetry parameters p or q change, as shown in Figure 1(d), e. We can see that when q is fixed at 0 and p gradually varies, the Q factors of the X1 and X2 modes reach their maximum values at the high-order symmetry point. After moving away from this point, these two modes transform into quasi-BIC modes and their Q factors drop rapidly. Moreover, the Q factors of these two modes remain near their maximum values when p is fixed at 0 and q gradually varies. In contrast, when p is fixed at 0 and q gradually varies, the Q factor of the Y1 mode reaches its maximum value at the high-order symmetry point and decreases rapidly after moving away from this point, indicating the evolution from a BIC mode to a quasi-BIC mode. Similarly, the Q factor of the Y1 mode remains near its maximum value when q is fixed at 0 and p gradually varies. For comparison, the N1 mode shows a relatively low Q factor throughout the variation of the parameters p or q.
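A minimal post-processing sketch of how such Q-factor curves can be obtained from the complex eigenfrequencies returned by the eigensolver, using the definition Q = Re(f)/(2 Im(f)) given in the Methods section, and how the inverse-square scaling discussed next can be checked. The eigenfrequency values below are placeholders, not simulation output.

```python
import numpy as np

def q_factor(f: complex) -> float:
    """Q = Re(f) / (2 * Im(f)), as defined in the Methods section."""
    return f.real / (2.0 * f.imag)

# Placeholder sweep: complex eigenfrequencies of one quasi-BIC mode for several values of p
# (illustrative numbers only; in practice these come from the COMSOL eigenfrequency solver).
p = np.array([0.05, 0.1, 0.2, 0.4])
eigs = np.array([2.0e13 + 1.0e8j, 2.0e13 + 4.0e8j, 2.0e13 + 1.6e9j, 2.0e13 + 6.4e9j])

Q = np.array([q_factor(f) for f in eigs])

# Check the symmetry-protected scaling Q ~ p^(-2): the log-log slope should be close to -2.
slope, _ = np.polyfit(np.log(p), np.log(Q), 1)
print("fitted exponent:", slope)
```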
Then, through linear fitting, we find that the Q factors of the X1, X2, and Y1 modes satisfy the relationship Q \(\propto\) p\({}^{-2}\) or Q \(\propto\) q\({}^{-2}\), further verifying that these three modes are symmetry-protected BIC modes at the high-order symmetry point, as shown in Figure 1(f), g. The Q factor of the N1 mode does not follow this inverse-square dependence on the asymmetry parameter p or q. The corresponding results for these modes of the nanohole metasurfaces when metal loss is considered are given in Figure S1.

### BIC lines in 2D synthetic parameter space

Based on the above results in 1D parameter space, we continue to investigate the optical properties of the nanohole metasurfaces in the 2D p-q parameter space. Here, we refer to the points protected only by the \(\alpha\)x (\(\alpha\)y) symmetry, where p=0, q\(\neq\)0 (q=0, p\(\neq\)0), as x-direction (y-direction) symmetry points, and to the points where p\(\neq\)0 and q\(\neq\)0 as asymmetry points. Figure 3 shows the calculated Q factor variations of the four modes X1, X2, Y1, and N1 in p-q space. In the 2D p-q parameter space, it is clear that the maximum values of the Q factors of the X1 and X2 modes appear on the parameter line (p=0) connected by the x-direction symmetry points. Away from this line, these two BIC modes transform into quasi-BIC modes, and their Q factors decrease continuously as \(|\)p\(|\) increases. Clearly, a BIC line is formed by a series of BIC modes which exist on the parameter line (p=0). Similarly, the maximum value of the Q factor of the Y1 mode appears on the parameter line connected by the y-direction symmetry points. Away from this line, the Q factor of the Y1 mode decreases continuously as \(|\)q\(|\) increases due to the mode transformation. A new BIC line is formed by a series of BIC modes which exist on the parameter line (q=0). These BIC lines are indicated by red lines in Figure 3. The Q factor distribution in p-q space demonstrates that these BIC lines are protected by the \(\alpha\)x or \(\alpha\)y symmetry. In addition, the Q factor of the N1 mode presents low values at all symmetry points, which is inconsistent with the BIC mode properties. We also investigate the Q factors and the real part of the eigenfrequency variations of these four modes in p-q space when metal loss is considered (as shown in Figure S2 and Figure S3). Along the BIC lines, the BIC mode with the highest Q factor can be rapidly found in p-q space, enabling high-dimensional tuning of the optical performance. Benefiting from the increased tuning degrees of freedom provided by the BIC lines, the proposed nanohole metasurfaces can facilitate the development of BIC based optical devices. To better understand the physical mechanism of the multiple-BIC system, we select four points on the Q factor surface in p-q space for each of the four modes, namely, the high-order symmetry point (p=0, q=0), the x-direction symmetry point (p=0, q=-0.2), the y-direction symmetry point (p=-0.2, q=0) and the asymmetry point (p=-0.2, q=-0.2), for in-plane electric field distribution analysis, as shown in Figure 4. At the high-order symmetry point, the Ex distribution of the X1 and X2 modes in the two nanoholes can be regarded as two out-of-phase electric dipoles, i.e., an x-polarized electric quadrupole. Similarly, the Ex distribution of the Y1 mode behaves as a y-polarized electric quadrupole. These non-radiative quadrupole distributions demonstrate the simultaneous existence of the three BIC modes.
At the x-direction symmetry point, only the Ex distributions of the X1 and X2 modes are maintained as electric quadrupoles, while the Ex distribution of the Y1 mode in the two nanoholes is in-phase, indicating the transformation from a BIC mode to a quasi-BIC mode. On the contrary, at the y-direction symmetry point, only the Y1 mode remains an electric quadrupole, while the X1 and X2 modes no longer exhibit an electric quadrupole distribution. At the asymmetry point, where the symmetry is broken in both directions, the Ex distributions of the X1, X2 and Y1 modes all exhibit the same in-phase character and all transform into radiative quasi-BIC modes. Differently, the Ex distribution of the N1 mode at the four picked points exhibits the same in-phase distribution, behaving as a radiative non-BIC mode. To clearly show the radiation mechanism of the BIC modes, we also provide the out-of-plane (x-z plane) electric field Ex and \(|\)E\(|\) distributions for each of the four modes at the four points in p-q space, shown in Figure S4 and Figure S5.

### Experiment results and analysis

In the following, we introduce metal loss into the nanohole metasurfaces and simulate the absorption spectra of the metasurfaces under varied asymmetry parameters to verify the existence of BIC lines in the lossy system. The absorption spectra under varied asymmetry parameter p (q=0) were first simulated, as shown in Figure 5a. It can be seen that there is no resonant absorption peak for the X1 and X2 modes at the high-order symmetry point. After moving away from this point, the absorption peaks of the X1 and X2 modes appear and show a redshift and a blueshift, respectively, indicating the evolution from BICs to quasi-BICs. The absorption peaks of the two modes are marked by the red dots in the figure. In contrast, the absorption peak of the N1 mode is always present and its resonance wavelength remains almost unchanged. The above results are consistent with those of the lossless system in Figure 2a. Note that, compared to the lossless system, the absorption peaks of the lossy system are broadened due to the presence of metal loss. Specifically, we fabricated the designed nanohole metasurface arrays with varied asymmetry parameter p and measured the absorption spectra, as shown in Figure 5b. The variation tendency of the absorption peaks of the X1 and X2 modes shows good agreement with the simulation results, verifying the existence of the two BIC modes in experiments. At the same time, the Y1 mode shows no corresponding absorption peak and thus remains a BIC mode, which means that there is a BIC line formed by a series of BICs under varied p. Moreover, the Q factors of the X1 and X2 modes in the experimental and simulated results are compared, as shown in Figure 5c, d. Both show the same trend and are consistent with the properties of the \(\alpha\)x symmetry protected BIC modes. The slight discrepancies between the Q factors of the simulation and experimental results are caused by fabrication errors. Then, we investigate the absorption spectra variations under varied asymmetry parameter q (p = 0), as shown in Figure 6a, b. It can be seen that the experiment results agree well with the simulation results. There is no absorption peak for the Y1 mode at the high-order symmetry point, but away from this point, the corresponding absorption peaks appear and gradually shift, indicating the appearance of a quasi-BIC. The variation tendency is the same as that of the lossless system in Figure 2b. Meanwhile, the X1 and X2 modes show no corresponding absorption peak and thus both remain BIC modes, which means that there are BIC lines formed by a series of BIC modes under varied q.
In Figure 6c, the Q factors of the experimental results show the same tendency as those of the simulation results and are consistent with the properties of the \(\alpha\)y symmetry protected BIC modes. The above results clearly show the generation and evolution of BIC lines in p-q space when metal loss is considered. Therefore, our proposed multiple-BIC system provides more degrees of freedom for tuning BICs, offering a new platform for multifunctional integrated optical devices. For example, by introducing this design method into the semiconductor nanolasing regime, low-threshold integrated nanolasing arrays are expected to be realized. In addition, most studied BIC-based sensors have focused on the wavelength range from the visible to the near-infrared. However, our design can cover the long-wave infrared range of 8-14 \(\mu\)m by asymmetry parameter manipulation, and can thus be expected to be applied to integrated high-resolution sensors for infrared molecular detection.

## 3 Conclusion

In summary, based on a nanohole metasurface array, we have realized multiple BIC lines formed by a series of BICs in a 2D synthetic parameter space. The desired BIC mode with the highest Q factor can be found along the BIC lines. By adjusting the asymmetry parameters in p-q space, the continuous tuning of BICs is realized in both simulation and experiment. Specifically, the physical mechanisms underlying the generation and evolution of BICs have been carefully investigated. The proposed system with multiple BICs can enable many potential optical devices in nonlinear optics, nanolasing and infrared sensing. Meanwhile, our proposed method for effectively tuning BICs in a high-dimensional synthetic parameter space can be applied to various BIC systems, paving the way for multi-dimensional manipulation.

## 4 Methods

### Nanohole metasurface array fabrication

First, three layers of Au (70 nm)/Si (300 nm)/Au (100 nm) are sequentially deposited on the Si substrate by electron beam evaporation (AdNaNotek EBS-150U). Then the nanohole metasurfaces are etched from the top gold film by a focused ion beam (FIB dual-beam FEI Helios 600, 30 keV, 100 pA). By adjusting the etching parameters, the asymmetry parameters p and q of the nanohole metasurface array can be varied from -1 to 1, while the etching depth is set as 70 nm. Each sample of the metasurface array in p-q space is 100 \(\mu\)m \(\times\) 100 \(\mu\)m in size.

### Optical characterization

The absorption spectra (A) are derived from the measured reflection spectra (R) by A = 1 - R. Specifically, each reflection spectrum (R) of the nanohole metasurface array was measured by a Fourier transform infrared (FTIR) spectrometer. The reflection signals in the spectral range of 4-16 \(\mu\)m were collected using a Hyperion 2000 IR microscope with a liquid-nitrogen-cooled HgCdTe (MCT) detector. The measured reflection spectra were normalized with respect to a gold mirror.

### Numerical Simulations and Analysis

To find the eigenmodes of the multiple-BIC system, numerical simulations are performed to calculate the eigenfrequencies of the nanohole metasurface array by the eigenfrequency solver in COMSOL Multiphysics. The Q factor is obtained from the real and imaginary parts of the eigenfrequency:

\[Q=\frac{\mathrm{Re}\,(\text{Eigenfrequency})}{2\times\mathrm{Im}\,(\text{Eigenfrequency})}\]

The absorption of each individual metasurface can also be simulated. During the process, periodic boundary conditions are applied in both the x and y directions to mimic the periodic nanohole array.
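A short sketch of the optical-characterization post-processing described above: the raw FTIR reflection signal is normalized by the gold-mirror reference and converted to absorption via A = 1 - R. Array names are illustrative.

```python
import numpy as np

def absorption_spectrum(raw_sample: np.ndarray, raw_gold: np.ndarray) -> np.ndarray:
    """Normalize by the gold-mirror reference and return the absorption A = 1 - R."""
    reflectance = raw_sample / raw_gold  # normalized reflection spectrum R
    return 1.0 - reflectance             # absorption spectrum A; quasi-BICs appear as peaks

# raw_sample, raw_gold: detector signals on a common wavelength grid covering 4-16 um.
```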
**Author contribution:** All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

**Research funding:** This work was financially supported by the National Natural Science Foundation of China (Nos. 92163216 and 92150302).

**Conflict of interest statement:** The authors declare no conflicts of interest regarding this article.
2304.09218
Generative models improve fairness of medical classifiers under distribution shifts
A ubiquitous challenge in machine learning is the problem of domain generalisation. This can exacerbate bias against groups or labels that are underrepresented in the datasets used for model development. Model bias can lead to unintended harms, especially in safety-critical applications like healthcare. Furthermore, the challenge is compounded by the difficulty of obtaining labelled data due to high cost or lack of readily available domain expertise. In our work, we show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models. In particular, we leverage the higher abundance of unlabelled data to capture the underlying data distribution of different conditions and subgroups for an imaging modality. By conditioning generative models on appropriate labels, we can steer the distribution of synthetic examples according to specific requirements. We demonstrate that these learned augmentations can surpass heuristic ones by making models more robust and statistically fair in- and out-of-distribution. To evaluate the generality of our approach, we study 3 distinct medical imaging contexts of varying difficulty: (i) histopathology images from a publicly available generalisation benchmark, (ii) chest X-rays from publicly available clinical datasets, and (iii) dermatology images characterised by complex shifts and imaging conditions. Complementing real training samples with synthetic ones improves the robustness of models in all three medical tasks and increases fairness by improving the accuracy of diagnosis within underrepresented groups. This approach leads to stark improvements OOD across modalities: 7.7% prediction accuracy improvement in histopathology, 5.2% in chest radiology with 44.6% lower fairness gap and a striking 63.5% improvement in high-risk sensitivity for dermatology with a 7.5x reduction in fairness gap.
Ira Ktena, Olivia Wiles, Isabela Albuquerque, Sylvestre-Alvise Rebuffi, Ryutaro Tanno, Abhijit Guha Roy, Shekoofeh Azizi, Danielle Belgrave, Pushmeet Kohli, Alan Karthikesalingam, Taylan Cemgil, Sven Gowal
2023-04-18T18:15:38Z
http://arxiv.org/abs/2304.09218v1
# Generative models improve fairness of medical classifiers under distribution shifts ###### Abstract A ubiquitous challenge in machine learning is the problem of domain generalisation. This can have serious implications as it can exacerbate bias against groups or labels that are underrepresented in the datasets used for model development. Model bias can lead to unintended harms, especially in safety-critical applications such as healthcare. Furthermore, the challenge is compounded by the difficulty of obtaining labelled data due to high cost or lack of readily available domain expertise. In our work, we show that learning realistic augmentations automatically from data is possible in a label-efficient manner using generative models (e.g., diffusion probabilistic models). In particular, we leverage the higher abundance of unlabelled data to capture the underlying data distribution of different conditions and subgroups for an imaging modality. By conditioning generative models on appropriate labels (e.g., diagnostic labels and / or sensitive attribute labels), we can steer the distribution of synthetic examples according to specific requirements. We demonstrate that these learned augmentations can surpass heuristic, manually implemented ones by making models more robust and statistically fair in- and out-of-distribution. To evaluate the generality of our approach, we study three distinct medical imaging contexts of varying difficulty: (i) histopathology images from a publicly available and widely adopted generalisation benchmark, (ii) chest X-rays from publicly available clinical datasets, and (iii) dermatology images characterised by complex shifts and imaging conditions. The latter constitutes a particularly unstructured domain with various challenges. Two of these imaging modalities further require operating at a high-resolution, which requires developing faithful super-resolution techniques to recover fine details of each health condition. Complementing real training samples with synthetic ones improves the robustness of models in all three medical tasks and increases fairness by improving the accuracy of clinical diagnosis within underrepresented groups. Our proposed approach leads to stark improvements out-of-distribution across modalities: 7.7% prediction accuracy improvement in histopathology, 5.2% in chest radiology with 44.6% lower fairness gap and a striking 63.5% improvement in high-risk sensitivity for dermatology with a 7.5\(\times\) reduction in fairness gap. classification, prediction, and prognostication of diseases (Cui and Zhang, 2021). These solutions are often motivated by the global shortage of expert clinicians, e.g., in the case of radiologists (Rimmer, 2017), and demonstrate that machine learning models can facilitate detection of conditions (Rajpurkar et al., 2017). Despite these rapid methodological developments and the promise of transformative impact in different areas of healthcare (Liu et al., 2019), few of these approaches (if any) have achieved the ambitious goal of fostering clinical progress (Varoquaux and Cheplygina, 2022). As Wilkinson et al. (2020) highlight, only 24% of published studies evaluate the performance of their proposed algorithms on external cohorts or compare this out-of-sample performance with that of clinical experts. Many studies do not validate the efficacy of algorithms in multiple settings and, the ones that do, often perform poorly when introduced to new environments not represented in the training data. 
Building a method that is robust across populations and subgroups, such that model performance does not degrade and benefits can be transferred when applied across groups, is a non-trivial task. This is due to data scarcity (Castro et al., 2020), challenges in the acquisition strategies of evaluation datasets, and the limitations of evaluation metrics. We list a number of key challenges here. **(i)**_Disease prevalence_ may differ between demographic subgroups. For example, melanoma is 26% more likely to occur in white patients than black patients (Ame, 2022). Additionally, disease prevalence in the training data may not be reflective of the general population (Kaushal et al., 2020), which can be particularly problematic when due to disparities in access to healthcare (Khan et al., 2021). However, over-reliance on such an attribute may lead to the model learning spurious correlations between those features and the diagnostic label or relying on'shortcuts' (Brown et al., 2022; DeGrave et al., 2021). **(ii)**_Data scarcity_. While we may be able to mitigate poor performance among subgroups or new domains by collecting more data, this can be infeasible due to disease scarcity, at odds with protecting patients privacy or just not sufficient for better and more generalizable solutions (Varoquaux and Cheplygina, 2022). **(iii)**_Poor evaluation datasets_. It is vital to evaluate methods on datasets that reflect realistic shifts. For example, a population shift (Quinonero-Candela et al., 2008) can lead to performance drop of the machine learning model across environments. In a realistic setting, we have limited control over the complexity of the shifts that arise, as shifts over multiple axes can occur simultaneously (e.g., both the acquisition protocol and the population at a new hospital may be different in a new geographic location). In healthcare, machine learning systems are often trained on data from a limited number of hospitals with the hope that they will generalise well to new unseen sites. However, if we focus on simpler synthetic settings, our conclusions may not generalise as demonstrated by Gulrajani and Lopez-Paz (2020); Wiles et al. (2021). **(iv)**_High performance on overall accuracy metrics_, while important to track, _may not always expose subtle problems_. For instance, it is possible to improve on top-1 accuracy by improving performance of the most prevalent class at the expense of performance on the minority classes. Prior work has shown that a developed model may perform unexpectedly poorly on underrepresented populations or population subgroups in radiology (Larrazabal et al., 2020; Seyed-Kalantari et al., 2021), histopathology (Yu et al., 2018) and dermatology (Abbasi-Sureshjani et al., 2020). However, the issues of robustness to distribution shifts and statistical fairness have rarely been tackled together. In this work, we leverage generative models and potentially available unlabelled data to capture the underlying data distribution and _augment_ real samples when training diagnostic models across these three modalities. We show that combining synthetic and real data can lead to significant improvements in top-level performance, while closing the fairness gap with respect to different sensitive attributes under distribution shifts1. 
Finally, we show that diffusion models are able to generate high quality images (see Figure 1) across modalities and perform an in-depth analysis to shed light on the mechanisms that improve the generalization capabilities of the downstream classifiers.

Figure 1: Samples generated by our conditional diffusion model for different imaging modalities.

## 2 Background

Generative models, especially generative adversarial networks (GANs) (Goodfellow et al., 2014), have been employed by various studies to improve performance in different medical imaging tasks (Baur et al., 2018; Frid-Adar et al., 2018; Ju et al., 2021; Li et al., 2019; Rashid et al., 2019) and, in particular, for underrepresented conditions (Han et al., 2020; Havaei et al., 2021). GAN-based augmentation techniques have not only been used for whole-image downstream tasks, but also for pixel-wise classification tasks (Uzunova et al., 2020; Zhao et al., 2018), with a more thorough review of those techniques provided by Chen et al. (2022). Data obtained by exploring different latent image attributes through a generative model has also been shown to improve the adversarial robustness of image classifiers (Gowal et al., 2020). More recently, diffusion probabilistic models (DDPMs) (Ho et al., 2020, 2022; Nichol et al., 2021; Nichol and Dhariwal, 2021) have shown outstanding performance in image generation tasks and have been probed for medical knowledge by Kather et al. (2022) in different medical domains. Other works extended diffusion models to 3D MR and CT images (Khader et al., 2022) and demonstrated that they can be conditioned on text prompts for chest X-ray generation (Chambon et al., 2022). Given the ethical questions around the use of synthetic images in medicine and healthcare (Chen et al., 2021; Kather et al., 2022), it is important to make a distinction between using generative models to _augment_ the original training dataset and _replacing_ real images with synthetic ones, especially in the absence of privacy guarantees. None of these works claims that the latter would be preferable; rather, synthetic data comes to the rescue when obtaining more abundant real data is either expensive or infeasible (e.g. in the case of rare conditions), even if this solution is not a panacea (Zhang et al., 2022). It is worth noting that while some studies view generative models as a means of replacing real data with 'anonymized' synthetic data, we abstain from such claims as greater care needs to be taken in order to ensure that generative models are trained with privacy guarantees, as shown by Carlini et al. (2023); Sompalli et al. (2022). Recently, machine learning systems used for computer-aided diagnosis and clinical decision making have been scrutinized to understand their effect on sub-populations based on demographic or socioeconomic traits. Studies led by Larrazabal et al. (2020); Puyol-Anton et al. (2022); Seyyed-Kalantari et al. (2021) have investigated and identified discrepancies across groups based on gender / sex, age, race / ethnicity and insurance type (as a proxy of socioeconomic status), as well as their intersections. Gianfrancesco et al. (2018) performed a similar analysis for models operating on electronic health records. When evaluating machine learning systems in terms of certain fairness criteria, it is important to keep in mind that ensuring fairness in a source domain does not guarantee fairness in a different target domain under significant distribution shifts (Schrouff et al., 2022).
Last but not least, there are multiple definitions of fairness in the recent literature, and different fairness metrics are often at odds with each other, as noted by Ricci Lara et al. (2022). A more thorough review of related work is provided in Appendix B. In this work, we evaluate fairness both in- and out-of-distribution and aim to improve on fairness metrics without compromising top-level accuracy across all settings.

## 3 Results

### Overview of the proposed approach and experimental setting

Our proposed approach (illustrated in Figure 2) leverages generative models for learning augmentations of the data to improve the robustness and fairness of medical machine learning models.

Figure 2: **Method overview.** In the proposed approach we first train a diffusion model on both labelled and unlabelled data (if available). In a general setting, unlabelled data may comprise in- or out-of-distribution data (e.g., from an unseen hospital) for which we do not have expert labels. Subsequently, we sample synthetic images from the diffusion model according to particular specifications (e.g., an image of a female individual with pulmonary edema). Finally, we train a downstream classifier on a combination of the real labelled images and the synthetic images sampled from the diffusion model.

It comprises three main steps: **(1)** We train a generative diffusion model given the available labelled and unlabelled data; we assume that labelled data is available only for a single source domain, while additional unlabelled data can be from any domain (in- or out-of-distribution). We either condition the generative model only on the diagnostic label or on both the diagnostic label and a property (e.g., hospital id or sensitive attribute label). If high-resolution images are required (\(>96\times 96\) resolution), we further train an upsampling diffusion model in a similar manner. It is worth highlighting that both the low-resolution generative model and the upsampler are trained with the same conditioning vector (i.e. either with label or label & property conditioning). **(2)** We sample from the generative model according to a fair sampling strategy. To do this, we sample uniformly from the sensitive attribute distribution, and preserve the original diagnostic label distribution in order to preserve the original disease prevalence. Sampling multiple times from the generative model allows us to obtain different augmentations for a given condition (and property) and increase the diversity of training samples for the downstream classifier. **(3)** We combine the synthetic images sampled from the generative model with the labelled data from the source domain and train a downstream classifier. We treat the mixing ratio of real to synthetic data as a hyperparameter that is application- and modality-specific. The classifier may have multiple heads and a shared backbone in the scenario where we require a separate prediction per diagnostic class.

Experimental protocol. We evaluate this approach using diffusion probabilistic models in different medical contexts and track top-level performance (e.g. accuracy) and fairness (when relevant) in- and out-of-distribution. The evaluation on out-of-distribution datasets is equivalent to developing a machine learning model on a certain population (e.g., from a particular hospital or geographic location) and testing its performance on a population from an unseen hospital or acquired under novel conditions.
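To make step (2) concrete, here is a minimal sketch of the fair sampling strategy: the sensitive attribute is drawn uniformly, the diagnostic label is drawn from its empirical training distribution (preserving disease prevalence), and the resulting pairs condition the label & property conditioned generative model. The sampler interface at the bottom is a placeholder, not the actual model API.

```python
import numpy as np

def fair_sampling_plan(train_labels, attribute_values, num_synthetic, seed=0):
    """Return (label, attribute) conditioning pairs for synthetic image generation."""
    rng = np.random.default_rng(seed)
    labels, counts = np.unique(train_labels, return_counts=True)
    label_probs = counts / counts.sum()                        # keep the original disease prevalence
    y = rng.choice(labels, size=num_synthetic, p=label_probs)  # diagnostic labels
    a = rng.choice(attribute_values, size=num_synthetic)       # uniform over the sensitive attribute
    return list(zip(y, a))

# Usage with a hypothetical generator exposing `sample(label=..., attribute=...)`:
# plan = fair_sampling_plan(train_labels, ["female", "male"], num_synthetic=10_000)
# synthetic_images = [diffusion_model.sample(label=y, attribute=a) for y, a in plan]
```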
In all contexts, we consider the strongest and most relevant heuristic augmentations as a baseline. It is worth noting that these augmentations (heuristic or learned) can be combined with any alternative learning algorithm that aims to improve model generalization. For the sake of our experiments we use empirical risk minimization (ERM) (Vapnik, 1991), as there exists no single method found to consistently outperform it under distribution shifts (Wiles et al., 2021). Even though our experiments and analysis focus on diffusion probabilistic models for generation, any conditional generative model that produces high-quality and diverse samples can be used. Evaluation metrics.To measure the performance of the different baselines and the proposed method we use two sets of metrics: one set is more focused on accuracy (i.e. top-1 accuracy for histopathology, ROC-AUC for radiology and high-risk sensitivity for dermatology), while the second set is more geared towards fairness. The performance metrics vary depending on the classification task performed for each modality (i.e. binary vs. multi-class vs. multi-label) and consider label imbalance (tracking raw accuracy in a heavily imbalanced binary setting is not very insightful). For fairness we look at the performance gap (depending on the performance metric of interest) in the binary attribute setting and the difference between the worst and best subgroup performance for categorical attributes. For continuous sensitive attributes, like age, we discretize them into appropriate buckets (specified in Appendix A). ### Clinical Tasks and Datasets #### 3.2.1 Histopathology The first setting that we consider is histopathology. Different staining procedures followed by different hospitals lead to distribution shifts that can challenge a machine learning model that has only encountered images from a particular hospital. The CAMELYON17 challenge by Bandi et al. (2018) aims to improve generalization capabilities of automated solutions and reduce the workload on pathologists that have to manually label those cases. The corresponding dataset contains whole-slide images from five different hospitals and the task is to predict whether the histological lymph node sections captured by the images contain cancerous cells, indicating breast cancer metastases. Two of the hospital datasets provided by the challenge are held-out for out-of-distribution evaluation and three are considered in-distribution, because they use similar staining procedures. We consider this as the simplest setting for our experiments, because there are no extreme prevalence or demographic shifts. Additionally, the considered image resolution (\(96\times 96\)) is smaller in comparison to the imaging modalities presented later, which allows generation directly at that resolution without requiring an upsampler. The labelled dataset contains \(455,954\) images, while the unlabelled dataset contains \(1.8\) million images from the three training hospitals; full statistics are given in Table A1. The unlabelled dataset _does_ contain the hospital identifier, but not the diagnostic label. In order to understand the impact of the number of labelled examples on fairness and overall performance, we create different variants of the labelled training set, where we vary the number of samples from two of the three training hospitals (3 and 4). The number of labelled examples from one hospital remains constant. 
For each setting, we train a diffusion model using the labelled and unlabelled dataset (using only the diagnostic label whenever available in one case, and the diagnostic label together with the hospital id in the other case). We subsequently sample synthetic images from the diffusion model and train a downstream classifier that we evaluate on the held-out in- and out-of-distribution datasets (results shown in Figure 3). We compare top-level classification accuracy and the fairness gap (i.e., the best-to-worst accuracy gap between the in-distribution hospitals) against different baselines (more details about baselines are provided in subsection E.2). We find that using synthetic data outperforms both baselines in-distribution in the less skewed (with 1000 labelled samples from hospitals 3, 4) and more skewed setting (with only 100 labelled samples), while closing the performance gap between hospitals. We obtain the best accuracy out-of-distribution when using all in-distribution labelled examples, as shown in Figure 3(b) (in the OOD setting there is one validation and one test hospital, so we do not report a performance gap). We find that performing color augmentation on top of the generated samples generalizes best overall, leading to a 7.7% absolute improvement over the baseline model on the test hospital. This validates that indeed we can use synthetic data to better model the data distribution and outperform variants using real data alone. We also observe that this method is most effective in the low-data regime (i.e. the more skewed setting in Figure 3(a)). In Figure G1 we show some examples of healthy and abnormal histopathology images generated at \(96\times 96\) resolution.

Figure 3 | (a) In-distribution fairness gap (in percentage) between the best and worst performing hospital vs. overall prediction accuracy for the presence of breast cancer metastases in histopathology images. (b) Prediction accuracy (x-axis) on the validation and test hospitals when training the generative model on all in-distribution labelled examples (the y-axis corresponds to method index). Note that the validation set is used for model selection, given that its distribution is more similar to the training distribution. We compare the following methods: _Baseline_ model with no augmentations; _Color augm_ for a model that uses color augmentations; _Label conditioning_ and _Label & property conditioning_ for our proposed approach of a generative model conditioned on the diagnostic label and both the diagnostic label and the hospital id, respectively; _L cond + Color augm_, _L & P cond + Color augm_ for applying color augmentations on the images generated with diffusion models. Combining color augmentation with synthetic data performs best across all settings.

#### 3.2.2 Chest Radiology

The second setting that we consider is radiology. We focus our analysis on two large public radiology datasets, CheXpert (Irvin et al., 2019) and ChestX-ray14 (US National Institutes of Health (NIH)) (Wang et al., 2017). These datasets have been widely studied by the community (Larrazabal et al., 2020; Rajpurkar et al., 2017; Seyyed-Kalantari et al., 2021) for model development and fairness analyses. For these datasets, demographic attributes like sex and age are publicly available, and classification is performed at a higher resolution, i.e. \(224\times 224\), as in Azizi et al. (2022).
After training the generative model and classifier on 201,055 examples of chest X-rays from the CheXpert dataset, we evaluate on a held-out CheXpert test set (containing 13,332 images), which we consider in-distribution, and the test set of ChestX-ray14 (containing 17,723 images), which we consider out-of-distribution (OOD) due to demographic and acquisition shifts. We focus on five conditions for which labels exist in common between the two datasets2, i.e., _atelectasis_, _consolidation_, _cardiomegaly_, _pleural effusion_ and _pulmonary edema_, while each of these datasets contains more conditions (not necessarily overlapping), as well as examples with no findings, corresponding to healthy controls. In this setting the model backbone is shared across all conditions, while a separate (binary classification) head is trained for each condition, given that multiple conditions can be present at once. Figure 4 illustrates how often different conditions co-occur in the training and evaluation samples. It is apparent that capturing the characteristics of a single condition can be challenging given that in most cases they coexist with other conditions. One characteristic example is pleural effusion, which is included in the diagnosis of atelectasis, consolidation and edema in 50% of the cases. However, the scenario is slightly different for the OOD ChestX-ray14 dataset, where for most pairs of conditions the corresponding ratio is much lower.

Figure 4: Heatmaps of normalized co-occurrence of conditions in the (a) CheXpert training and (b) the ChestX-ray14 evaluation datasets. For each condition on the row \(r\) of the heatmap, the corresponding column \(c\) indicates the ratio of all samples with condition \(r\) that also have condition \(c\). Note that more than two conditions can be present at once. We observe that in the training set it is much more common that more than one condition is present simultaneously.

It is worth noting that the original CheXpert training set contains positive, negative, uncertain and unmentioned labels. The uncertain samples are not considered when learning the classification model, but they are used for training the diffusion model. The unmentioned label is considered a negative (i.e. the condition is not present), which yields a highly imbalanced dataset. Therefore, we report the area under the receiver operating characteristic (ROC-AUC) curve in line with the CheXpert leaderboard, as raw accuracy is not very informative for such imbalanced settings.

Footnote 2: Note that the labelling procedures for the two datasets were defined and enacted separately, which likely increases the complexity of the task.

We observe that synthetic images improve the average AUC for the five conditions of interest in-distribution, but even more so out-of-distribution. Improvements are particularly striking for cardiomegaly, where the model trained purely with synthetic images improves AUC by 21.1% (see Figure G3). Overall, we observe an improvement of 5.2% on average AUC OOD and a 44.6% improvement in sex fairness gap (see Figure 5). We show some examples of augmentations generated by the diffusion model conditioned on the diagnostic label in Figure G1. Higher resolution images are generated in comparison to histopathology with the use of a cascaded diffusion model that upsamples images generated at \(64\times 64\) resolution to \(224\times 224\).
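The fairness metric used throughout can be computed directly: evaluate the performance metric within each subgroup of the sensitive attribute and report the gap between the best and worst subgroup. A minimal sketch for the per-sex AUC gap reported above (array names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fairness_gap(y_true, y_score, groups, metric=roc_auc_score):
    """Best-minus-worst subgroup performance for a given sensitive attribute."""
    groups = np.asarray(groups)
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[g] = metric(np.asarray(y_true)[mask], np.asarray(y_score)[mask])
    return max(per_group.values()) - min(per_group.values()), per_group

# Example for one condition (e.g., cardiomegaly):
# y_true: binary labels, y_score: predicted probabilities, sex: array of "female"/"male".
# gap, per_group = fairness_gap(y_true, y_score, sex)
```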
#### Dermatology

For the dermatology setting, we consider a dataset of images at \(256\times 256\) resolution grouped into 27 labelled conditions ranging from low risk (e.g. acne, verruca vulgaris) to high risk (e.g. melanoma). Out of these conditions, four are considered to be high-risk: basal cell carcinoma, melanoma, squamous cell carcinoma (SCC/SCCIS) and urticaria. The imaging samples are often accompanied by metadata that include attributes like biological sex, age, and skin tone. Skin tone is labelled according to the Fitzpatrick scale3, which gives rise to 6 categories (plus unknown). The ground truth labels for the condition are the result of aggregating clinical assessments by multiple experts, who provide a list of top-3 conditions along with a confidence score (between 1 and 5). A weighted aggregate of these labels gives rise to soft labels that we use for training the diffusion and downstream classifier models. For the purposes of our experiments we consider three datasets: an in-distribution dataset featuring 16,530 cases from a tele-dermatology dataset acquired from a population in the US (Hawaii and California); an OOD 1 dataset featuring 6,639 clinical-type images, focusing mostly on high-risk conditions, from an Australian population; and an OOD 2 dataset featuring 3,900 tele-dermatology images acquired in Colombia. These datasets are characterized by complex shifts with respect to each other, as the label distribution, demographic distribution and capture process may all vary across them. To demonstrate the severity of the prevalence shift across locations, we visualise the distribution of conditions in the evaluation datasets in Figure 6.

For training the downstream classifier, we use labelled samples from only one of these datasets (in-distribution), while we include unlabelled images from the other two distributions when training the diffusion model. We evaluate on a held-out slice of the in-distribution dataset and on the two out-of-distribution sets to investigate how well models generalize. We present results for OOD 2 only in Supplementary material G.3.1, as it has a similar label distribution to the in-distribution dataset and is less challenging. We explore whether the proposed approach can be used to _not only_ improve out-of-distribution accuracy _but also_ fairness across the different label predictions and attributes on the in-distribution data. Given that the images considered for dermatology are high resolution, we train a cascaded diffusion model that upsamples images generated at \(64\times 64\) resolution to \(256\times 256\).

While the datasets are already imbalanced with respect to different labels and sensitive attributes, we also investigate how performance varies as a dataset becomes more or less skewed along a single one of these axes. This allows us to better understand to what extent conditioning generative models on the axis of interest can help alleviate biases with regard to the corresponding attribute. For example, if our original dataset is skewed towards younger age groups, conditioning the generative model on age and (over)sampling from older ages can potentially help close the performance gap between younger and older populations4. We skew the labelled training dataset to make it progressively more biased (by removing instances from the least represented subgroups) and investigate how performance suffers as a result of the skewing.
For each sensitive attribute, we create new versions of the in-distribution dataset that are progressively more skewed towards the high-data regions. We show how the resulting training datasets are skewed with respect to each of the sensitive attributes in Table A2.

Footnote 4: To study this aspect, we cannot rebalance our datasets as we have too few samples from the long tail of our distribution with regards to the label or sensitive attribute.

In Figure 7, we illustrate for a single axis of interest how different methods compare with regard to sensitivity for the four high-risk conditions mentioned above and to fairness. In the more skewed setting the training dataset contains a maximum of 100 samples from the underrepresented subgroup regardless of the underlying condition, while in the less skewed setting it contains a maximum of 1000 samples.

Figure 5: Comparison of average AUC vs. fairness (AUC) gap across different baselines for radiology. We report results in- (left column) and out-of-distribution (right column) on the CheXpert and ChestX-ray14 datasets, respectively. We mark the baseline _Pretrained on JFT_ with black. _Label conditioning_ corresponds to the model that uses synthetic images from a diffusion model conditioned only on the diagnostic labels. We further compare to other strong contenders, i.e., a BiT-ResNet model pretrained on ImageNet-21K (_Pretrained on IN-21K_), a model pretrained on JFT using RandAugment heuristic augmentations (_RandAugment_), a model trained with RandAugment on top of standard ImageNet augmentations (_RandAugment + IN Augms_) and a model trained with focal loss (_Focal loss_). To ensure a fair comparison all methods are trained / finetuned for the same number of steps and with the same batch size. It is worth noting that for the fairness gap smaller values are preferable.

We compare all methods in four different settings: in- and out-of-distribution, as well as less and more skewed with respect to the sensitive attribute of interest, i.e. sex. We observe that in all settings, combining heuristic augmentations as in _RandAugment_ + _IN Augms_ does improve the predictive performance across the board, but harms fairness of the model. Pretraining on a different dataset, on the other hand, has a negative impact on both performance and fairness (except for some performance improvement in the less skewed setting). Using _RandAugment_ alone is beneficial for high-risk sensitivity in-distribution but not out-of-distribution, and it harms fairness in the OOD setting. _Oversampling_ slightly closes the fairness gap across the board while improving performance, as expected. The approaches that leverage synthetic data, _Label conditioning_ and _Label & property conditioning_, improve high-risk sensitivity in-distribution without reducing fairness, while they yield a significant improvement in the OOD setting on both axes. In the more skewed setting, in particular, _Label & property conditioning_ leads to 27.3% better high-risk sensitivity compared to the baseline in-distribution and a striking 63.5% OOD, while closing the fairness gap by 7.5\(\times\) OOD. It is worth noting that the underrepresented group in the training set and the ID evaluation set is over-represented in the OOD evaluation set. Our approach shows improvements in accuracy and fairness metrics with respect to different sensitive attributes, while being able to generalize these improvements out-of-distribution as shown in G.3.1.
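A minimal sketch of the skewing protocol described above is given below; it simply caps the number of labelled training examples from the underrepresented subgroup (e.g. 100 in the more skewed and 1000 in the less skewed setting). This is our own illustration and the column names are assumptions.

```python
import pandas as pd

def skew_dataset(df: pd.DataFrame, attribute: str, underrepresented_value,
                 max_samples: int, seed: int = 0) -> pd.DataFrame:
    """Keep at most `max_samples` rows whose `attribute` equals `underrepresented_value`;
    all other rows are kept unchanged, then the result is shuffled."""
    minority = df[df[attribute] == underrepresented_value]
    majority = df[df[attribute] != underrepresented_value]
    kept = minority.sample(n=min(max_samples, len(minority)), random_state=seed)
    return pd.concat([majority, kept]).sample(frac=1.0, random_state=seed)

# e.g. more skewed: skew_dataset(train_df, "sex", underrepresented_value=..., max_samples=100)
#      less skewed: skew_dataset(train_df, "sex", underrepresented_value=..., max_samples=1000)
```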
### In depth analysis for dermatology

In this section, our analysis focuses on the last modality, dermatology.

Figure 6: Condition distributions for the in-distribution and out-of-distribution dermatology datasets. In-domain and OOD 2 distributions are much more similar in comparison to OOD 1. In particular, 3 out of 4 of the most prevalent conditions (i.e. acne, eczema and other) in the in-distribution dataset are also the most prevalent in OOD 2. However, there are only a few examples of high-risk conditions like basal cell carcinoma and SCC/SCCIS, which are the two most prevalent conditions in OOD 1. We can see that overall the right-most dataset has a similar label distribution to the training dataset and, hence, is 'less' out-of-distribution than the other one.

Figure 7: Comparison of high-risk sensitivity (for basal cell carcinoma, melanoma, squamous cell carcinoma (SCC/SCCIS) and urticaria) vs. fairness gap w.r.t. sex in dermatology across different baselines. We report results in- (left column) and out-of-distribution for OOD 1 (right column), as well as for the less skewed (top row) and more skewed (bottom row) setting. We mark the baseline _Pretrained on JFT_ with black. _Label conditioning_ and _Label & property conditioning_ correspond to the models that use synthetic images sampled from a diffusion model conditioned on only the label, and on the label and sensitive attribute, respectively. We further compare to other strong contenders, i.e., a BiT-ResNet model pretrained on ImageNet-21K (_Pretrained on IN-21K_), a model pretrained on JFT using RandAugment heuristic augmentations (_RandAugment_), a model trained with RandAugment on top of standard ImageNet augmentations (_RandAugment + IN Augms_), a model trained on a resampled version of the training dataset that is more balanced w.r.t. the sensitive attribute (_Oversampling_) and a model trained with focal loss (_Focal loss_). To ensure a fair comparison all methods are trained / finetuned for the same number of steps and with the same batch size. It is worth noting that for the fairness gap, smaller values are preferable.

#### Generated images are diverse

First, we show images generated at \(256\times 256\) resolution for this challenging, natural setting and a number of dermatological conditions in Figure 8. We highlight that our conditional generative model does capture the characteristics well for multiple, diverse conditions, even for cases that are more scarce in the dataset, such as seborrheic dermatitis, alopecia areata and hidradenitis.

#### Generated images are realistic

We further evaluate how realistic the generated images are, as determined by expert dermatologists, to validate that these images do contain properties of the disease used for conditioning. We note that the synthetic images do not need to be perfect, as we are interested in downstream performance. However, being able to generate realistic images validates that the generative model is capturing relevant features of the conditions. To evaluate this, we ask dermatologists to rate a total of 488 synthetic images each, evenly sampled from the four most common classes (eczema, psoriasis, acne, SK/ISK) and four high-risk classes (melanoma, basal cell carcinoma, urticaria, SCC/SCCIS). They are tasked to first determine if the image is of a sufficient quality to provide a diagnosis. They are then asked to provide up to three diagnoses from over 20,000 common conditions with an associated confidence score (out of 5, where 5 is most confident).
These 20,000 conditions are mapped to the 27 classes we use in this paper (where one class, other, encompasses all conditions not represented in the other 26 classes). We report mean and standard deviation for all metrics across the three raters. \(50.0\pm 12.6\%\) of those images were found to be of a sufficient quality for diagnosis, while dermatologists had an average confidence of \(4.13\pm 0.43\) out of 5 for their top diagnosis. They had a top-1 accuracy of \(56.0\pm 11.9\%\) on the generated images and a top-3 accuracy of \(67.7\pm 12.5\%\). We compare these numbers to a set of real images of the same eight conditions considered above (for the images considered, the majority of raters deemed the given disease the most likely diagnosis for the image). Amongst 101 board-certified dermatologists rating 789 real images in total5, we found that their top-1 accuracy was \(54.0\pm 21.1\%\) and their top-3 accuracy \(67.1\pm 22.7\%\); slightly higher performance in terms of top-1 (63%) and top-3 accuracy (75%) was shown in (Liu et al., 2020) across a more diverse set of dermatological conditions. This demonstrates that, when diagnosable as per experts' evaluation, synthetic images are indeed representative of the condition they are expected to capture, similarly to the real images. Even though not all generated images are diagnosable, this can be the case for real samples as well, given that images used to train the generative model do not necessarily include the body part or view that best reflects the condition.

Footnote 5: For this analysis, if an image has been rated by \(N\) dermatologists, we consider a single rater's accuracy with respect to the aggregated diagnosis of the remaining \(N-1\) raters.

Figure 8: Generated images in the dermatology setting; each row of images corresponds to a different condition.

#### Generated images are canonical

We hypothesize that the reason why models become more robust to prevalence shifts is that synthetic images are more canonical examples of the conditions. To understand how canonical the ground truth images for a particular condition are, we investigate cases with a high degree of concordance in raters' assessments and compare those to synthetic images for the same condition. More specifically, we threshold the aggregated ground truth values to filter the images within the training data for which experts were most confident that a condition is present. The aggregation function operates as follows: assume we have a set of 4 conditions \(\{A,B,C,D\}\); if rater \(R_{1}\) provides the following sequence of (condition, confidence) diagnosis tuples: \(\{(A,4),(B,3)\}\) and rater \(R_{2}\) provides \(\{(A,3),(D,4)\}\), then we obtain the soft labels \(\{0.5,0.167,0,0.333\}\) (after weighting each condition with the inverse of its rank for each labeller, summing across labellers and normalising their scores to 1). If we look for instances for which there is consensus amongst raters and high confidence that a condition is present, we can threshold the corresponding soft label for that condition with a strict threshold, e.g. \(t=0.9\). In our example, this does not hold for any of the 4 conditions, but if we lowered the threshold to 0.5, then it would hold for condition \(A\).
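To make the aggregation above concrete, the following sketch reproduces the worked example. It is our own illustrative implementation and assumes that a condition's rank for a given rater is determined by decreasing confidence, which recovers the stated soft labels \(\{0.5,0.167,0,0.333\}\).

```python
from collections import defaultdict

def aggregate_soft_labels(ratings, conditions):
    """ratings: one list of (condition, confidence) tuples per rater.
    Each condition is weighted by the inverse of its rank within a rater's list,
    weights are summed across raters and then normalised to sum to 1."""
    scores = defaultdict(float)
    for rater in ratings:
        ranked = sorted(rater, key=lambda pair: -pair[1])  # rank by decreasing confidence
        for rank, (condition, _confidence) in enumerate(ranked, start=1):
            scores[condition] += 1.0 / rank
    total = sum(scores.values())
    return {c: scores[c] / total for c in conditions}

# Worked example from the text: R1 = {(A,4),(B,3)}, R2 = {(A,3),(D,4)}
soft = aggregate_soft_labels([[("A", 4), ("B", 3)], [("A", 3), ("D", 4)]],
                             conditions=["A", "B", "C", "D"])
print(soft)  # {'A': 0.5, 'B': 0.1667, 'C': 0.0, 'D': 0.3333}
# Thresholding at t = 0.9 selects none of the conditions; at t = 0.5 it selects A.
```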
In Figure 9 we show an example for melanoma. For this particular diagnostic class we are able to generate multiple synthetic instances of the condition, while we recovered only 5 images (out of \(>15,000\)) that clinicians rated with high confidence, i.e. \(t_{\text{melanoma}}=0.9\). The nearest neighbours from the training dataset, identified based on the \(\ell^{2}\)-norm, are also shown in Figure 9.

Figure 9: (_Left_) The diffusion model can produce an arbitrary number of synthetic images for a particular condition that is inherently more scarce (we have fewer than 50 samples of melanoma in our training dataset). (_Middle_) The images that experts have identified as melanoma with high confidence (a combination of individual confidence in the diagnosis and expert consensus). (_Right_) The nearest neighbours from the training samples identified for each of the synthetic images based on the \(\ell^{2}\)-norm in pixel space.

#### Generated images align feature distributions better

Previous work on out-of-distribution generalization (Albuquerque et al., 2019; Ben-David et al., 2010; Muandet et al., 2013) has pointed out that several factors can affect the performance of a model on samples from domains beyond the training data. In this analysis, we investigate the models trained with our proposed learned augmentations in terms of changes in distribution alignment between all pairs of distributions, measured via the Maximum Mean Discrepancy (MMD) (Gretton et al., 2012), as previous work has empirically shown that approaches based on learning features that decrease MMD estimates yield improved out-of-distribution generalization (Li et al., 2018). We compute domain mismatches considering the space where decisions are performed, i.e., the output of the penultimate layer of each model. We thus project each data point from the input space to a representation. We find that learned augmentations yield on average 18.6% lower MMD in comparison to heuristic augmentations (for more details refer to G.3.1), which leads to the following conclusions: (i) Data augmentation has a significant effect on distribution alignment. The improvement in OOD performance suggests this is happening via learning better predictive features rather than capturing spurious correlations. (ii) Generated data helps the model to better match different domains by attenuating the overall discrepancy between domains. (iii) Given the minor decline in performance when adding generated data in the less skewed setting, as shown in Figure 7, these findings suggest that learning such features might conflict with learning spurious correlations that were helpful for in-distribution performance.

#### Synthetic images reduce spurious correlations

To further compare the effect of different augmentation schemes on the features learned by the downstream classifier, we investigate the representation space occupied by all considered datasets, including samples obtained from the generative model. In practice, we project \(N\) randomly sampled instances from each dataset to the feature space learned by each model and apply the Principal Component Analysis algorithm (Abdi and Williams, 2010). We then extract the number of principal components required to represent different fractions of the variance across all instances projected to the feature spaces induced by models obtained with heuristic and learned augmentations. We observe that, for a fixed dataset, features from models trained with synthetic data require 5.4% fewer principal components to retain 90% of the variance in latent feature space (results for different fractions are provided in Figure G8). This indicates that using synthetic data induces more compressed representations in comparison to augmenting the training data in a heuristic manner. Considering this finding in the context of the results in Table G1, we posit that the observed effect is due to domain-specific information being attenuated in the feature space learned by models trained with synthetic data. This suggests that our proposed approach is capable of reducing the model's reliance upon correlations between inputs and labels that do not generalize out-of-distribution.
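The two analyses above can be sketched as follows; this is our own illustrative code (the RBF kernel and its bandwidth are assumptions, since the paper does not specify the kernel settings), operating on penultimate-layer features extracted from each domain.

```python
import numpy as np
from sklearn.decomposition import PCA

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float) -> np.ndarray:
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2_unbiased(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Unbiased estimate of squared MMD between feature sets x [m, d] and y [n, d]."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, gamma); np.fill_diagonal(kxx, 0.0)
    kyy = rbf_kernel(y, y, gamma); np.fill_diagonal(kyy, 0.0)
    kxy = rbf_kernel(x, y, gamma)
    return kxx.sum() / (m * (m - 1)) + kyy.sum() / (n * (n - 1)) - 2.0 * kxy.mean()

def components_for_variance(features: np.ndarray, fraction: float = 0.9) -> int:
    """Number of principal components needed to retain `fraction` of the variance."""
    pca = PCA().fit(features)
    cumulative = np.cumsum(pca.explained_variance_ratio_)
    return int(np.searchsorted(cumulative, fraction) + 1)

# Usage: feats_a, feats_b are penultimate-layer outputs for two domains.
# domain_gap = mmd2_unbiased(feats_a, feats_b)
# n_components = components_for_variance(np.concatenate([feats_a, feats_b]))
```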
## 4 Discussion

In this work, we propose to use conditional generative models for improving robustness and fairness of machine learning systems applied to medical imaging. More specifically, we show that diffusion models can produce useful synthetic images in three different medical settings of varying difficulty, complexity and resolution: histopathology, radiology and dermatology. Our experimental evaluation provides extensive evidence that synthetic images can indeed improve statistical fairness, balanced accuracy and high-risk sensitivity in a multi-class setting, while improving robustness of models both in- and out-of-distribution. In fact, we observe that generated data can be more beneficial out-of-distribution than in-distribution, even in the absence of data from the target domain during training of the generative model (in the case of radiology). Generative models prove to be label efficient in both the histopathology and dermatology settings, where we demonstrate that only a few labelled examples are sufficient for the diffusion models to capture the underlying data distribution well. This is particularly impactful in the medical setting, where data for particular conditions or demographic subgroups can be scarce or, even when available, acquiring expert labels can be expensive and time consuming. For the reader familiar with regularization techniques, we view diffusion models as another form of regularization, which can be combined with any other architecture or learning method improvements.

Even though we do not make any assumptions when training the diffusion model, we find interesting dynamics when combining real and synthetic data. In certain settings, i.e., histopathology and radiology, we observe that we can rely purely on generated data and still outperform baselines trained with real labelled data (see G.1). In other settings, like dermatology, we observe that real data is more essential for training the downstream discriminative model. We take a step further and analyze the impact of generated data and the mechanisms underlying the improvements in robustness and fairness that we report. Synthetic samples seem to better align distributions of different domains, while at the same time allowing models to learn more complex decision boundaries that reduce their reliance on spurious correlations. Finally, we highlight some practical benefits (highlighted in green) and discuss a number of potential risks (highlighted in red) and limitations (in orange) of relying on generated data.

**Reusability of synthetic data.** Beyond the analysis and utility of synthetic data for the particular tasks that we consider in this work, there are many other potential applications for which it can be useful. The same synthetic data can be used for data augmentation across different models and, potentially, tasks. For example, hand-crafted augmentations are often employed to introduce invariances and learn better representations in a self-supervised manner for a variety of downstream tasks.
**Scalable approach.** As we demonstrate in subsection D.3, if we have a perfect generative model then we can perform perfectly under the fair distribution. Moreover, the better the generative model, the more our results should improve. As a result, as generative modelling improves or as more data becomes available, results should improve accordingly.

**Utility for leveraging private data sources.** Combining this technique with privacy-preserving technologies holds a lot of promise in the medical field. One of the main reasons why transformative AI technologies have not yet demonstrated equivalent impact in the healthcare domain is regulation and limited data access. There is preliminary evidence that federated learning can be used to learn classification models from multiple institutions (Kaissis et al., 2021), and if it were possible to generate private synthetic data, this synthetic data could be used for data augmentation along with a smaller, public dataset to improve performance. This could have practical benefits when data sharing is restricted to protect personally identifiable information (PII), while still achieving high-quality performance. Such an approach would of course be associated with its own risks, some of which are discussed by Cheng et al. (2021).

**Overconfidence in the model.** Even though we show that diffusion models can be particularly label efficient, this should not encourage practitioners to abandon their data and label acquisition efforts; nor does it imply that generated data can replace real data under any circumstances. What this research demonstrates is that, when labelled data and resources are limited, there are ways to make more of the available labelled and unlabelled data. There is also the potential that using generative models may lead to overconfidence in an AI system, because images look realistic to a non-expert. Additional data collection will always be important, along with comprehensive analysis of the underlying data and its caveats. Synthetic data from a generative model should _only_ be used as a complement to additional data collection and accompanied by rigorous evaluation on real data, ideally outside the main source domain, to understand the generalization capabilities of the models. In other words, synthetic data is one solution to increase diversity, but not a substitute for efforts to increase data representation for underrepresented conditions and populations.

**Bias in the training data.** If the generative model is of poor quality or biased, then we may end up exacerbating problems of bias in the downstream model. The generative model may be unable to generate images of a certain label and sensitive attribute. In other settings, the model may always generate a specific part of the distribution for a certain label and sensitive attribute instead of capturing the true image distribution. The generative model may also create incorrect images of a given label and sensitive attribute, leading the classification model to make mistakes confidently in those regions. Therefore, it is particularly important that the evaluation data is unbiased.

**Bias in the evaluation.** The insights that we obtain by analyzing the model are only as good as our evaluation setup. If the evaluation datasets are not diverse enough, do not capture high-risk conditions well or are not representative of the population, then any conclusions we draw from these results will be limited. Therefore, care needs to be taken in order to report and understand what each of the evaluation setups is capturing.
For example, as Varoquaux and Cheplygina (2022) highlight, clinician-level performance is often overstated without validating models out-of-distribution.

**Categorical and unobserved attributes.** Sensitive attributes are not always observed or explicitly tracked and reported (Tomasev et al., 2021), often to protect people's privacy. At the same time, the way labels are assigned may have its own limitations. For example, using binary gender and sex attributes (or using the two interchangeably) does not represent people that identify as non-binary. Similarly, researchers have criticized the Fitzpatrick Skin Type because it is less accurate on shades of darker skin tones, which could cause models to misidentify or misrepresent people with darker skin. Similarly, there are other unobserved characteristics, such as social determinants of health, that can influence disease and are not captured in a visual image of skin. One instance of this is how dermatitis on a person who lives in a communal setting could have a different differential diagnosis than dermatitis in a high-income individual. These are important considerations when relying on such attributes to condition learned augmentations or to perform fairness analyses.

**Transparency when handling synthetic data.** Synthetic images should be handled with caution as they may perpetuate biases in the original training data. It is important to tag and identify when a synthetic image has been added to a database, especially when the dataset may be reused in a different setting or by different practitioners. We see potential here for future work that improves fairness and out-of-distribution generalization by leveraging powerful generative models, but without explicitly relying on pre-defined categorical labels. When we consider synthetic images as an option for addressing performance gaps across subgroups, the following challenges still need to be addressed: reducing memorization for rare attributes and conditions, providing privacy guarantees and accounting for unobserved characteristics.

## 5 Acknowledgements

We would like to thank Mikolaj Binkowski for his input on the data preprocessing for the diffusion upsampler and William Isaac for his input on the ethical risks of this work. We would also like to thank Florian Stimberg, Jan Freyberg, Terry Spitz, Vivek Natarajan, Yun Liu, and David Warde-Farley for providing feedback at different stages of the project, as well as Sophie Elster, Zahra Ahmed, Nina Anderson and Patricia Strachan for their organisational support. Last, but not least, we thank Jessica Schrouff, Yuan Liu, Heather Cole-Lewis and Naama Hammel for the technical feedback they provided on the manuscript.

## 6 Author Contributions

O.W., S.G. and P.K. initiated the project. O.W., I.K. and S.G. contributed to the design of the method and experiments. O.W., S.G. and T.C. contributed to the formulation of the method. A.G.R. provided pointers to the datasets. I.K., O.W., S.G. and A.G.R. contributed to software engineering. I.A. performed in-depth analysis on distribution matching and spurious correlations. R.T. and O.W. performed analysis of different sampling schemes. I.K. trained upsamplers and produced high-resolution images. O.W. performed nearest-neighbour analysis for dermatology. A.K. helped formulate the problem in the clinical setting. I.K. and O.W. performed experiments on different modalities. I.K. and O.W. analysed results from expert evaluations in dermatology. S.A.R.
performed analysis on mis-classification rates for high-risk individual samples. I.K., O.W., I.A., R.T., S.A.R. and S.G. contributed to the evaluation of the work and performed analysis. I.K., O.W., I.A., R.T., S.A.R., A.G.R., A.K. and S.G. contributed to the interpretation of the results. D.B. and P.K. advised on the work. I.K., O.W., I.A. and R.T. wrote the paper. S.G., P.K., D.B. and A.K. revised the manuscript.
2304.11457
Phase biasing of a Josephson junction using Rashba-Edelstein effect
Manifestation of orbital coupling of spin degree of freedom in condensed matter systems has opened up a new dimension for the field of spintronics. The most appealing aspect of the spin-orbit coupling is the apparent Magnus force sensed by a spin system which locks the Fermi momentum with electron spin in a fascinating manner. In the current carrying state, the resulting macroscopic spin polarization becomes directly accessible in the form of spin current or spin density. At a Rashba interface, for example, a charge current shifts the spin-locked Fermi surface, leading to a non-equilibrium spin density at the interface, commonly known as the Rashba-Edelstein effect. Since the Rashba-Edelstein effect is an intrinsically interface property, direct detection of the spin moment is harder to set-up. Here we demonstrate that a simple planar Josephson Junction geometry, realized by placing two closely spaced superconducting electrodes on such a Rashba interface, allows a direct estimation of strength of the non-equilibrium spin moment. Measurements of Fraunhofer patterns of Nb-(Pt/Cu)-Nb planar Josephson junctions in a perpendicular magnetic field showed a shift of the center of the Fraunhofer pattern to a non-zero field value. By performing extensive control measurements, we argue that the screening currents in the junction effectively lock the external field with the spin moment of the Rashba-Edelstein effect induced spin-density, leading to the observed shift in the Fraunhofer patterns. This simple experiment offers a fresh perspective on direct detection of spin polarization induced by various spin-orbit effects. Very interestingly, this device platform also offers the possibility of retaining a controllable phase at zero field in the junction without using any magnetic material, and thereby useful as phase batteries for superconducting quantum circuits.
Tapas Senapati, Ashwin Kumar, Kartik Senapati
2023-04-22T18:11:54Z
http://arxiv.org/abs/2304.11457v2
# Phase biasing of a Josephson junction using Rashba-Edelstein effect ###### Abstract Manifestation of orbital coupling of spin degree of freedom in condensed matter systems has opened up a new dimension for the field of spintronics. The most appealing aspect of the spin-orbit coupling is the apparent Magnus force sensed by a spin system which locks the Fermi momentum with electron spin in a fascinating manner. In the current carrying state, the resulting macroscopic spin polarization becomes directly accessible in the form of spin current or spin density. At a Rashba interface, for example, a charge current shifts the spin-locked Fermi surface, leading to a non-equilibrium spin density at the interface, commonly known as the Rashba-Edelstein effect. Since the Rashba-Edelstein effect is an intrinsically interface property, direct detection of the spin moment is harder to set-up. Here we demonstrate that a simple planar Josephson Junction geometry, realized by placing two closely spaced superconducting electrodes on such a Rashba interface, allows a direct estimation of strength of the non-equilibrium spin moment. Measurements of Fraunhofer patterns of Nb-(Pt/Cu)-Nb planar Josephson junctions in a perpendicular magnetic field showed a shift of the center of the Fraunhofer pattern to a non-zero field value. By performing extensive control measurements, we argue that the screening currents in the junction effectively lock the external field with the spin moment of the Rashba-Edelstein effect induced spin-density, leading to the observed shift in the Fraunhofer patterns. This simple experiment offers a fresh perspective on direct detection of spin polarization induced by various spin-orbit effects. Very interestingly, this device platform also offers the possibility of retaining a controllable phase at zero field in the junction without using any magnetic material, and thereby useful as phase batteries for superconducting quantum circuits. Keywords:Rashba-Edelstein effect, Josephson effect, Fraunhofer patterns ## Introduction Spin-orbit coupling has emerged as a clean process of generating pure spin current and spin polarization in solid-state spintronics, without using a magnetic layer [1, 2, 3, 4]. The effective magnetic field arising from the drift motion of an electron in the radial atomic potential gradient of a bulk metallic system decouples the motion of spin-up and spin-down electrons, leading to a non-equilibrium spin current in a direction transverse to the charge current, popularly known as spin-Hall effect[5, 6, 7]. The Onsager reciprocity relation also allows for an inverse effect, where a bulk spin current leads to a charge drift current in a transverse direction[8, 9]. A related effect ensues dominantly at an interface with broken structural inversion symmetry due to the interfacial potential gradient. In this case, electron spin locks to the crystal momentum through the Rashba Hamiltonian (\(H_{R}=\frac{\alpha_{R}}{\hbar}(E_{z}\times p)\cdot S\)) and the up/down spin bands split in energy [10, 11, 12]. Therefore, a charge current driven by an external electric field translates the parabolic band leading to a net spin polarization[10, 11, 12]. 
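For concreteness, the band picture sketched above can be summarised with the textbook Rashba dispersion and the current-induced shift of the Fermi sea (this display is our own illustrative summary, not an equation taken from the paper):

\[
E_{\pm}(k)\;=\;\frac{\hbar^{2}k^{2}}{2m^{*}}\;\pm\;\alpha_{R}\,|k|,
\qquad
\delta k_{x}\;=\;-\,\frac{eE_{x}\tau}{\hbar},
\]

where \(\tau\) is the transport scattering time. The rigid shift \(\delta k_{x}\) of the two spin-momentum-locked Fermi circles leaves unequal populations of the two spin helicities, i.e. a net in-plane spin polarization transverse to the drive current, which is quantified by the Edelstein relation given below.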
In the real space scenario, a charge current parallel to a Rashba-interface (orthogonal to the interfacial electric field) leads to a non-equilibrium spin density orthogonal to the interfacial electric field and to the drift current following the relation (\(S=\frac{\alpha_{R}m}{e\hbar}(E_{z}\times j_{c})\)), where E\({}_{z}\) is the interfacial electric field and S is the spin polarization[13]. This inverse spin-galvanic effect or Rashba-Edelstein effect has been studied in 2D electron gas and other metallic interfaces in proximity of heavy metals, especially in the context of spin-orbit torque devices[14, 15, 16, 17, 18]. At metal-metal interfaces, the interfacial electric field arises from the relative orbital arrangement of the two metals in contact [19, 20, 21, 22]. The basic protocol that all experimental probes followed is to measure a charge current produced by an injected spin polarization in a non-local measurement technique, which is the inverse Rashba-Edelstein effect [23, 24, 22]. The interaction of charge current with the spin density at the Rashba interface has also been studied via ferromagnetic resonance (FMR) techniques, by pumping microwaves through the interface [25, 26, 27]. Direct measurement of the charge current-induced surface spin density due to the Rashba-Edelstein effect is, however, challenging. In the case of spin-Hall effect, a spin current is induced by the charge current, which can carry a torque into a proximal ferromagnetic layer for detection. Rashba-Edelstein effect, on the other hand, only creates a non-equilibrium spin density (polarization) at the Rashba interface rather than a spin-current transferable to another layer for detection. The inverse Rashba-Edelstein effect and the FMR technique require a ferromagnetic layer for detection [28]. Here we report a phase-sensitive experiment enabling direct detection of this non-equilibrium spin density at the interface, without using a ferromagnetic layer. Figure 1 describes the basic architecture of the experiment. The Rashba interface was created **Fig. 1: Concept of interface spin polarization generation and detection in a planar Josephson junction (a)** A schematic representation of the slitting of bias current in a Pt/Cu bilayer injected through Nb electrodes in the normal state of Nb. The total injected normal current, in this case, can be represented as a sum of the currents carried by the Pt layer (J\({}_{Pt}\)), the Cu layers (J\({}_{Cu}\))and the Pt/Cu interface (J\({}_{Int}\)). We have represented the Pt/Cu interface as a separate layer as the current at the interface produces a non-equilibrium spin moment M\({}_{RE}\) due to the Rashba-Edelstein effect. Current through the heavy metal Pt layer generates spin polarization in a transverse direction by the spin hall effect.**(b)** When the same device is cooled to a temperature below the transition temperature of Nb electrodes (\(T<T_{c}\)), and a Josephson coupling is established between them, then the entire injected current is carried by the proximatized Cu layer. However, at the Pt/Cu interface, pair breaking by the spin-orbit coupling effects allows for some quasiparticle current J\({}_{Q}\). The Pt/Cu Rashba interface creates an in-plane spin polarization \(M_{RE}\) due to the quasiparticle current J\({}_{Q}\)**(c)** The band representation of current driven shift of the momentum locked up-spin and down-spin bands causing a spin asymmetry at the Fermi surface. 
This causes the non-equilibrium spin moment depicted as M\({}_{RE}\) in panel (a) and (b).**(d)** In the presence of an external magnetic field the spin moment attains a component along the field which can couple into the junction area. by depositing a layer of Cu on top of a thin layer of Platinum[27, 28, 29]. As shown in the schematic Fig 1(a), a bias current across the Nb electrodes, in the normal state of Nb, is decomposed into parallel current channels through the Cu and the Pt layers, depicted as J\({}_{Cu}\) and J\({}_{Pt}\). Therefore, in the normal state of the Nb electrodes, J\({}_{Pt}\) produces a spin-Hall current (J\({}_{S}\)) inside the Pt layer. Similarly, the current at the interface of the Cu and Pt layers (shown as J\({}_{Int}\) in Fig 1(a)) produces a non-equilibrium spin density in the interfacial plane denoted as an equivalent moment M\({}_{RE}\). The bulk spin-Hall effect and the interface Rashba-Edelstein effects arise simultaneously in the system, which are difficult to isolate in real systems. On the other hand, sufficiently below the superconducting transition temperature of the Nb electrodes, a Josephson coupling can be established across the (Pt/Cu) barrier in the same device for a small enough separation between the Nb electrodes. In that case, the Cu layer gets proximatized and carries all the supercurrent across the junctions. In this planar Josephson Junction geometry, the bulk current in Pt can be entirely shorted through the proximatized Cu layer. This scenario is schematically shown in Fig 1(b). This simple geometry provides two immediate advantages, viz. (i) only a quasi-particle current can exist at the Cu/Pt Rashba-interface due to the pair breaking effects[30] which can generate the inverse spin-galvanic effect and (ii) the high phase sensitivity of the resulting planar Josephson Junction offers the possibility of direct sensing of the accumulated spin density [31], without having to transfer the spin-density information to another layer. Considering the geometry of the device in Fig 1 (a) and (b), the Rashba field (2\(\alpha_{R}\) k\({}_{F}\)/\(g\mu_{B}\)) points in the y-direction in the presence of a charge current in the x-direction. Here \(\alpha_{R}\) is the Rashba parameter, \(k_{F}\), g, and \(\mu_{B}\) are the Fermi momentum, g-factor, and Bohr magnetron, respectively. The current driven relative displacement of the momentum locked up-spin, and down-spin bands is shown in Fig 1(c), which leads to the Rashba-Edelstein effect. An external magnetic field applied perpendicular to any planar Josephson junction leads to the usual Fraunhofer-like critical current variations in the junction [32, 33, 34]. In the presence of an interface spin moment M\({}_{RE}\), in the Josephson device shown in Fig 1(b), a component of M\({}_{RE}\) can be coupled to the junction [35, 36] which can show up in the Fraunhofer patterns. Here we show that indeed this is the case in Nb-(Pt/Cu)-Nb Josephson junctions(JJ) and SQUIDs. ## Results ### Josephson coupling across Nb-(Cu/Pt)-Nb planar junctions and nano-SQUIDs The central results of this work have been shown in Fig. 2. Panel (a) shows the false color electron micrograph of an actual DC nano-SQUID consisting of Nb-(Cu/Pt)-Nb planar junctions with 100 nm thick Cu and 50 nm thick Pt layer. In panel (b) of Fig. 2, we plot the voltage (at a constant current of 100 \(\mu\)Amps) across the nano-SQUID as an out-of-plane magnetic field was swept between \(\pm\)20 mT. 
The resistance of the SQUID device shows characteristic oscillations superimposed on a Fraunhofer-like background, typically expected for a functional SQUID device.

**Fig. 2**: **Signature of interface Rashba-Edelstein effect in JJs and SQUIDs.** **(a)** False coloured SEM micrograph of an Nb-(Pt/Cu)-Nb SQUID, with 150 nm of Nb (green), 100 nm of Cu (orange), and 50 nm of Pt (uncoloured). **(b)** Magnetic field response of the voltage across the same SQUID device, measured at 2K with a bias current equal to the critical current at that temperature. The external field was applied perpendicular to the plane of the loop. The arrows represent the direction of field sweep during these measurements. There is a clear offset between the up-sweep and down-sweep Fraunhofer patterns. **(c)** Magnetic field response of a single planar Josephson junction prepared from an Nb-(Pt/Cu)-Nb trilayer with a 50 nm Pt layer also shows a similar offset between the up-sweep and down-sweep. On the contrary, the Fraunhofer pattern of a similar junction without the Pt layer does not show any relative shift between the up-sweep and down-sweep data in panel (d). **(e)** The Josephson device showing a clear offset in the Fraunhofer pattern in a perpendicular field (panel (c)) did not show any such shift when measured with an in-plane magnetic field, as shown in panel (e). **(f)** The net offset in the Fraunhofer pattern for the Nb-(Pt/Cu)-Nb JJ with a 30 nm Pt layer, measured at 2K with bias currents of 100\(\mu\)A, 125\(\mu\)A and 150\(\mu\)A, did not show any significant change. The inset in panel (f) shows the zero-field current-voltage (IV) curve and marks the bias currents on this plot.

We note here that the field variation of the junction resistance is analogous to the field variation of the critical current, and the two quantities are related to each other in an inverse manner. We show the equivalence of both these measurements in supplementary Fig A2 by explicitly measuring the field variation of both quantities. Fig 2(c) shows the field variation of the device resistance for a single Josephson junction. In both cases (Figs 2(b) and (c)), the Fraunhofer-like field response of the device resistance shows a distinct offset between the forward and reverse sweeps of the magnetic field, which is unlike typical non-magnetic Josephson junctions [37, 38, 39]. In JJs with ferromagnetic barriers, hysteresis in the Fraunhofer pattern is observed, arising from the flux remanence of the ferromagnetic barrier itself. During the down sweep of the magnetic field from a positive saturation field, the remnant magnetic moment of a ferromagnetic barrier shifts the central maximum of the Fraunhofer pattern to the negative field, and vice-versa [40, 41, 42, 43]. However, we would like to point out that the observed relative shift in the Fraunhofer pattern with voltage oscillations of the Nb-(Cu/Pt)-Nb SQUID in Fig 2(b) is unlike the hysteresis in ferromagnetic JJs. In this case, while decreasing the magnetic field from positive saturation, the central position of the Fraunhofer pattern appears to shift in the positive direction. The arrows in Figs 2(b) and 2(c) indicate the direction of the field sweep in these measurements. In contrast, an Nb-Cu-Nb planar junction without the Pt layer, prepared via the same route, does not show any significant offset between the forward and reverse sweep of the out-of-plane magnetic field, as shown in Fig 2(d). This observation clearly points towards the fact that the Cu/Pt interface is the primary reason behind the observed effect.
Fig 2(e) plots the voltage across the same junction when the magnetic field was applied parallel to the junction, as shown in the inset schematic. In contrast to the perpendicular field, the in-plane field does not induce any offset between the forward and reverse sweeps of the magnetic field. Another important characteristic of the observed offset in the Fraunhofer pattern is that the junction bias current has a negligible effect on the net offset, as shown in Fig 2(f). This is consistent with the data obtained for the SQUID devices as well (see supplementary Fig A5). The inset in Fig 2(f) marks the bias currents used in these measurements with respect to the current-voltage characteristics of the same junction. In the next section, we argue that the non-equilibrium spin moment created by the Rashba-Edelstein effect at the interface between Pt and Cu introduces an additional phase in the Josephson junction, resulting in the unusual shift in the Fraunhofer pattern seen in Fig 2.

### Generation of spin density at the Cu/Pt interface

The most prominent feature of Fig 2 is the observation of a relative offset (\(\Delta\)H) between the forward and reverse sweeps of the Fraunhofer patterns. Usually, the central peak of the Fraunhofer pattern of a standard JJ corresponds to zero net flux in the junction, equivalent to a constant phase difference of \(\pi/2\) between the superconducting electrodes throughout the junction [44]. Therefore, a shift in the central peak can appear only in the presence of an additional phase in the junction. In ferromagnetic JJs, for example, the position of the central peak corresponds to the coercive fields, where the net moment in the junction becomes zero [45, 46]. In the present case, while sweeping the magnetic field up from a negative value, the central peak of the Fraunhofer pattern appears in the negative field regime. Since there are no magnetic moments in the present case, the observed shift can arise only from a net spin moment present in the junction. Fig 2(d) shows that there is no relative shift in the Fraunhofer patterns of a junction without the Pt layer. Therefore, the presence of a noncentrosymmetric potential gradient in the Pt layer must be the origin of the spin moment in the planar junction [47, 48]. There are two ways in which a Pt layer can generate spin polarization. A normal current in the bulk of the Pt layer can generate a polarization via the spin-Hall effect [49, 50, 51]. Similarly, a current at the Pt/Cu interface can generate a spin moment via the inverse spin-galvanic effect [29, 52]. Therefore, in order to establish that the spin moment seen by the JJs and SQUIDs in our case arises from the interface Rashba-Edelstein effect, the bulk spin-Hall effect must be ruled out. For this purpose, we have performed the following control experiments. In our case, in the superconducting state of the planar junction, the Cu layer is proximatized by the superconducting electrodes and carries all the current [53]. This is supported by the fact that the separation between the superconducting Nb electrodes in the planar JJs and SQUIDs used in this study varied between 40 nm and 100 nm, for which the Pt layer underneath the Cu layer remains in the normal state down to the lowest measurement temperature. This was verified in several junctions by milling out the Cu layer entirely from the junction region to realize Nb-Pt-Nb junctions from the same chips.
A sample comparative response of junctions fabricated with and without the Cu layer in the barrier region on the same chip is shown in Fig 3(a). The temperature-dependent resistance of the junction with the Pt/Cu barrier showed a clear junction proximatization below 4 K, whereas the junction with only Pt barrier did not proximatize down to 2K. The inset current-voltage curves measured at 2K also show a clear supercurrent for the Nb-(Pt/Cu)-Nb junction, while Nb-Pt-Nb junction shows pure resistive behaviour. The energy dispersive X-ray (EDX) elemental maps of the actual devices are also shown in the inset of Fig 3(a). The absence of Cu in the junction region of the Nb-Pt-Nb junction compared to the Nb-(Pt/Cu)-Nb junction was clearly verifiable from these images. In order to directly rule out the contribution of the bulk Pt in the observed shift, we thinned down the Pt layer underneath a planar junction using focused ion beam milling to realize a suspended Nb-(Pt/Cu)-Nb junction with thinner Pt. The Fraunhofer patterns and the IV curves of the same device before and after thinning the Pt layer are compared in panels (c) and (d) in Fig. 3. The false colour FESEM images of the actual Josephson device and the schematic diagrams are shown as insets in the respective panels. In both cases, almost the same offset of \(\sim\)4 mT was observed in the Fraunhofer patterns, as shown in the main panels of Fig 3(c) and (d). This observation confirms that there is a negligible bulk contribution in the observed effect. The inset IV curves in both cases, apart from a small change in the slope, did not show any change in the critical current. Consistent with the observation of Fig 3(a), no change in the critical current of the two junctions reconfirms that Pt layer does not contribute to the supercurrent in the junction. The magnitude of the Rashba-Edelstein effect is expected to be directly proportional to the bias current. Therefore, the observation of seemingly current independent offset in the Fraunhofer pattern may appear contradictory to the above discussion. However, we note here that in these planar Josephson devices, the interface quasi-particle current J\({}_{Q}\) is responsible for the observed effect rather than the full bias current. In a Nb-(Pt/Cu)-Nb junction, measured at a fixed temperature of 2K, most of the bias current gets carried across the Josephson barrier through the Cu layer as a supercurrent for any bias current around the critical current. Therefore the magnitude of J\({}_{Q}\), responsible for the Rashba-Edelstein effect, does not get affected with the bias current, as seen in Fig 2(f). ## Discussion **Fig. 3: **Control experiments confirming a quasiparticle mediated Rashba- Edelstein effect(a)** Comparison of low temperature resistance(R(T)) of an Nb(150nm)-[Pt(50nm)/Cu(100nm)]-Nb(150nm) JJ (open symbols) and Nb(150nm)-Pt(50nm)-Nb(150nm) JJ(solid symbols)measured with a bias current of 10\(\mu\)A. The junction without Cu-layer does not show any proximatisation. Inset shows the IV characteristics for both junctions at zero field. Nb-(Pt/Cu)-Nb junction shows a critical current \(\sim\) 200\(\mu\)A. The inset EDAX elemental maps for both these junctions clearly show the absence of Cu in the Nb-Pt-Nb junction, as represented in the device schematics.**(b)** R(T) plot for an Nb-(Pt(30nm)/Cu)-Nb junction is shown on the right-hand axis along with the temperature dependence of \(\Delta\)H on the left-hand axis, shows a direct connection between quasiparticle current and \(\Delta\)H. 
The \(\Delta\)H values were extracted from V(H) curves measured at the respective temperatures with 200\(\mu\)A current.**(c)** The voltage response of the Nb-(Pt(30nm)/Cu)-Nb junction shows a \(\Delta\)H of 4mT. Partial thinning of the Pt layer under the junction area in the same device did not change \(\Delta\)H, as shown in panel(d). The false coloured FESEM image of the device and the schematic are inset in the respective panels, along with the IV curves. Since the above discussion excludes the bulk contribution to the induced non-vanishing spin moment, the only possible source of a net moment in the junction barrier is, therefore, the spin-moment created by the Rashba-Edelstein effect at the Pt/Cu interface [27, 48, 54]. Cooper pair breaking at the interface of Pt and Cu [55, 56] can result in a quasi-particle current at the interface, generating the Rashba-Edelstein effect and the consequent spin density [57, 58, 59, 60, 61]. The net shift in the Fraunhofer patterns (\(\Delta\)H) as a function of temperature, measured with a fixed bias current of 200 \(\mu\)A, also supports this argument. The temperature dependence of \(\Delta\)H has been plotted on the left-hand axis of Fig 3 (b). The junction resistance has been plotted on the right-hand axis on the same plot. This plot clearly shows that an increase in \(\Delta\)H follows the pattern of a decrease in the junction resistance. In the R(T) data shown in Fig 3(b), the proximatization of the junction area (essentially the Cu barrier) starts below \(\sim\)4.5 K. As the temperature decreases further, the magnitude of the supercurrent through the Cu barrier increases, leading to an increase in the quasi-particle current density J\({}_{Q}\) at the Pt/Cu interface. Since the magnitude of the Rashba-Edelstein effect is directly proportional to the bias current [13], the increase in J\({}_{Q}\) leads to the increase in the observed offset \(\Delta\)H in the Fraunhofer pattern. A closer look at the Fraunhofer patterns of planar junctions with Pt underlayer enunciates a peculiar asymmetry between the two parts of the pattern around the central minimum. A flux quantum through the junction equates to a relatively lower external field while decreasing the field magnitude compared to the case of increasing field magnitude. This general aspect of all the Nb-(Pt/Cu)-Nb junctions and SQUIDs has been emphasized in Fig 4 (a) where the V(H) curve has been plotted for a sample with 30 nm of Pt underlayer. In this figure, the arrows indicate the direction of the field sweep, and the solid lines are fitted to a Fraunhofer-type relation. The two halves of the Fraunhofer pattern around the central minimum were fitted separately due to the clear asymmetry around the central minimum observed in the data. The fittings show that the external field corresponding to two flux quanta on both sides of the central minimum differ by 2 mT in this particular junction. A schematic sketch of the magnetic flux and current distribution through the junction at the maxima of the V(H) curve (corresponding to a net zero supercurrent in the junction) are also shown in Fig 4(a). In the presence of the out-of-plane external magnetic field, a component of the induced Rashba-Edelstein spin moment M\({}_{RE}\), along the direction of the magnetic field also adds a flux into the planar junction, as M\({}_{RE}\) is generated directly under the junction area [36]. 
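The Fraunhofer-type fits used to extract \(\Delta\)H (Fig 4(a)) can be reproduced with a short script. The sketch below is our own, under the assumption of a standard single-junction Fraunhofer form for the critical current with a field offset and an effective flux-capture area as free parameters (the junction voltage at fixed bias varies inversely with the critical current, as noted earlier); it is not the authors' fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

PHI_0 = 2.067833848e-15  # magnetic flux quantum h/(2e), in Wb

def fraunhofer(H, Ic0, H_offset, A_eff):
    """Ic(H) = Ic0 |sin(pi*Phi/Phi0) / (pi*Phi/Phi0)| with Phi = (H - H_offset) * A_eff.
    H in tesla, A_eff in m^2."""
    x = (H - H_offset) * A_eff / PHI_0
    return Ic0 * np.abs(np.sinc(x))  # np.sinc(x) = sin(pi*x)/(pi*x)

def fitted_offset(H, Ic, p0=(1e-4, 0.0, 1e-12)):
    """Fit one sweep direction and return the fitted field offset (in tesla)."""
    popt, _ = curve_fit(fraunhofer, H, Ic, p0=p0, maxfev=10000)
    return popt[1]

# Delta H is the difference between the offsets fitted to the up- and down-sweep data:
# delta_H = fitted_offset(H_up, Ic_up) - fitted_offset(H_down, Ic_down)
```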
It is important to note here that the quasi-particle current density at the Pt/Cu interface is directly proportional to the supercurrent density in the proximatized Cu barrier. Therefore, the screening current distribution in the junction due to the external magnetic field is tightly coupled to M\({}_{RE}\). When the magnitude of the external magnetic field is decreased, the junction opposes the changing magnetic field by creating screening currents in the negative direction, leading to the generation of a negative M\({}_{RE}\) in some parts of the junction. Consequently, the central minimum of the V(H) curve, corresponding to a net zero flux through the junction, appears at a positive field in the decreasing sweep of the magnetic field. Note that unwanted trapped flux in the direction of the applied magnetic field would shift the central minimum to the negative field region during the down-sweep. In order to verify this locking effect of M\({}_{RE}\) with the screening currents in the junction, we recorded the net offset \(\Delta\)H while systematically varying the range of the field sweep (defined as H\({}_{minor}\)). A sample "minor loop" has been compared with the full sweep of the magnetic field in Fig 4(b). In this minor loop, the magnetic field was swept between H\({}_{minor}\)=\(\pm\)6 mT. The overall offset \(\Delta\)H between the up-sweep and the down-sweep, measured at the minimum of the V(H) curve, has been plotted as a function of the sweeping range of the field sweep, i.e. the minor field loops H\({}_{minor}\), in Fig 4(c). It shows that for the lower range of field sweeps there is a sharp rise in \(\Delta\)H, followed by a saturating tendency at higher ranges of field sweep. In fact, the nature of the curve closely follows the V(H) curve. This observation indicates that controlling the amount of the superconducting screening current in the junction could lead to a tunable offset \(\Delta\)H, which amounts to tuning the effective phase of the junction at zero field.

One of the major device parameters which can tune the magnitude of the screening currents in the planar Josephson junctions is the junction resistance. The geometrical dimensions of the junction and the degree of proximatization of the barrier are the primary factors which define the junction resistance.

**Fig. 5**: **Tuning of \(\Delta\)H by device resistance.** Mapping of the \(\Delta\)H value against the junction resistance at 2K for various JJs with Pt thicknesses of 12, 25, 30 and 50 nm. These values were extracted from V(H) measurements of the junctions biased at the respective critical currents. The solid lines are only guides to the eye. The inset shows the \(\Delta\)H values as a function of R\({}_{2K}\) for SQUID devices fabricated on the same chips. Some representative V(H) data are shown in the supplementary Fig A4.

In Fig 5, we show a collection of the \(\Delta\)H values measured at 2K for several JJs and SQUIDs as a function of R\({}_{2K}\).
thickness of Pt layer, the R\({}_{2K}\) was varied by changing the geometrical dimensions of the junction. For example, the Nb-(Pt/Cu)-Nb junctions with 10 nm of Pt, reported in Fig 5, were fabricated on the same chip by systematically increasing the milling depth of the Cu layer, which caused an increase in the junction resistance at 2 K. Consequently, a systematic drop in \(\Delta\)H was observed with increasing junction resistance, as shown in Fig 5. Junctions with other thicknesses of Pt underlayer also showed a similar decreasing trend in the \(\Delta\)H value with increasing R\({}_{2K}\). The \(\Delta\)H values for SQUID devices as a function of R\({}_{2K}\) are shown in the inset of Fig 5. Although the exact dependence of \(\Delta\)H on the junction resistance requires microscopic modelling, the experimental data clearly show a consistent decrease in \(\Delta\)H with increasing junction resistance in JJs and SQUID devices. In summary, we have demonstrated the planar Josephson effect as a simple tool for direct detection of the equivalent magnetic strength of the non-equilibrium spin density created by the Rashba-Edelstein effect. We show that the spin moment can be easily coupled to the Josephson junction by an external magnetic field, which leads to a distinct shift (\(\Delta\)H) in the Fraunhofer pattern of the junction. From the dependence of \(\Delta\)H on the junction resistance, it was found that the quasi-particle current at the Pt/Cu interface in the junction barrier region is the primary tuning parameter of the observed effect. The breaking of both spatial inversion and time-reversal symmetries provides the necessary conditions for realizing a \(\varphi\)-phase Josephson junction [62, 63]. In our device geometry, the Rashba interface breaks the spatial inversion symmetry, and the interface spin polarization due to the Edelstein effect breaks the time-reversal symmetry. Consequently, our experiment shows that a non-equilibrium spin moment coupled to planar Josephson devices can be a convenient way of attaining an arbitrary zero-field phase in Josephson junctions. A junction producing an on-demand initial phase without using a magnetic layer could be very useful as a phase battery to bias quantum circuits [64, 65]. ## Methods ### Multilayer deposition The series of trilayer Pt/Cu/Nb films and bilayer Cu/Nb films were deposited on cleaned Si/SiO\({}_{2}\) substrates using DC magnetron sputtering of high purity Nb, Pt, and Cu metal targets. A 5 nm ultra-thin adhesion layer of Nb was used for the bilayer case as Cu has very poor adhesion to the SiO\({}_{2}\). In all cases the Nb and Cu layers were kept fixed at 150 nm and 100 nm, respectively. The Pt layer thickness was varied across the series from 10 nm to 80 nm. The Pt layer thickness was calibrated using X-ray reflectivity measurements, as shown in the Supplementary Fig A1. Prior to device fabrication, 2 \(\mu\)m wide tracks of bilayer and trilayer samples were obtained by depositing films on lithographically patterned substrates and lift-off. ### Junction fabrication The lithographically patterned tracks were subsequently narrowed down by a focused beam of gallium ions using a Zeiss Crossbeam 340 system. We used 100 pA ion current at 30 kV for the initial narrowing of the track down to 500 nm, and 10 pA current at 30 kV for narrowing the track width down to 300 nm and for sidewall polishing to obtain a clean junction interface.
The planar junction separation of the top Nb layer was realized by milling down from the top with a current of 5 pA at 30 kV. For the device shown in Fig 3(d), the Pt layer under the planar JJ was thinned by milling at an angle of 89 degrees with respect to the sample normal. The junction area was examined using EDAX mapping to ensure no leftover Nb in the junction area. Multiple planar Josephson junctions and SQUIDs were fabricated following the same protocol and examined using EDAX for Nb leftovers. It is important to mention here that the junction resistance in the planar junction geometry varies directly with the separation between the Nb electrodes (L) and inversely with the width of the track (d) [see Fig A3]. The only control parameter for the top milling of the planar junction area is the exposure time of the ion beam. Therefore, in the process of ensuring no Nb leftover in the junction area, some Cu also gets etched out, leading to a small variation in the Cu thickness from junction to junction. It is, therefore, inevitable to have variations of resistance at 2 K from junction to junction, even when fabricated on the same chip. ### Transport measurements The resistance of the planar JJ and SQUID devices was measured in a four-probe arrangement with a magnetic field perpendicular to the plane of the devices. A cryogen-free low-temperature cryostat equipped with a precision low magnetic field measurement option was used for all the measurements. The general features of the Josephson devices are discussed in detail in the supplementary Fig A2.
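As an illustration of how the \(\Delta\)H values are read off from the transport data, the sketch below locates the central minimum of the up- and down-sweep V(H) curves and takes their difference; the array names and the parabolic refinement of the minimum are assumptions made here for clarity rather than the exact analysis code used for the measurements.

```python
import numpy as np

def central_minimum(H, V, window=5):
    """Locate the central V(H) minimum and refine its position with a local
    parabolic fit around the coarse minimum."""
    i = int(np.argmin(V))
    lo, hi = max(i - window, 0), min(i + window + 1, len(H))
    a, b, _ = np.polyfit(H[lo:hi], V[lo:hi], 2)   # V ~ a*H^2 + b*H + c
    return -b / (2.0 * a)                          # vertex of the parabola

def delta_h(H_up, V_up, H_down, V_down):
    """Offset between the central minima of the up- and down-sweeps (same units as H)."""
    return central_minimum(H_up, V_up) - central_minimum(H_down, V_down)
```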
2303.14588
Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization
Most of previous work on learning diacritization of the Arabic language relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to learn to predict and insert missing diacritics in Arabic text, a complex task that requires understanding the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art on the diacritization task with minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the greater benefit of the researchers in the community.
Bashar Al-Rfooh, Gheith Abandah, Rami Al-Rfou
2023-03-25T23:41:33Z
http://arxiv.org/abs/2303.14588v1
# Fine-Tashkeel: Finetuning Byte-Level Models for Accurate Arabic Text Diacritization ###### Abstract Most of previous work on learning diacritization of the Arabic language relied on training models from scratch. In this paper, we investigate how to leverage pre-trained language models to learn diacritization. We finetune token-free pre-trained multilingual models (ByT5) to learn to predict and insert missing diacritics in Arabic text, a complex task that requires understanding the sentence semantics and the morphological structure of the tokens. We show that we can achieve state-of-the-art on the diacritization task with minimal amount of training and no feature engineering, reducing WER by 40%. We release our finetuned models for the greater benefit of the researchers in the community. ## 1 Introduction Arabic has an Abjad writing system where only the consonants and long vowels are being written (de Voogt and Quack, 2012). Later modifications to the writing system introduced short vowels in the form of diacritics. These diacritics are essential to disambiguate the meaning of the text. For example, many Arabic words are homographs where multiple words have the same spelling, and diacritics can disambiguate them. However, many native speakers do not include these diacritics in their day-to-day writing, saving time and effort and assuming that the correct meaning can be inferred from the context. Automatic diacritization is of great benefit as a form of suggested grammar correction which the user can accept. The diacritized text will be easier to read, especially for non-native speakers (see Table 1). Moreover, text with diacritization is easier for text-to-speech (TTS) systems to process. Previous efforts on learning a statistical model for automatic diacritization relied on training machine learning models initialized randomly (Zitouni and Sarikaya, 2009). Pretrained models such as BERT (Devlin et al., 2018), T5 (Raffel et al., 2020), and GPT (Radford et al., 2018) have received wide adoption from the NLP community (Mikolov et al., 2013; Peters et al., 2017). However, those models, while easy to finetune for a wide range of downstream tasks, have been pretrained mainly on English corpora, limiting their ability to be used for other languages. mT5 expands the corpora of the pretraining stage of T5 models from English to 100+ languages (Xue et al., 2021). Given the closed vocabulary approach used to segment the multilingual text using sentencepiece (Kudo and Richardson, 2018), the capacity assigned to each language is variable. Another approach is to pretrain monolingual models such as AraT5 (Nagoudi et al., 2022). ByT5 simplifies mT5 models by replacing the closed vocabulary approach with an open one, where the network itself is responsible for learning the appropriate segments of a language from its utf-8 byte input sequences. We adopt ByT5 models as our foundational model since our predictions are made at the character level. We model our problem as a sequence-to-sequence generation problem where the input is a sequence of Arabic characters without diacritics, presented to the network after being encoded into utf-8. The target output is the same sequence of characters interleaved with the predicted diacritics. With a small number of finetuning steps (\(\leq\)15K) we are able to leverage the multilingual capabilities of the network to learn automatic Arabic diacritization, achieving state-of-the-art results. \begin{table} \begin{tabular}{l l} \hline \hline **Arabic** & **Translation** \\ \hline \hline \end{tabular} \end{table} Table 1: Example of Arabic text with diacritics and its English translation.
To summarize our contributions: * We leverage large pretrained models to learn the task of diacritization, achieving state-of-the-art results. * We study the effect of data quality and size on the finetuning process, devising a curriculum that utilizes quality and size of training data. * We study the impact of scaling our pretrained models on the performance of diacritization. ## 2 Related Work In this Section, we will discuss the previous work on machine learning-based Arabic diacritization, large language models, and character level modeling. **Arabic Diacritization** has been thoroughly studied through fully-supervised approaches. Recent works considered the problem of diacritizing Arabic text as a classification problem, like Karim and Abandah (2021); Madhfar and Qamar (2020); Barqawi (2017). Another approach is to model the problem as a translation using a sequence-to-sequence model, as has been proposed by Mubarak et al. (2019). In both approaches, the network is initialized randomly and does not leverage any unsupervised training nor utilize any large corpora. Stankevicius et al. (2022) utilized the pre-trained ByT5 model to recover diacritical marks in 13 Latin-script languages and achieved competitive results, within 1% of the current state-of-the-art results. On the other hand, our research accomplished a state-of-the-art result in diacritizing Arabic, leading to a minimum 40% reduction in Word Error Rate (WER). **Character level modeling** adopts a token-free approach where the vocabulary is not a closed finite set of segments and words. This eliminates the out-of-vocabulary (OOV) problem, which tends to be severe in languages with complex morphology such as Arabic. Choe et al. (2019) shows that character level language models can match the performance of word-level language models. This approach was later adopted by ByT5 (Xue et al., 2021), where the text is represented by a sequence of utf-8 bytes. This allows us to represent all languages with a simple and small vocabulary of 256 symbols. English characters will be composed of single bytes while Arabic and Russian ones will consume 2-3 bytes per character. ByT5 comes in several capacities ranging from small to XXL (0.3-13B parameters, respectively). Each of those was pre-trained on the mC4 corpus, which was crawled from web pages that cover 100 languages (Xue et al., 2021). ByT5's authors showed that models processing language at byte level handle misspellings and noise gracefully and perform very well in languages with complex morphology. ## 3 Diacritization Modeling We consider the problem of predicting diacritics as a sequence-to-sequence task instead of a classification one. More specifically, our approach uses a text-to-text format: the input, fed to the model, consists of a sequence of utf-8 bytes where each unicode character is represented by a 2-3 byte subsequence. The model is asked to produce the same sequence interleaved with utf-8 bytes representing the diacritics. Note that both input and output sequences are of variable length. For example, we are asking the model to produce the following sequence: [217, 138, 217, 143, 217, 175, 217, 146, 217, 129, 217, 142, 217, 185, 217, 143], which corresponds to the diacritized form of the word encoded by the following input sequence: [217, 138, 217, 175, 217, 129, 217, 185]. This text-based input/output representation reduces the burden on the practitioner preparing datasets and integrating preprocessing, later on, in deployed models.
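As a rough illustration of this text-to-text setup, the sketch below builds one (input, target) pair from a diacritized sentence and computes a finetuning loss, assuming the Hugging Face `transformers` ByT5 checkpoint `google/byt5-small`; the helper names and the training step are ours and only approximate the pipeline described above.

```python
import re
from transformers import AutoTokenizer, T5ForConditionalGeneration

DIACRITICS = re.compile(r"[\u064B-\u0652]")   # Arabic short-vowel marks

def strip_diacritics(text: str) -> str:
    """Build the undiacritized input by removing the diacritic code points."""
    return DIACRITICS.sub("", text)

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = T5ForConditionalGeneration.from_pretrained("google/byt5-small")

def make_example(diacritized: str):
    """One (input, target) pair: utf-8 byte ids without and with diacritics."""
    inputs = tokenizer(strip_diacritics(diacritized), return_tensors="pt")
    labels = tokenizer(diacritized, return_tensors="pt").input_ids
    return inputs, labels

# One illustrative finetuning step:
# inputs, labels = make_example(diacritized_sentence)
# loss = model(**inputs, labels=labels).loss
# loss.backward()
```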
\begin{table} \begin{tabular}{l c r r r} \hline \hline & Split & \begin{tabular}{c} Examples \\ (\(\times 10^{3}\)) \\ \end{tabular} & \begin{tabular}{c} Words \\ (\(\times 10^{6}\)) \\ \end{tabular} & \begin{tabular}{c} Diacritics \\ (\%) \\ \end{tabular} \\ \hline Tashkeela [21] & Train & 1750 & 75.7 & 78.0 \\ Tashkeela & Dev & 2.5 & 0.1 & 82.2 \\ Tashkeela & Test & 2.5 & 0.1 & 82.2 \\ CA [21] & Train & 1700 & 74.7 & 78.2 \\ \hline MSA [21] & Train & 49 & 0.86 & 59.7 \\ Clean-50 [5] & Train & 50 & 2.1 & 83.1 \\ Extra [6] & Train & 533 & 22.6 & 82.2 \\ Clean-400 [8] & Train & 400 & 19.7 & 82.2 \\ \hline \hline \end{tabular} \end{table} Table 2: Datasets variants extracted from Tashkeela corpus. By describing the problem at a high level we are assigning the tasks of pre/post-processing and problem solving to the model to handle on its own, simplifying research and applications design. ## 4 Datasets Tashkeela is a corpus of vocalized Arabic text that covers both Classical Arabic (CA) and Modern Standard Arabic (MSA), with classical sources constituting the majority of the corpus. The text comes mainly from books that are crawled from the web (Zerrouki and Balla, 2017). While the dataset is quite instrumental in our modeling, there are several shortcomings that complicate our learning task, such as: (a) missing diacritics, (b) more than two diacritics on a single character, (c) inconsistent unicode representation, and (d) annotated foreign-language characters. Hence, each of the research efforts that followed devised different filtering criteria to improve the training dataset quality. Table 2 shows several subsets of the original dataset that are generated by different rules of filtering. ## 5 Metrics To measure our progress solving the task of diacritization, we organize our metrics into two classes: 1. **Diacritic Error Rate** (DER): the percentage of unicode characters for which we predicted the wrong diacritic. 2. **Word Error Rate** (WER): the percentage of words which have at least one character with an incorrect diacritic. We define diacritics to be the unicode characters that are included in the unicode plane defined by the range (0x064B-0x0652). These characters will be removed from the input, while any unicode character outside that range will stay as part of the input sequence. However, defining words represents a challenge since there is no word boundary identifier in the corpus. Moreover, the Arabic language typically morphs several parts of speech into the same contiguous token. To simplify the computation of WER, we consider white spaces to be our word boundary despite its limitations. ## 6 Results & Discussion **Finetuning Setup** We conducted initial experiments on the smallest dataset of Tashkeela (Clean-50) to find reasonable hyperparameters for our experiments by finetuning the ByT5 small model on 8 TPUv2 cores for a maximum of 6000 steps with a batch size of 256 per TPU core. We found the optimal learning rate to be \(3\times 10^{-3}\) and the sequence length to be 512 bytes. For more information check Appendices [A,B]. **Does data quality matter?** Table 3 (Rows: 1-6) shows that filtering the original dataset benefits the quality of the finetuning. For example, DER improved from 1.38% to 1.33% by only finetuning on **Clean-400** instead of the full dataset **Tashkeela**.
However, aggressive filtering as being done in **Clean-50** reduces the size of the training dataset significantly, in this case, to 50K examples and therefore hurt the quality of the finetuned model increasing DER from \(\sim\)1.35% to 1.70%. \begin{table} \begin{tabular}{l l l l l l l l l l l l} \hline \hline \multirow{2}{*}{Row} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Training Dataset**} & \multicolumn{2}{c}{**Eval**} & \multicolumn{3}{c}{**Included Chars in Eval**} & \multirow{2}{*}{**DER**} & \multirow{2}{*}{**WER**} \\ & & Stage 1 & Stage 2 & Split & Numbers & Punct & Space & Last & Unlabeled & \\ \hline 1 & & CA & — & Dev & & & ✓ & ✓ & 1.55 & 4.39 \\ 2 & & MSA & — & Dev & & & ✓ & ✓ & 6.97 & 18.43 \\ 3 & & Clean-50 & — & Dev & & & ✓ & ✓ & 1.70 & 4.73 \\ 4 & & Extra & — & Dev & & & ✓ & ✓ & 1.35 & 3.74 \\ 5 & & Clean-400 & — & Dev & & & ✓ & ✓ & 1.33 & 3.75 \\ 6 & & Tashkeela & — & Dev & & & ✓ & ✓ & 1.33 & 3.75 \\ 7 & & & Tashkeela & Clean-400 & Dev & & & ✓ & ✓ & 1.38 & 4.02 \\ 8 & & Tashkeela & Clean-400 & Dev & & & & ✓ & 1.16 & 3.35 \\ 9 & & Tashkeela & Clean-400 & Dev & & & & ✓ & 0.98 & 1.97 \\ 10 & & Tashkeela & Clean-400 & Dev & & & ✓ & & 1.31 & 3.09 \\ \hline 11 & & Tashkeela & — & Dev & & & & ✓ & 1.23 & 3.67 \\ 12 & & Tashkeela & Clean-400 & Test & & & ✓ & ✓ & 1.00 & 2.92 \\ 13 & & Tashkeela & Clean-400 & Test & & ✓ & ✓ & ✓ & 0.95 & 2.49 \\ 14 & & & Tashkeela & Clean-400 & Test & ✓ & ✓ & ✓ & ✓ & 0.74 & 2.49 \\ \hline 15 & & Barawi (2017) & & & Test & & & ✓ & ✓ & 3.73 & 11.19 \\ 16 & & Fadel et al. (2019b) & & Test & & & ✓ & ✓ & ✓ & 1.78 & 5.38 \\ 17 & & Karim and Abandah (2021) & & & Test & ✓ & ✓ & ✓ & ✓ & 1.97 & 5.13 \\ 18 & & Madhfar and Qamar (2020) & & & Test & ✓ & ✓ & ✓ & ✓ & ✓ & 1.13 & 4.43 \\ \hline \hline \end{tabular} \end{table} Table 3: Results on Tashkeela Validation and Test splits. To take advantage of the diversity of the largest training datasets without being affected by the noisy examples that could be included, we devise the following curriculum learning schedule: (1) we finetune our pretrained model on the full dataset for 8K steps. (2) we, further, finetune for extra 4K steps on **Clean-400** This schedule will expose the model to diverse examples while narrowing definition of the task of diacritization to how it was demonstrated in the cleaner subset of training examples in **Clean-400**. Table 3 (Rows: 6-7) shows that our curriculum is able to capture the best of both worlds. Large datasets that improve the coverage of the model domain and cleaner more targeted dataset that adhere better to the task definition. This sequential finetuning is able to reduce DER from 1.33 to 1.16. Does scale matter?Table 3 (Rows: 6 vs 11) shows the results of finetuning **Base** ByT5 model which is slightly 2x larger than **Small**. We are able to reduce DER from 1.38% to 1.23% which is consistent with previously reported results that demonstrates improvements on downstream tasks as the pretrained model capacity increases Hernandez et al. (2021); Wei et al. (2022). Do we benefit from self-supervised training?Table 3 (Rows: 12-18) shows our model results in comparison to previously reported results on Tashkela test dataset. Previous efforts evaluated their models with varying sets of characters. We evaluated our model consistently with each baseline (Rows: \(12\leftrightarrow 16\), \(13\leftrightarrow 17\), \(14\leftrightarrow 18\)). Regardless of which evaluation methodology we have been using, we are able to reduce the error rate by at least 40% in WER. 
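To make the metric definitions of Section 5 concrete, the following sketch computes DER and WER for a (reference, prediction) pair, assuming the two strings share the same base characters and that whitespace marks word boundaries; the helper names are illustrative, not the exact evaluation code.

```python
DIACRITICS = {chr(c) for c in range(0x064B, 0x0653)}   # 0x064B-0x0652 inclusive

def split_diacritics(text):
    """Return a list of (base_character, attached_diacritics) pairs."""
    out = []
    for ch in text:
        if ch in DIACRITICS and out:
            base, marks = out[-1]
            out[-1] = (base, marks + ch)
        else:
            out.append((ch, ""))
    return out

def der_wer(reference, prediction):
    """DER over base characters and WER over whitespace-delimited words."""
    ref, hyp = split_diacritics(reference), split_diacritics(prediction)
    assert [b for b, _ in ref] == [b for b, _ in hyp], "base text must match"
    errors = [r != h for (_, r), (_, h) in zip(ref, hyp)]
    der = sum(errors) / max(len(errors), 1)
    words, word_errors, has_err = 0, 0, False
    for (base, _), err in zip(ref, errors):
        if base.isspace():                                 # close the current word
            words, word_errors, has_err = words + 1, word_errors + has_err, False
        else:
            has_err = has_err or err
    words, word_errors = words + 1, word_errors + has_err  # count the last word
    return der, word_errors / max(words, 1)
```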
**Are all diacritics hard?** Arabic grammar influences the diacritic of a word's last character. Therefore, predicting the diacritic of the last character tends to be a harder problem since it depends on the relative position of the word within the sentence and its meaning. To test this hypothesis we evaluate our model excluding the last character (Rows: 7 vs 8). We observe a drop in DER from 1.16% to 0.98%, confirming the complexity of predicting the last character. On the other hand, it seems that the model has an easier time identifying which characters should not be annotated with diacritics, as evidenced by the increase in DER from 1.16% to 1.31% when unlabeled characters are excluded from the evaluation (Rows: 7 vs 9). ## 7 Analysis To understand the categories of errors our model introduces, we calculated the confusion matrix in Figure 1. On the right side, we show the distribution of diacritics in our dataset. Single diacritics dominate the distribution, while combined ones such as _Shadda + Fathatan_ rarely appear. On the left, each cell represents the probability of predicting the character in column (j) given the ground-truth character in row (i). We notice that when the model is not quite certain it defaults to predicting Fatha. This could be explained by the fact that it is the most common diacritic. Our model performs very well (_No Diacritic_ accuracy = 99.7) at not predicting diacritics where they should not be (row) and at not missing them when they are needed (column). Table 4 shows several errors made by our model. ## 8 Future Work & Conclusion We have demonstrated that finetuning large pretrained multilingual models produces significant improvements in quality for automatic diacritization, achieving new state-of-the-art results. We studied the benefits of scaling up the pretrained models and the impact of training dataset quality and size. We realize that these pretrained models tend to be computationally expensive and not practical to be deployed in edge-compute applications. We are looking into distilling our best model into smaller models that are cheaper to run. \begin{table} \begin{tabular}{l l l} \hline \hline Target & Output & Issue \\ \hline \hline \multirow{3}{*}{\begin{tabular}{} \end{tabular} } & \multirow{3}{*}{ \begin{tabular}{} \end{tabular} } & Wrong diacritic \\ & Adding extra diacritic \\ & Missing diacritic \\ \hline \hline \end{tabular} \end{table} Table 4: Examples of our finetuned ByT5-Base model predictions on Tashkeela Dev. Figure 1: Confusion matrix of our model predictions on the Tashkeela test dataset and the distribution of the diacritics.
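A confusion matrix like the one in Figure 1 can be tallied with a few lines of bookkeeping, reusing the `split_diacritics` helper sketched above; the function names are ours and the snippet is only meant to illustrate the counting, not the exact analysis code.

```python
from collections import Counter

def diacritic_confusion(pairs):
    """Tally (ground truth, prediction) diacritic pairs character by character;
    `pairs` is an iterable of aligned (reference, prediction) strings."""
    counts = Counter()
    for ref, hyp in pairs:
        for (_, r), (_, h) in zip(split_diacritics(ref), split_diacritics(hyp)):
            counts[(r or "No Diacritic", h or "No Diacritic")] += 1
    return counts

def row_normalize(counts):
    """Convert raw counts into P(prediction | ground truth), one row per truth label."""
    totals = Counter()
    for (truth, _), n in counts.items():
        totals[truth] += n
    return {(t, p): n / totals[t] for (t, p), n in counts.items()}
```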
2301.02593
Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
To integrate high amounts of renewable energy resources, electrical power grids must be able to cope with high amplitude, fast timescale variations in power generation. Frequency regulation through demand response has the potential to coordinate temporally flexible loads, such as air conditioners, to counteract these variations. Existing approaches for discrete control with dynamic constraints struggle to provide satisfactory performance for fast timescale action selection with hundreds of agents. We propose a decentralized agent trained with multi-agent proximal policy optimization with localized communication. We explore two communication frameworks: hand-engineered, or learned through targeted multi-agent communication. The resulting policies perform well and robustly for frequency regulation, and scale seamlessly to arbitrary numbers of houses for constant processing times.
Vincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull, Antoine Lesage-Landry
2023-01-06T16:41:51Z
http://arxiv.org/abs/2301.02593v1
# Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads ###### Abstract. To integrate high amounts of renewable energy resources, electrical power grids must be able to cope with high amplitude, fast timescale variations in power generation. Frequency regulation through demand response has the potential to coordinate temporally flexible loads, such as air conditioners, to counteract these variations. Existing approaches for discrete control with dynamic constraints struggle to provide satisfactory performance for fast timescale action selection with hundreds of agents. We propose a decentralized agent trained with multi-agent proximal policy optimization with localized communication. We explore two communication frameworks: hand-engineered, or learned through targeted multi-agent communication. The resulting policies perform well and robustly for frequency regulation, and scale seamlessly to arbitrary numbers of houses for constant processing times. Multi-agent reinforcement learning, Demand response, Power systems, Renewable integration, Communication, Coordination
In this work, we use multi-agent reinforcement learning (MARL) to learn decentralized and scalable policies (4) with discrete and constrained control (1, 2) and limited and localized communications (3, 5). Once learned, these policies can take the best decisions in real time (6) based on expected value over uncertainty (7). As this problem combines the most important current challenges of MARL, i.e., communication, long-term credit assignment, coordination, and scalability (Krishnan et al., 2017), it is also an interesting benchmark for MARL algorithms. We train our agents with Multi-Agent Proximal Policy Optimization (MA-PPO) (Shen et al., 2017) with Centralized Training, Decentralized Execution (CTDE) (Zhou et al., 2018). Two local communication frameworks are tested - hand-engineered and learned - and both outperform the baselines. Our main contributions are threefold: * an open source, multi-agent environment1 simulating the real-world problem of frequency regulation through demand response at the second timescale. The simulator is compatible with the OpenAI Gym (Gym, 2018) framework. Footnote 1: The code is hosted on [https://github.com/ALLabMTL/MARL_for_fast_timescale_DR](https://github.com/ALLabMTL/MARL_for_fast_timescale_DR) * two decentralized, fast-responding agents1 trained by MA-PPO. The first one has a hand-engineered communication strategy, while the second one learns what data to share through Targeted Multi-Agent Communication (TarMAC). Both outperform baselines on two-day simulations. * an in-depth analysis of the dynamics, communications, scalability and robustness of the trained agents. In the next section, we describe prior work in the field of demand response and MARL.
In Section 3, we describe the environment and formulate the problem. The classical and learning-based methods are described in Section 4. Finally, Section 5 presents the experimental results and analyses of the agents' performance, dynamics, robustness, and scalability. ## 2. Related Works Frequency regulation through demand response is commonly tackled by model predictive control (MPC) (Shen et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), where the best action is chosen based on trajectory prediction over a given horizon, sometimes combined with machine learning (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018). Apart from (Wang et al., 2018), these works do not consider short-term dynamic constraints such as lockout. MPC approaches rely on mixed-integer programming, which does not scale sustainably with higher numbers of agents, preventing control at fast timescales. Moreover, these works generally require a centralized entity to access residences' data, leading to confidentiality issues. An alternative method of multipliers-based distributed MPC approach was proposed in (Grover et al., 2018). This approach did not consider the lockout constraint and is not compatible with fast timescale decision-making as it requires multiple centralized communication rounds at each time step in addition to solving several optimization problems and converting continuous setpoints to binary actions. To tackle these problems, online optimization (OO) approaches (Wang et al., 2018; Wang et al., 2018) have been used because of their high computational efficiency and scalability. In particular, (Wang et al., 2018) deploys OO for frequency regulation with binary control settings as is the case for ACs. However, these methods rely on greedy optimization and their lack of foresight leads to limited performance when facing dynamic constraints. Reinforcement learning (RL) methods have been developed to address the longer timescale power balance problems such as peak shaving through demand response (Beng et al., 2018) or coordination of loads and generators (Wang et al., 2018; Wang et al., 2018). The CityLearn environment (Xu et al., 2018) proposes a standard environment for multi-agent RL (MARL) for demand response, upon which are developed methods such as (Wang et al., 2018) to regulate the voltage magnitude in distribution networks using smart inverters and intelligent energy storage management, and (Wang et al., 2018) for load shaping of grid-interactive connected buildings. The AlphaBuilding ResCommunity environment (Xu et al., 2018) then implements detailed thermal models. Both CityLearn and AlphaBuilding ResCommunity, however, consider longer timescale control, which makes them inadequate for high-frequency regulation and removes the ACs' lockout and binary constraints. The PowerGridworld (Grover et al., 2018) environment, a more flexible alternative to CityLearn, allows fast-timescale simulation but does not provide a detailed thermal model of loads, options for lockout or binary control, or classical baseline approaches to compare with. High-frequency regulation has been addressed by MARL, but only on the power generation side (Wang et al., 2018). We are unaware of any example in the literature deploying MARL for frequency regulation with demand response, with second-timescale control and flexible binary loads such as ACs which are subject to hardware dynamic constraints like a lockout. 
More generally, MARL has been developed for collaboration both in virtual environments such as Dota 2 (Hide and Seek, 2018), Hanabi (Hide and Seek, 2018) or in real-world environments such as traffic light control (Hanebi et al., 2018), single-house energy management (Beng et al., 2018) or ride-sharing (Wang et al., 2018). MARL problems pose several additional challenges to the RL settings (Krishnan et al., 2017), such as the non-stationarity of the environment, the need to learn coordination and communication, or the scaling of the training and deployment. Multi-agent adaptations of known RL algorithms, such as online PPO (Hide and Seek, 2018; Wang et al., 2018), or offline DDPG (Wang et al., 2018; Wang et al., 2018) and DQN (Wang et al., 2018), have led to strong performance in many problems. However, some particular problems, such as the ones requiring communication with large numbers of agents, need specialized algorithms (Hanebi et al., 2018). TarMAC (TarMAC, 2018), for example, uses an attention mechanism to aggregate messages based on their importance. ## 3. Problem Formulation ### Environment The environment is a simulation of an aggregation of \(N\) houses, each equipped with a single air conditioning (AC) unit. The outdoor temperature \(T_{0,t}\) is assumed to be the same for every house, i.e., they are co-located in the same geographical region, and is simulated as sinusoidal with a one-day period. Unless otherwise specified, the maximal temperature of 34 \({}^{\circ}\)C is reached at 6 pm and the minimal temperature of 28 \({}^{\circ}\)C at 6 am. \(T_{0,t}\) is thus always above the target indoor temperature \(T_{T}\) of 20 \({}^{\circ}\)C, so that every household can offer its flexibility to the grid. The environment model is updated every 4 seconds. Thermostatic loads modeled as multi-zone units and equipped with more than a single AC (Beng et al., 2018) is a topic for future work. More details about the environment are given in Appendix C. A notation table is provided in Appendix A. #### 3.1.1. House thermal model Each house \(i=1,2,\ldots,N\) is simulated using a second-order model based on Gridlab-D's Residential module user's guide (Hide and Seek, 2018). At time \(t\), the indoor air temperature \(T_{h,t}^{i}\) and the mass temperature \(T_{m,t}^{i}\) are updated given the house characteristics \(\theta_{T}^{i}\) (wall conductance \(U_{h}^{i}\), thermal mass \(C_{m}^{i}\), air thermal mass \(C_{h}^{i}\) and mass surface conductance \(H_{m}^{i}\)), the outdoor temperature \(T_{o,t}\), and the heat \(Q_{a,t}^{i}\) removed by the AC. By default, the thermal characteristics are the same for each house and model a 100 square meter, 1-floor house with standard isolation. During training and deployment, the initial mass and air temperatures are set by adding a positive random noise over the target temperature. Although it is not used by default, the solar gain \(Q_{s,t}\) can also be added to the simulation, as seen in Appendix C.1.1. #### 3.1.2. Air conditioners Once again based on Gridlab-D's guide (Girdlab and D'Alessio, 2017), air conditioner \(i\)'s heat removal capacity \(Q_{a,t}^{i}\) and power consumption \(p_{a,t}^{i}\) are simulated based on the AC characteristics \(\theta_{a}^{i}\), which include their cooling capacity \(K_{a}^{i}\), their coefficient of performance \(COP_{a}^{i}\) and the latent cooling fraction \(L_{a}^{i}\). The model and parameters are also described in Appendix C.2. 
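A minimal sketch of one update step of such a second-order (air and mass) house model is shown below; it uses a simple forward-Euler step and the symbols defined above, with the AC heat removal \(Q_{a,t}^{i}\) taken as an input, but the exact equations of the simulator are those of Appendices C.1 and C.2, so this is only illustrative.

```python
from dataclasses import dataclass

@dataclass
class HouseParams:
    Uh: float   # wall conductance (W/K)
    Cm: float   # thermal mass (J/K)
    Ch: float   # air thermal mass (J/K)
    Hm: float   # mass surface conductance (W/K)

def thermal_step(T_air, T_mass, T_out, Q_ac, p: HouseParams, dt=4.0):
    """One forward-Euler step of a second-order (air + mass) house model;
    Q_ac is the heat removed by the AC (W) and dt the 4 s simulation step."""
    dT_air = (p.Hm * (T_mass - T_air) + p.Uh * (T_out - T_air) - Q_ac) / p.Ch
    dT_mass = p.Hm * (T_air - T_mass) / p.Cm
    return T_air + dt * dT_air, T_mass + dt * dT_mass
```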
Additionally, a hard dynamic constraint is set to protect the compressor: after being turned off, it needs to wait a given amount of time before being allowed to turn on again (Kolmogorov, 2017). This constraint is referred to as the lockout. By default, the lockout duration \(l_{\text{max}}^{i}\) is set to 40 seconds. #### 3.1.3. Regulation signal The power system operator sends to the aggregator a signal \(\rho_{t}\), which covers the complete aggregated load consumption: the systems we cannot control such as computers, washing machines, or lights, and the flexible power consumption, in our case, the ACs. Let, \(\rho_{t}=D_{o,t}+s_{t}\) where \(D_{o,t}\) is the power demand for the non-controllable loads and \(s_{t}\) is the objective aggregated AC power consumption, i.e., the flexible load. We define \(D_{a,t}\) as the power needed by the ACs to satisfy their thermal objectives, i.e., to keep the temperature around the target. To focus on the high-frequency variations of the power generation, we assume that \(s_{t}\) is well behaved at low frequencies, i.e., its mean in the 5 minutes scale is \(D_{a,t}\). A 0-mean, high-frequency variation \(\delta_{s,t}\) is added to represent renewable intermittency the aggregator wants to mitigate. We model the regulation signal as \(s_{t}=D_{a,t}+\delta_{s,t}\). The aggregation flexible power consumption is the sum of all of the ACs' consumption: \(P_{t}=\sum_{i}^{N}p_{a,t}^{i}\). The objective is to coordinate the ACs in the aggregation so that \(P_{t}\) tracks \(s_{t}\). _Base signal._ To compute the average needed power \(D_{a,t}\), we created a dataset of the average power needed over a 5-minute period by a bang-bang controller without lockout - which is optimal for temperature - for all combinations of discrete sets of the relevant parameters. At each time step, we interpolate the average power demand of each AC from this dataset and sum them to compute \(D_{a,t}\). In practice, the base signal would be estimated or obtained from historical data. The aggregator would then consider its value when committing to track a signal \(s_{t}\). This ensures that the required power adjustment is enough to maintain the houses at acceptable temperatures while providing flexibility to the grid. Modelling high-frequency variationsThe high-frequency variation \(\delta_{s,t}\) is modelled with 1-D Perlin noise (Kolmogorov, 2017), a smooth, procedurally generated 0-mean noise. The Perlin noise produces \(\delta_{p,t}\in[-1,1]\), and we have \(\delta_{s,t}=D_{a,t}\rho_{\beta}\delta_{p,t}\) where \(\beta_{p}\) is an amplitude parameter set to 0.9. Our Perlin noise is defined by 5 octaves and 5 octave steps per period of 400 seconds; it thus is the sum of noises with periods of 80, 40, 20, 10 and 5 seconds. More details are given in Appendix C.3.2. #### 3.1.4. Communication between agents To achieve coordination between agents, they must be able to communicate. For the agent implementation to be decentralized, flexible, and privacy-preserving, we consider limited and localized communications. This enables, for example, devices communicating with simple radio-frequency emitters, without the need for any further infrastructure. As such, we limit the communication to a number \(N_{c}\) of neighbours. This is in line with the low-deployment investment argument for using demand response for frequency regulation. 
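The construction of the regulation signal can be sketched as follows, assuming the `noise` Python package for 1-D Perlin noise and a precomputed base signal \(D_{a,t}\); the normalization and octave settings below are simplifications of the exact procedure in Appendix C.3.2.

```python
import numpy as np
from noise import pnoise1   # 1-D Perlin noise from the `noise` PyPI package

def regulation_signal(D_a, dt=4.0, beta_p=0.9, octaves=5, period=400.0, base=0):
    """s_t = D_a,t + delta_s,t with delta_s,t = D_a,t * beta_p * delta_p,t,
    where delta_p,t in [-1, 1] is a smooth, 0-mean Perlin noise."""
    t = np.arange(len(D_a)) * dt
    delta_p = np.array([pnoise1(x / period, octaves=octaves, base=base) for x in t])
    delta_p /= max(np.max(np.abs(delta_p)), 1e-9)      # rescale into [-1, 1]
    return np.asarray(D_a) * (1.0 + beta_p * delta_p)
```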
### Decentralized Partially Observable Markov Decision Process In this section, we formalize the above environment as a decentralized, partially observable Markov decision process (Dec-POMDP) characterized by the tuple \(\langle\mathcal{S},\mathcal{A},\mathcal{O},\mathcal{P},\mathcal{R},\gamma\rangle\). Let \(\mathcal{S}\) be the global state, \(\mathcal{A}=\prod_{i=1}^{N}\mathcal{A}^{i}\) the joint action space, and \(\mathcal{O}=\prod_{i=1}^{N}\mathcal{O}^{i}\) the joint observation space. \(\mathcal{O}^{i}\) partially observes \(\mathcal{S}\). \(\mathcal{P}\) describes the environment's transition probabilities, \(\mathcal{R}\) the reward function for each agent and \(\gamma\) the discount parameter. #### 3.1. State, transition probabilities and actions The state of the environment \(X\in\mathcal{S}\) and its transition probabilities \(\mathcal{P}\) are unknown to the agent. They are simulated by the environment dynamics described in Section 3.1. Each agent \(i\)'s action \(a_{t}^{i}\in\mathcal{A}^{i}\) is a binary decision to control the AC status. If the remaining lockout time \(l_{t}^{i}\) is above zero, the on action will be ignored by the AC. In practice, a backup controller within the AC would prevent the on decision from being implemented. #### 3.2.2. Observations and communications By default, agent \(i\) receives observation \(o_{t}^{i}=\{T_{h,t}^{i},T_{m,t}^{i},T_{T}^{i},o_{t}^{i},l_{t}^{i},s_{t}/N,P_{t}/N\}\) at time step \(t\), where \(T_{h,t}^{i}\), \(T_{m,t}^{i}\) and \(T_{T}^{i}\) are the indoor air, mass, and target temperatures, \(o_{t}^{i}\) is the on or off status of the AC, \(l_{t}^{i}\) is its remaining lockout time, \(s_{t}/N\) is the per-agent regulation signal and \(P_{t}/N\) is the per-agent total consumption of the aggregation. Each agent \(i\) communicates with its \(N_{c}\) neighbours. The messages' sizes are not hard limited but should be small, and their contents are not constrained. We define the set of all of agent \(i\)'s \(N_{c}\) neighbours as \(M^{i}\). By default, we organize the agents in a 1-dimensional structure: \(M^{i}=\{i-\lfloor N_{c}/2\rfloor,i-\lfloor N_{c}/2\rfloor+1,\ldots,i,\ldots,i+ \lfloor N_{c}/2\rfloor-1,i+\lfloor N_{c}/2\rfloor\}\backslash\{i\}\). #### 3.2.3. Reward For each agent \(i\), reward \(r_{t}^{i}\) is computed as the weighted sum of the penalties due to its air temperature difference with the target, which is unique to the agent, and to signal tracking, which is common across all agents. This scenario is therefore cooperative with individual constraints. We normalize the reward with \(\alpha_{\text{temp}}=1\) and \(\alpha_{\text{sig}}=3\times 10^{-7}\): a 0.5 \(\cdot\)C error is penalized as much as a 912 W per-agent error (each agent consumes 6000 W). \[r_{t}^{i}=-\left(\alpha_{\text{temp}}\left(T_{h,t}^{i}-T_{T,t}^{i}\right)^{2}+ \alpha_{\text{sig}}\left(\frac{P_{t}-s_{t}}{N}\right)^{2}\right)\] ## 4. Classical and learning-based algorithms ### Classical baselines To the best of our knowledge, there is no classical baseline that performs well under all the constraints enumerated in Section 1. However, simple algorithms can optimize selected objectives, and we use them as baselines for the results of the MARL agent. #### 4.1.1. Bang-bang controller The bang-bang controller (BBC) turns the AC on when the air temperature \(T_{h,t}^{i}\) is higher than the target \(T_{T}^{i}\), and off when it is lower. This is a decentralized algorithm, which does not consider demand response but near-optimally controls the temperature. 
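A minimal sketch of this baseline, including a backup counter for the lockout, is given below; the 10-step lockout assumes the default 40-second lockout and 4-second time step, and the class and variable names are ours.

```python
class BangBangAC:
    """Bang-bang baseline: AC on above the target temperature, off below it,
    with a backup counter enforcing the compressor lockout."""
    def __init__(self, lockout_steps=10):     # 40 s lockout at a 4 s time step
        self.lockout_steps = lockout_steps
        self.remaining_lockout = 0
        self.on = False

    def act(self, T_air, T_target):
        if self.remaining_lockout > 0:
            self.remaining_lockout -= 1
        if self.on and T_air <= T_target:      # turning off starts the lockout
            self.on = False
            self.remaining_lockout = self.lockout_steps
        elif (not self.on) and T_air > T_target and self.remaining_lockout == 0:
            self.on = True
        return int(self.on)                    # 1 = on, 0 = off
```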
When the lockout duration \(l_{\text{max}}^{i}\) is 0, the BBC optimally controls the temperature, but does not account for the signal. As the base signal \(s_{0,t}\) is computed to allow optimal temperature control, BBC's signal tracking error is mainly due to the high-frequency variations of the signal. #### 4.1.2. Greedy myopic The greedy controller is a centralized algorithm that solves a knapsack problem (Kipip and Kipip, 1996) where the size of the collection is the regulation signal, the weight of each AC is its consumption \(P_{h,t}^{i}\), and its value is the temperature difference \(T_{h,t}^{i}-T_{T}^{i}\). At each time step, Acs are chosen based on a value priority computed by \((T_{h,t}^{i}-T_{T}^{i})/P_{h,t}^{i}\), until the aggregation's consumption \(P_{t}\) is higher than the regulation signal \(s_{t}\). As it does not plan for the future, the greedy myopic approach quickly runs out of available Acs as most of them are in lockout. However, with a 0-lockout duration \(l_{\text{max}}^{i}\), it is optimal to track the signal \(s_{t}\), and controls the temperature in second priority. We implement the greedy myopic approach as it is better adapted to these settings than the OO approach described in Section 2. Indeed, OO only uses past state information and must be implemented in a strictly online fashion. Both frameworks are myopic, and struggle similarly with the lockout constraint. #### 4.1.3. Model predictive control Model predictive control, or MPC, is in its nominal form a centralized algorithm modeling the environment and identifying the actions which will lead to the highest sum of rewards over a time horizon of \(H\) time steps. As the signal is stochastic, MPC assumes a constant future signal over horizon \(H\), and optimally solves the trajectory with lockout. However, because it is a large-scale combinatorial optimization problem, it scales poorly with the number of agents \(N\) and with a horizon \(H\). In the best case the complexity is polynomial, but it is exponential in the worst case. As a result, we were not able to run the MPC for more than 10 agents for \(H=60\)s, and had to increase the time step between each action to 12 seconds. More details are provided in Appendix D.1. ### Learning-based methods We deploy two algorithms using deep reinforcement learning, namely MA-DQN and MA-PPO, both using the CT-DE paradigm. While MA-DQN only uses hand-engineered communications, MA-PPO was implemented with two communications paradigms: hand-engineered and learned. Details about the architectures and hyperparameters are provided in Appendix D.2. #### 4.2.1. Centralized Training, Decentralized Execution The CT-DE paradigm (Kipip and Kipip, 1996) assumes that information is shared during the training of the agents, while they execute actions only based on their decentralized observations. This reduces the non-stationarity of the environment (Kipip and Kipip, 1996) and stabilizes the training. In our case, all agents are homogeneous, which allows the use of parameter sharing (Kipip and Kipip, 1996). As such, all Acs are controlled by identical instances of the same policy trained from the shared experience of all agents. #### 4.2.2. Ma-Dqn Multi-agent Deep Q-Network (MA-DQN) is the CT-DE adaptation of DQN (K is an attention-based targeted communication algorithm where each agent outputs a key, a message and a query. The key is sent along with the message to the other agents, which then multiply it with their query to compute the attention they give to the message. 
All messages are then aggregated using the attention as a weight. The three modules - key, message, query - are trained. TarMAC allows more flexibility to the agents: it does not restrict the contents of the communication, and it allows agents to communicate with a different number of houses than they were communicating with during training. More details are available in Appendix D.2.1. We refer to this version as TarMAC-PPO. _No communication._ It is also possible to train agents without communication. In this case, it only observes \(o_{I}^{i}\). This agent is referred to as MA-PPO-NC. #### 4.2.5. Agent training The learning agents were trained on environments with \(N_{\text{tr}}=\{10,20,50\}\) houses and communicating with \(N_{\text{clr}}=\{9,19,49\}\) other agents. We trained every agent on 16 different seeds: 4 for environment and 4 for network initialization. They were trained on 3286800 time steps, equivalent to 152 days, divided in 200 episodes. Each episode is initialized with each house having a temperature higher than the target, sampled from the absolute value of a 0-mean Gaussian distribution with \(\sigma=5^{\circ}\)C. We tuned the hyperparameters through a grid search, as shown in Appendix D. The contribution of this paper is to demonstrate that learning-based methods can lead to high performance on the problem of high frequency regulation. We therefore do not compile statistics over the trained agents; instead, for each situation, we select the two best agents over the seeds based on test return, and report the best score from these two on the benchmark environment. ## 5. Results and analysis ### Metrics of performance We deploy the agents on a benchmark environment with \(N_{\text{de}}\) houses on trajectories of 43200 steps, i.e., two full days. We evaluate their performance with the per-agent root mean square error (RMSE) between the regulation signal \(s_{I}\) and aggregated power consumption \(P_{I}\). We also measure the temperature RMSEs - one for all agents, one of the maximal temperature error of the aggregation - to ensure thermal control. Every house's temperature is initialized differently, so we start computing the RMSE when the temperature is controlled, after 5000 steps. For context, a single AC consumes 6000 W when turned on. Due to the MPC's computing time, its performance is evaluated differently, as explained in Appendix D.1. Unless mentioned otherwise, the results are the mean and standard deviation over 10 environmental seeds. ### Performance of agents Table 1 shows the performance of different agents in environments with and without lockout with \(N_{\text{de}}\) of 10, 50, 250 and 1000 houses. The per-agent signal RMSE generally goes down when \(N_{\text{de}}\) increases. This is due to the lower relative discretization error, but also because, with more agents, errors have more chances to cancel each other, as explained in Appendix E. As expected, BBC controls the temperature well, but does not track the signal. Without lockout, the greedy myopic shows near-optimal signal tracking, where errors are due to discretization. It also maintains good control of the temperature. With lockout, however, it fails, as it runs out of available agents. The MPC gives good results for 10 agents, but its performance is limited by the lower control frequency of 12 seconds. It could not be run on \(N_{\text{de}}=50\) for computing time reasons. DQN controls the temperature well but is only slightly better than BBC on the signal. 
Both PPO agents show significantly better performance, and TarMAC-PPO outperforms MA-PPO-HE at high \(N_{\text{de}}\). The results without communication will be discussed in Section 5.5. Figure 1 shows the behaviour of each agent over two days for 50 houses. Every point on the curves is averaged over 10 minutes. The mean offset captures the error's bias by averaging the differences such that positives and negatives cancel each other, while the mean error is the mean of the absolute differences. The signal and consumption curves start very high due to the initial situation, and then follow the sinusoidal pattern of the outdoor temperature. Without lockout, the BBC shows low temperature and signal offsets, with a significant signal error, as it does not track high-frequency variations of the signal. With the lockout, it under-consumes as explained in Section 4.1.1, leading to a positive temperature offset, and the base signal rises to compensate. As the signal variation amplitude is high, this does not strongly affect the error. The DQN agent has a smaller signal offset and error, especially at night when the amplitude of the signal variations is lower. During the day, the signal error is still significant. Both MA-PPO agents, on the other hand, have a near-0 offset in signal and temperature. Their signal error is also significantly lower than the others', because they are able to track the high-frequency variations. ### Scalability with number of agents As shown in Table 1, the PPO agents, and TarMAC-PPO especially, scale gracefully with the number of agents. Figure 2 shows the consumption and signal over 800 seconds for agents deployed on \(N_{\text{de}}=50\) and 1000 houses. For \(N_{\text{de}}=50\), the agents do not perfectly match the signal. However, the same agent does better on 1000 houses. Indeed, as the environment is homogeneous, the local strategy scales smoothly by averaging out errors. The best performing agents for TarMAC-PPO were trained on environments with \(N_{\text{tr}}=10\) houses. With MA-PPO-HE, it is often the agents trained on \(N_{\text{tr}}=20\) that had the best results. Training with \(N_{\text{tr}}=50\) probably makes the credit assignment harder, as shown in Figure 3. ### PPO agents' dynamics As visualized in Figure 4, both MA-PPO-HE and TarMAC-PPO policies keep the ACs in lockout or on, and never off. This is optimal for temperature control: an agent needing to be off to warm up after lockout would not have had the time to warm up during the lockout and was thus on for too long beforehand. The agents turn on as soon as they can, but control when they turn off based on the context and the messages of other agents. A fascinating feature of the learned policies is the cyclic behaviour used by MA-PPO-HE agents for coordination. As shown in Figure 4, the ACs turn on one after the other based on their positions in the aggregation, with a repetitive pattern. This happened for each MA-PPO-HE agent we trained, although the pattern period or moving direction was different. These patterns enable agent coordination thanks to the stable message structure, i.e., the fixed relative position of agent \(j\)'s message to agent \(i\) in the \(\tilde{o}_{t}^{i}\) vector. The TarMAC-PPO agents, on the other hand, do not follow a pattern in their collective behavior. Indeed, aggregated messages do not contain information about the structure of the neighbours. The coordination is done through flexible message contents.
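To make this contrast concrete, a minimal sketch of the two input constructions (hypothetical names, only meant to illustrate the positional difference):

```python
import numpy as np

def build_input_hand_engineered(own_obs, neighbour_messages):
    """MA-PPO-HE style: neighbours' messages are concatenated in a fixed order,
    so agent j's message always occupies the same slots of the input vector."""
    return np.concatenate([own_obs] + list(neighbour_messages))

def build_input_tarmac(own_obs, aggregated_message):
    """TarMAC-PPO style: a single attention-weighted aggregate carries no
    information about which neighbour sent what, hence no positional structure."""
    return np.concatenate([own_obs, aggregated_message])
```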
### Communications The agents need communications to coordinate and get the best results. Intuitively, the more agents to communicate with, the better the performance, because the observability of the environment is improved. In practice, this is not always the case, as shown in Figure 5. For TarMAC-PPO, communicating with 9 neighbours often leads to the best performance. Higher values of \(N_{\text{c}_{\text{de}}}\) can lead to a reduction of the weight of important messages in the aggregation. For MA-PPO-HE, communicating with 19 agents yields better results than with 49. Indeed, in MA-PPO-HE, the agents must have \(N_{\text{c}_{\text{de}}}=N_{\text{c}_{\text{tr}}}\). During training, communicating with more agents increases the credit assignment difficulty as it increases the input size with non-controllable elements. It is also clear in Figure 5 that agents trained to communicate do not cope well when not communicating. Figure 6 shows the performance of a TarMAC agent trained with \(N_{\text{tr}}=10\) and \(N_{\text{c}_{\text{tr}}}=9\) on an environment with \(N_{\text{de}}=50\) agents, when changing the number \(N_{\text{c}_{\text{de}}}\) of neighbours it can communicate with. The performance is poor with little communication but stabilizes around 7 or 8 agents. It is, however, possible to train an agent without communication to do better than Bang-Bang control, as shown by the performance of MA-PPO-NC in Table 1. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & & \multicolumn{3}{c|}{\(N_{\text{de}}=10\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=50\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=250\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=1000\)} \\ \hline \multicolumn{2}{|c|}{Per-agent} & Signal & T. & Max T. & Signal & T. & Max T. & Signal & T. & Max T. & Signal & T. & Max T. \\ \multicolumn{2}{|c|}{RMSE} & (W) & (C) & (C) & (W) & (C) & (C) & (W) & (C) & (C) & (W) & (C) & (C) \\ \hline No Lo & Greedy & \(194\pm 1\) & 0.04 & 0.06 & \(70\pm 1\) & 0.03 & 0.05 & \(63\pm 1\) & 0.03 & 0.052 & \(63\pm 1\) & 0.03 & 0.05 \\ & BBC & \(806\pm 147\) & 0.02 & 0.03 & \(392\pm 50\) & 0.02 & 0.04 & \(310\pm 11\) & 0.02 & 0.03 & \(272\pm 12\) & 0.02 & 0.03 \\ \hline \multirow{8}{*}{40s Lo} & Greedy & \(2668\pm 14\) & 0.87 & 0.93 & \(3166\pm 12\) & 1.09 & 1.15 & \(313\pm 12\) & 1.16 & 1.22 & \(3369\pm 15\) & 1.18 & 1.24 \\ & BBC & \(830\pm 207\) & 0.05 & 0.09 & \(426\pm 63\) & 0.05 & 0.10 & \(318\pm 7\) & 0.05 & 0.10 & \(296\pm 4\) & 0.05 & 0.10 \\ & MPC & \(344\pm 96\) & 0.07 & 0.12 & - & - & - & - & - & - & - & - & - \\ & MA-DQN & \(541\pm 86\) & 0.05 & 0.09 & \(321\pm 24\) & 0.05 & 0.10 & \(246\pm 8\) & 0.05 & 0.11 & \(234\pm 4\) & 0.05 & 0.12 \\ & MA-PPO-HE & \(253\pm 1\) & 0.04 & 0.08 & \(161\pm 8\) & 0.04 & 0.08 & \(127\pm 2\) & 0.04 & 0.11 & \(122\pm 3\) & 0.05 & 0.13 \\ & TarMAC-PPO & \(247\pm 3\) & **0.04** & **0.07** & **\(158\pm 2\)** & **0.04** & **0.09** & **\(115\pm 1\)** & **0.05** & **0.13** & **\(101\pm 2\)** & **0.05** & **0.14** \\ & MA-PPO-NC & \(434\pm 2\) & 0.06 & 0.08 & \(215\pm 1\) & 0.06 & 0.14 & \(132\pm 1\) & 0.06 & 0.16 & \(107\pm 1\) & 0.06 & 0.17 \\ \hline \end{tabular} \end{table} Table 1. Performance of the different agents, computed over 10 environment seeds. Figure 1. MA-PPO-HE and TarMAC-PPO outperform DQN and BBC for signal and temperature over 2 days with \(N_{\text{de}}=50\) agents. Figure 3. Training with more agents \(N_{\text{tr}}\) does not lead to better performance, even when deployed on large \(N_{\text{de}}\).
Figure 2. Both MA-PPO policies scale seamlessly in the number of agents: signal and consumption on 800s for \(N_{\text{de}}=50\) and 1000. Without coordinating with the others, an agent can learn to act well on average to minimize the signal error. When there are only a few agents, as when \(N_{\text{de}}=10\) or \(50\), this does not perform very well. However, the performance gap decreases when \(N_{\text{de}}\) increases: a good average policy will do well when applied on many agents. Another way to see this is that, with large \(N_{\text{de}}\), each agent's importance becomes negligible in the final result. As such, the group can be seen as a single average agent, and the problem can be posed as a mean field game (Sundundar et al., 2016; Sundar et al., 2017). Interestingly, MA-PPO-HE at high \(N_{\text{de}}\) does better with communication defects. This may be because the MA-PPO-HE coordination leads to locally biased policies, which do not benefit from the averaging effect reducing the relative error when \(N_{\text{de}}\) increases. ### Robustness All the results presented were produced under certain assumptions, such as homogeneous houses and ACs, consistent outdoor temperature and signal profiles, and faultless communication. If such agents were to be deployed in the real world, they would be confronted with situations where these conditions are not satisfied. In this section, we evaluate the robustness of our trained agents to different disturbances in the deployment conditions. #### 5.6.1. Faulty communications As previously demonstrated, communications are key for good performance of the agents. In this robustness test, we simulate defective communications. At every time step, each message \(m_{j}^{i}\) is defective with a probability \(p_{d}\). In the case of TarMAC-PPO, this leads to the message not being received. For MA-PPO-HE, every element of the message is set to \(0\). We tested the best agents for \(N_{\text{de}}=10\), \(50\), \(250\) and \(1000\) houses with \(p_{d}=0.1\) and \(0.5\), as seen in Table 2. MA-PPO-HE agents' coordination is based on their stable communication structure. As a result, they cope badly with defective communications. Interestingly, when \(N_{\text{de}}\) is higher, the impact decreases, even leading to better performance at \(N_{\text{de}}=1000\). This may be due to the fact that the resulting policies cannot coordinate locally and are less locally biased. TarMAC-PPO handles temporary communication defects perfectly, as its messages are aggregated. This is the case even with \(p_{d}=0.5\) and when the agent communicates with \(N_{\text{c}_{\text{tr}}}=9\) neighbours only. #### 5.6.2. Heterogeneous houses and ACs In reality, different houses have different thermal characteristics. The ACs also do not always have the same rated power or lockout duration. We deployed the best trained MA-PPO-HE and TarMAC-PPO agents on \(50\)-house environments that do not comply with these assumptions, to evaluate their robustness to separate disturbances. We also trained new agents on environments with these conditions, to allow the agents to learn to cope with heterogeneity. The relevant characteristics were observed by both agents as part of \(o_{t}^{i}\), and of the messages \(m_{j}^{i}\) in MA-PPO-HE. These agents are referred to with the -T suffix. The thermal characteristics heterogeneity was simulated by adding Gaussian noise to each element of \(\theta_{h}^{i}\) for each house, with a standard deviation of \(50\%\) of the original value (the final values cannot be negative).
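A minimal sketch of this perturbation (how non-negativity is enforced is not specified in the text, so the clipping below is an assumption):

```python
import numpy as np

rng = np.random.default_rng()

def perturb_thermal_parameters(theta, rel_std=0.5):
    """theta: array of nominal thermal parameters for one house.
    Adds zero-mean Gaussian noise with standard deviation equal to 50% of each
    nominal value, keeping the resulting values non-negative."""
    noisy = theta * (1.0 + rel_std * rng.standard_normal(theta.shape))
    return np.clip(noisy, 0.0, None)  # clipping is one possible choice; resampling is another
```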
For the ACs' cooling capacities \(K_{a}^{i}\), a value between \(10\), \(12.5\), \(15\), \(17.5\) and \(20\) kW was uniformly selected for each house. Finally, heterogeneity in the lockout duration \(l_{\text{max}}\) was tested by sampling uniformly between \(32\), \(36\), \(40\), \(44\) and \(48\) seconds. The results are shown in Table 3. Figure 4. State of 20 houses controlled with two different PPO agents. The number on the top right is the remaining lockout time. (Left) Two different agents of MA-PPO-HE with \(N_{\text{c}_{\text{de}}}=19\) show a “20-house” (up) and a “3-house” (down) pattern. (Right) Two different TarMAC-PPO agents show no such pattern. Figure 5. TarMAC-PPO’s performance does not increase after \(N_{\text{c}_{\text{de}}}=9\), while MA-PPO-HE is better with \(N_{\text{c}_{\text{de}}}=19\), for \(N_{\text{de}}=250\) agents. Figure 6. A TarMAC-PPO agent performs well as long as it communicates with \(N_{\text{c}_{\text{de}}}=7\) agents or more, on \(N_{\text{de}}=50\). TarMAC-PPO is much more robust to heterogeneity in agents than MA-PPO-HE. This is because in MA-PPO-HE the coordination scheme is based on the stable dynamics of the agents' neighbours, especially with the lockout duration. TarMAC-PPO is instead more flexible with respect to different dynamics. For both agents, it is possible to reduce the effect of heterogeneity by training the agents on such environments and allowing them to observe the characteristics. This is different for heterogeneity in the lockout duration, where TarMAC-PPO did not seem able to train satisfactorily under such conditions. An interesting observation is that the best TarMAC-PPO results were obtained when communicating with \(N_{\text{c}_{\text{tr}}}=49\) agents. With heterogeneous agents, more neighbours are needed for a representative input. #### 5.6.3. Other environments We also tested our agents on environments differing from the training environment, with a different outdoor temperature \(T_{\text{o}}\), solar gain \(Q_{\text{s}}\), an average signal \(D_{\text{a}}\) that is too low or too high, and higher or faster signal variations \(\delta_{\text{s}}\). As can be seen in Table 4, both agents are quite robust to such changes, with TarMAC-PPO usually leading to better results. When the signal is ill-behaved, i.e., too low or too high to allow correct control of the temperature, there is a tradeoff between the signal and the temperature objectives. MA-PPO-HE gives higher priority to temperature, leading to a higher signal RMSE. ### Processing time In Table 5, we report the processing time for action selection of the baseline and trained agents. The results are shown for 25 time steps (100 seconds of simulation), except for the MPC, which simulated 100 seconds with 10 time steps. They were computed on the 12-core, 2.2 GHz Intel i7-8750H CPU of a laptop computer. As the decentralized, learned agents only need a single forward pass in a relatively small neural network, the time for action selection is sufficiently low for control when using 4-second time steps. Centralized approaches such as the greedy myopic controller scale badly with many agents. MPC, already simplified with time steps of 12 seconds instead of 4 and a short horizon of 40 seconds, takes an unacceptable amount of time for more than 10 agents. ## 6. Conclusion
In this paper, we tackle the problem of high-frequency regulation with demand response by controlling discrete and dynamically constrained residential loads equipped with air conditioners, using decentralized, real-time agents trained by MA-PPO. We test two frameworks for local communication - fixed hand-engineered messages and learned targeted communication. The policies trained with few agents perform significantly better than baselines, scale seamlessly to large numbers of houses, and are robust to most disturbances. Our results show that MARL can be used successfully to solve some of the complex multi-agent problems induced by the integration of renewable energy in electrical power grids. Future work towards the application of such algorithms on real power systems could include sim2real transfer, integration of more complex flexible loads, as well as power grid safety issues. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{MA-PPO-HE} & \multicolumn{2}{c|}{TarMAC-PPO} \\ \hline Per-agent & Signal & Max T. & Signal & Max T. \\ RMSE & (W) & (C) & (W) & (C) \\ \hline Same as training & \(161\pm 8\) & 0.08 & \(158\pm 2\) & 0.09 \\ Solar gain & \(190\pm 6\) & 0.09 & \(174\pm 2\) & 0.10 \\ Outdoor T. + 4°C & \(203\pm 4\) & 0.11 & \(198\pm 2\) & 0.11 \\ Outdoor T. - 4°C & \(170\pm 1\) & 0.09 & \(184\pm 2\) & 0.12 \\ Signal average + 30\% & \(401\pm 2\) & 0.11 & \(302\pm 2\) & 0.14 \\ Signal average - 30\% & \(337\pm 4\) & 0.10 & \(317\pm 1\) & 0.11 \\ Signal noise amplitude + 30\% & \(188\pm 5\) & 0.08 & \(179\pm 3\) & 0.09 \\ Signal noise frequency + 100\% & \(200\pm 4\) & 0.08 & \(198\pm 5\) & 0.09 \\ \hline \end{tabular} \end{table} Table 4. Robustness to environment changes (5 seeds) \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{\(N_{\text{de}}=10\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=50\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=250\)} & \multicolumn{3}{c|}{\(N_{\text{de}}=1000\)} \\ \hline Per-agent & Signal & T. & Max T. & Signal & T. & Max T. & Signal & T. & Max T. & Signal & T. & Max T. \\ RMSE & (W) & (C) & (C) & (W) & (C) & (C) & (W) & (C) & (C) & (W) & (C) & (C) \\ \hline \multirow{3}{*}{MA-PPO-HE} & \(p_{d}=0\) & \(253\pm 1\) & 0.04 & 0.08 & \(161\pm 8\) & 0.04 & 0.08 & \(127\pm 2\) & 0.04 & 0.11 & \(122\pm 3\) & 0.05 & 0.13 \\ & \(p_{d}=0.1\) & \(504\pm 2\) & 0.07 & 0.14 & \(207\pm 1\) & 0.04 & 0.11 & \(138\pm 2\) & 0.05 & 0.13 & \(118\pm 1\) & 0.05 & 0.14 \\ & \(p_{d}=0.5\) & \(597\pm 2\) & 0.10 & 0.19 & \(274\pm 1\) & 0.06 & 0.15 & \(148\pm 1\) & 0.06 & 0.151 & \(115\pm 2\) & 0.06 & 0.17 \\ \hline \multirow{3}{*}{TarMAC-PPO} & \(p_{d}=0\) & \(247\pm 3\) & 0.04 & 0.07 & \(158\pm 2\) & 0.04 & 0.09 & \(115\pm 1\) & 0.05 & 0.13 & \(101\pm 2\) & 0.05 & 0.14 \\ & \(p_{d}=0.1\) & \(246\pm 2\) & 0.04 & 0.07 & \(159\pm 3\) & 0.04 & 0.09 & \(115\pm 2\) & 0.05 & 0.12 & \(101\pm 1\) & 0.05 & 0.14 \\ \cline{1-1} & \(p_{d}=0.5\) & \(248\pm 2\) & 0.04 & 0.07 & \(159\pm 3\) & 0.04 & 0.09 & \(115\pm 2\) & 0.05 & 0.13 & \(101\pm 1\) & 0.05 & 0.14 \\ \hline \end{tabular} \end{table} Table 2. Performance under faulty communication (5 seeds)
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{MA-PPO-HE} & \multicolumn{2}{c|}{MA-PPO-HE-T} \\ \hline Per-agent & Signal & Max T. & Signal & Max T. \\ RMSE & (W) & (C) & (W) & (C) \\ \hline Homogeneous & \(161\pm 8\) & 0.08 & - & - \\ House thermal & \(285\pm 8\) & 0.17 & \(222\pm 7\) & 0.11 \\ AC cooling & \(292\pm 3\) & 0.15 & \(181\pm 3\) & 0.14 \\ Lockout duration & \(324\pm 9\) & 0.15 & \(246\pm 4\) & 0.09 \\ \hline & \multicolumn{2}{c|}{TarMAC-PPO} & \multicolumn{2}{c|}{TarMAC-PPO-T} \\ \hline Homogeneous & \(158\pm 2\) & 0.09 & - & - \\ House thermal & \(184\pm 2\) & 0.12 & \(174\pm 2\) & 0.11 \\ AC cooling & \(187\pm 2\) & 0.16 & \(185\pm 9\) & 0.16 \\ Lockout duration & \(192\pm 3\) & 0.09 & \(251\pm 4\) & 0.08 \\ \hline \end{tabular} \end{table} Table 3. Performance under house and AC heterogeneity
2308.01591
Moderate deviations for rough differential equations
Small noise problems are quite important for all types of stochastic differential equations. In this paper we focus on rough differential equations driven by scaled fractional Brownian rough path with Hurst parameter H between 1/4 and 1/2. We prove a moderate deviation principle for this equation as the scale parameter tends to zero.
Yuzuru Inahama, Yong Xu, Xiaoyu Yang
2023-08-03T07:49:50Z
http://arxiv.org/abs/2308.01591v3
# Moderate deviations for rough differential equations ###### Abstract Small noise problems are quite important for all types of stochastic differential equations. In this paper we focus on rough differential equations driven by scaled fractional Brownian rough path with Hurst parameter \(H\in(1/4,1/2]\). We prove a moderate deviation principle for this equation as the scale parameter tends to zero. **Keywords.** rough path theory, moderate deviation principle, fractional Brownian motion. **Mathematics subject classification.** 60L20, 60F10, 60G22. ## 1 Introduction Consider the following stochastic differential equation (SDE) with a deterministic initial point \(a\in\mathbb{R}^{e}\) driven by a \(d\)-dimensional standard Brownian motion \((w_{t})_{t\in[0,1]}\) scaled by a small parameter \(\varepsilon\in(0,1]\): \[dY_{t}^{\varepsilon}=b(Y_{t}^{\varepsilon})dt+\varepsilon\sigma(Y_{t}^{ \varepsilon})\star dw_{t},\qquad Y_{0}^{\varepsilon}=a.\] Here, the coefficients \(\sigma\colon\mathbb{R}^{e}\to\mathbb{R}^{e\times d}\) and \(b\colon\mathbb{R}^{e}\to\mathbb{R}^{e}\) are sufficiently regular functions and \(\star dw_{t}\) denotes either the Ito stochastic differential \(dw_{t}\) or the Stratonovich one \(\circ dw_{t}\). Investigating various limiting behaviors of \(Y^{\varepsilon}=(Y_{t}^{\varepsilon})_{t\in[0,1]}\) as \(\varepsilon\searrow 0\) is quite important not just for the standard SDE as above but also for many variants of SDEs. These problems are called small noise problems. One of the most typical examples are Freidlin-Wentzell's large deviation principle (LDP) for \(\{Y^{\varepsilon}\}_{\varepsilon\in(0,1]}\). Another example could be a central limit-type theorem for \((Y^{\varepsilon}-Y^{0})/\varepsilon\), which states that this process converges in law to a Gaussian process. In this paper we take up a moderate deviation principle (MDP), which is in fact an LDP for \(\{Z^{\varepsilon}\}_{\varepsilon\in(0,1]}\) by definition, where we set \[Z_{t}^{\varepsilon}=\frac{Y_{t}^{\varepsilon}-Y_{t}^{0}}{\varepsilon^{\lambda }},\qquad 0<\lambda<1.\] This is equivalent to Freidlin-Wentzell's LDP when \(\lambda=0\), while \(\{Z^{\varepsilon}\}\) satisfies the central limit-type theorem when \(\lambda=1\). Therefore, the MDP bridge the gap between these two famous limit theorems. The following is a partial list of preceding works on MDPs of this kind. MDPs for various stochastic systems such as jump-type SDEs [3, 2], SDEs with delay [21], stochastic Hamiltonian systems [24], slow-fast systems [10, 9, 16, 12], and Volterra-type SDEs [17, 14] have already been proved. For MDPs for stochastic PDEs, see [22, 23, 18] among others. In these works, the driving noises are standard, i.e. either Brownian or Poisson type. Study of MDPs for SDEs driven by a (mixed) fractional Brownian motion is still in its infancy. To our knowledge, there are only three works [1, 8, 25]. All of them are quite recent and study the case where Hurst parameter is larger than \(1/2\). MDPs of this type is not known in the setting of rough path theory. (Before finishing this work, however, the author was informed of [11], in which an MDP is proved for certain rough partial differential equations. These equations look quite different from those in this paper.) However, to the best of the authors' knowledge, no such result is known for rough differential equations (RDEs) of standard type. Our main result (Theorem 3.2) is an MDP for RDEs driven by a scaled fractional Brownian rough path with Hurst parameter \(H\in(1/4,1/2]\). 
To prove it, we only use Lyons' continuity theorem, a Schilder-type LDP for fractional Brownian rough path and the contraction principle for LDPs. The rest of this paper is structured as follows. In Section 2, we discuss RDE for the process \(Z^{\varepsilon}\). Everything in this section is deterministic. The drift term of the RDE is unbounded, but thanks to [20], we can make sure that solutions never explode. Once non-explosion is confirmed, we can show that \(Z^{\varepsilon}\) satisfies Lyons' continuity theorem, that is, it depends continuously on both the driving rough path and the small parameter \(\varepsilon\). Section 3 is a probabilistic part. We start by recalling a Schilder-type LDP for fractional Brownian rough path on the geometric rough path space. Our main result is Theorem 3.2, in which the MDP is stated and proved. The proof is almost immediate from the continuity theorem for \(Z^{\varepsilon}\) since we can combine the contraction principle and the Schilder-type LDP. Besides, a central limit-type theorem is also provided in Proposition 3.1. **Notation:** In this paper we will use the following notation (unless otherwise specified). We write \(\mathbb{N}=\{1,2,\ldots\}\). The time interval of (rough) paths and stochastic processes is \([0,1]\). All the vector spaces are over \(\mathbb{R}\). Now we will introduce the notation for some Banach spaces. (Below, \(d,e\in\mathbb{N}\) and \(\nabla\) is the standard gradient on a Euclidean space.) * For brevity, we write \(\mathbb{R}^{e\times d}\) for the set of real \(e\times d\)-matrices. The identity matrix of size \(e\) is denoted by \(\mathrm{Id}_{e}\) or simply \(\mathrm{Id}\). Similarly, we write \(\mathbb{R}^{e+d}\) for \(\mathbb{R}^{d}\oplus\mathbb{R}^{e}\). * The set of all continuous path \(\varphi\colon[0,1]\to\mathbb{R}^{d}\) is denoted by \(\mathcal{C}(\mathbb{R}^{d})\). Equipped with the usual sup-norm \(\|\varphi\|_{\infty}\), this is a Banach space. For \(\alpha\in(0,1]\), the set of \(\alpha\)-Holder continuous paths is denoted by \(\mathcal{C}^{\alpha}(\mathbb{R}^{d}):=\{\varphi\in\mathcal{C}(\mathbb{R}^{d}) \colon\|\varphi\|_{\alpha}<\infty\}\), where \(\|\varphi\|_{\alpha}\) is the usual \(\alpha\)-Holder seminorm. Similarly, for \(p\in[1,\infty)\), the set of continuous paths of finite \(p\)-variation is denoted by \(\mathcal{C}^{p\text{-var}}(\mathbb{R}^{d})=\{\varphi\in\mathcal{C}(\mathbb{R}^ {d})\colon\|\varphi\|_{p\text{-var}}<\infty\}\) where \(\|\varphi\|_{\text{p-var}}\) is the usual \(p\)-variation seminorm. The set of continuous paths that start at \(0\) is denoted by \(\mathcal{C}_{0}(\mathbb{R}^{d})\). In a similar way, \(\mathcal{C}_{0}^{\alpha}(\mathbb{R}^{d})\) and \(\mathcal{C}_{0}^{\text{p-var}}\) are defined. * Let \(U\subset\mathbb{R}^{d}\) be a domain. For \(k\in\mathbb{N}\cup\{0\}\), \(C^{k}(U,\mathbb{R}^{e})\) denotes the set of \(C^{k}\)-functions from \(U\) to \(\mathbb{R}^{e}\). (When \(k=0\), we simply write \(C(U,\mathbb{R}^{e})\) instead of \(C^{0}(U,\mathbb{R}^{e})\).) The set of bounded \(C^{k}\)-functions \(f\colon U\to\mathbb{R}^{e}\) whose derivatives up to order \(k\) are all bounded is denoted by \(C^{k}_{\text{b}}(U,\mathbb{R}^{e})\). This is a Banach space with the norm \(\|f\|_{C^{k}_{\text{b}}}:=\sum_{i=0}^{k}\|\nabla^{i}f\|_{\infty}\). (Here, \(\|\cdot\|_{\infty}\) stands for the usual sup-norm on \(U\).) 
As usual, we set \(C^{\infty}(U,\mathbb{R}^{e}):=\cap_{k=0}^{\infty}C^{k}(U,\mathbb{R}^{e})\) and \(C^{\infty}_{\text{b}}(U,\mathbb{R}^{e}):=\cap_{k=0}^{\infty}C^{k}_{\text{b}}(U,\mathbb{R}^{e})\). * Let \(U\subset\mathbb{R}^{d}\) be a domain and \(\gamma>0\). We write \(\gamma=k+\alpha\) for \(k\in\mathbb{N}\) and \(\alpha\in(0,1]\) in a unique way. We say \(f\colon U\to\mathbb{R}^{e}\) is of \(\text{Lip}^{\gamma}\) if \(f\in C^{k}_{\text{b}}(U,\mathbb{R}^{e})\) and \(\nabla^{k}f\) is \(\alpha\)-Holder continuous on \(U\). The set of all such \(\text{Lip}^{\gamma}\)-functions is denoted by \(\text{Lip}^{\gamma}(U,\mathbb{R}^{e})\). The \(\text{Lip}^{\gamma}\)-norm is defined by \[\|f\|_{\text{Lip}^{\gamma}}:=\|f\|_{C^{k}_{\text{b}}}+\sup_{x,y\in U,x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{\alpha}}.\] Note that \(C^{k}_{\text{b}}(U,\mathbb{R}^{e})\subsetneq\text{Lip}^{k}(U,\mathbb{R}^{e})\) for every \(k\in\mathbb{N}\). * Let \(\alpha=1/p\in(0,1]\) and \(N\in\mathbb{N}\). If \(w\) belongs to \(\mathcal{C}_{0}^{\alpha}(\mathbb{R}^{d})\) or \(\mathcal{C}_{0}^{\text{p-var}}(\mathbb{R}^{d})\), then we can define \[S_{N}(w)_{s,t}^{m}:=\int_{0\leq t_{1}\leq\cdots\leq t_{m}\leq 1}dw_{t_{1}}\otimes\cdots\otimes dw_{t_{m}},\qquad 0\leq s\leq t\leq 1\] as an iterated Young integral for all \(m\) (\(1\leq m\leq N\)). We call \(S_{N}(w)\) the natural lift of \(w\). * Let \(\alpha\in(1/4,1/2]\). We denote by \(G\Omega_{\alpha}(\mathbb{R}^{d})\) the \(\alpha\)-Holder geometric rough path space over \(\mathbb{R}^{d}\). (See [7, 19] for a precise definition.) By definition, \(G\Omega_{\alpha}(\mathbb{R}^{d})\) is the closure of \(\{S_{\lfloor 1/\alpha\rfloor}(w)\colon w\in\mathcal{C}_{0}^{1}(\mathbb{R}^{d})\}\) with respect to the \(\alpha\)-Holder rough path metric. It also coincides with the closure of \(\{S_{\lfloor 1/\alpha\rfloor}(w)\colon w\in\mathcal{C}_{0}^{\beta}(\mathbb{R}^{d})\}\) for every \(\beta\in[1,2)\). ## 2 Deterministic Part Let \(\alpha\in(1/4,1/2]\) and \(\varepsilon\in(0,1]\). In this section, we consider the following rough differential equation (RDE) driven by \(\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d})\): \[dy_{t}^{\varepsilon}=b(y_{t}^{\varepsilon})dt+\varepsilon\sigma(y_{t}^{\varepsilon})d\mathbf{x}_{t},\qquad y_{0}^{\varepsilon}=a\in\mathbb{R}^{e}. \tag{2.1}\] In this work \(a\) is arbitrary, but basically fixed. In spirit, \(\varepsilon\) is a small constant. We will let \(\varepsilon\) tend to \(0\) later. It should be recalled that a unique solution of an RDE continuously depends on both the driving rough path and the coefficients with respect to appropriate topologies under natural assumptions. Though there are several formulations of RDEs, we adopt the one in Friz-Victoir's book [7] in this paper because the two main preceding results we use are both proved in that formulation ([7, Theorem 12.10] and [20, Theorem 3.1]). In this formulation, a solution of an RDE is a continuous path in the usual sense and has no "higher level" objects. (In any formulation, the first level path of a solution, i.e. the component that plays the role of a usual path, coincides with a solution in the above sense after an adjustment of the initial value.) Let \(\kappa\colon(0,1]\to(0,\infty)\) be a continuous, non-increasing function such that \(\lim_{\varepsilon\searrow 0}\varepsilon\kappa(\varepsilon)=0\). (In what follows we understand \(0\kappa(0)=0\).)
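A prototypical example (cf. Theorem 3.2 below) is \(\kappa(\varepsilon)=\varepsilon^{-\theta}\) with \(0<\theta<1\); note that this corresponds to the \(\varepsilon^{\lambda}\)-normalisation of the introduction, since \[\varepsilon\kappa(\varepsilon)=\varepsilon^{1-\theta}=\varepsilon^{\lambda},\qquad\lambda=1-\theta\in(0,1).\]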
We are interested in the following object: \[z_{t}^{\varepsilon}:=\frac{y_{t}^{\varepsilon}-y_{t}^{0}}{\varepsilon\kappa( \varepsilon)} \tag{2.2}\] At least formally, one can easily check that \(z^{\varepsilon}\) satisfies \[dz_{t}^{\varepsilon}=\left(\int_{0}^{1}\nabla b(y_{t}^{0}+\theta\varepsilon \kappa(\varepsilon)z_{t}^{\varepsilon})\langle z_{t}^{\varepsilon}\rangle d \theta\right)dt+\kappa(\varepsilon)^{-1}\sigma(y_{t}^{0}+\varepsilon\kappa( \varepsilon)z_{t}^{\varepsilon})d\mathbf{x}_{t},\quad z_{0}^{\varepsilon}=0.\] The above heuristic consideration leads us to study the following system of RDEs: \[dy_{t}^{0} =b(y_{t}^{0})dt, y_{0}^{\varepsilon} =a, \tag{2.3}\] \[d\hat{z}_{t}^{\varepsilon} =\left(\int_{0}^{1}\nabla b(y_{t}^{0}+\theta\varepsilon\kappa( \varepsilon)\hat{z}_{t}^{\varepsilon})\langle\hat{z}_{t}^{\varepsilon}\rangle d \theta\right)dt+\sigma(y_{t}^{0}+\varepsilon\kappa(\varepsilon)\hat{z}_{t}^{ \varepsilon})d\mathbf{x}_{t}, \hat{z}_{0}^{\varepsilon} =0. \tag{2.4}\] For the rest of this section, we will show some deterministic properties of (2.3)-(2.4). Note that this system of RDEs makes sense even when \(\varepsilon=0\). **Proposition 2.1**.: _Let \(\alpha\in(1/4,1/2]\), \(\varepsilon\in[0,1]\) and consider the system (2.3)-(2.4) of RDEs driven by \(\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d})\)._ (i) _Suppose that \(\sigma\) is of \(\mathrm{Lip}^{\gamma+1}\) for some \(\gamma>\alpha^{-1}\) and \(b\) is of \(\mathrm{Lip}^{2}\). Then, (2.3)-(2.4) has a unique (time-global) solution \((y^{0},\hat{z}^{\varepsilon})\) for every \(\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d})\), \(a\in\mathbb{R}^{e}\) and \(\varepsilon\in[0,1]\). Moreover, for every \(r>0\) and \(a\in\mathbb{R}^{e}\), there exists a constant \(C_{a,r}>0\) such that_ \[\|y^{0}\|_{\infty}+\|\hat{z}^{\varepsilon}\|_{\infty}\leq C_{a,r} \tag{2.5}\] _for every \(\varepsilon\in[0,1]\) and \(\mathbf{x}\) with \(\sum_{i=1}^{\lfloor 1/\alpha\rfloor}\|\mathbf{x}^{i}\|_{\alpha}^{1/i}\leq r\). Here, \(C_{a,r}\) depends only on \(r\) and \(|a|\) (and \(\sigma\), \(b\), \(\alpha\))._ (ii) _Suppose that \(\sigma\) is of \(\mathrm{Lip}^{\gamma+1}\) for some \(\gamma>\alpha^{-1}\) and \(b\) is of \(C_{\mathrm{b}}^{3}\). Then,_ \[[0,1]\times G\Omega_{\alpha}(\mathbb{R}^{d})\ni\ (\varepsilon,\mathbf{x}) \mapsto\hat{z}^{\varepsilon}\ \in\mathcal{C}^{\alpha}(\mathbb{R}^{e}) \tag{2.6}\] _is continuous._ Proof.: We set \(B_{R}=\{(x,y)\in\mathbb{R}^{e+e}:|x|^{2}+|y|^{2}<R^{2}\}\) for \(R>0\). We write \(\gamma=m+\lambda\) for a unique \((m,\lambda)\in\mathbb{N}\times(0,1]\). Recall that \(\varepsilon\mapsto\varepsilon\kappa(\varepsilon)\) is continuous on \([0,1]\). First, we show (i). The diffusion coefficient of the system of RDEs is \[\mathbb{R}^{e+e}\ni\begin{pmatrix}y\\ z\end{pmatrix}\mapsto\begin{pmatrix}\mathbf{0}\\ \sigma(y+\varepsilon\kappa(\varepsilon)z)\end{pmatrix}\in\mathbb{R}^{(e+e) \times d} \tag{2.7}\] which is again of \(\mathrm{Lip}^{\gamma+1}\). Since \(\varepsilon\kappa(\varepsilon)\) is bounded in \(\varepsilon\), its \(\mathrm{Lip}^{\gamma+1}\)-norm is bounded by \(c\|\sigma\|_{\mathrm{Lip}^{\gamma+1}}\), where \(c>0\) is a constant independent of \(\varepsilon\). The drift of the system of RDEs is \[\mathbb{R}^{e+e}\ni\begin{pmatrix}y\\ z\end{pmatrix}\mapsto\begin{pmatrix}b(y)\\ \int_{0}^{1}\nabla b(y+\theta\varepsilon\kappa(\varepsilon)z)\langle z\rangle d \theta\end{pmatrix}\in\mathbb{R}^{e+e}, \tag{2.8}\] which is clearly locally Lipschitz continuous. 
Moreover, it is of linear growth uniformly in \(\varepsilon\in[0,1]\), that is, \[|b(y)|+\left|\int_{0}^{1}\nabla b(y+\theta\varepsilon\kappa(\varepsilon)z) \langle z\rangle d\theta\right|\leq\|b\|_{\infty}+\|\nabla b\|_{\infty}|z|, \qquad y,z\in\mathbb{R}^{e}.\] Now, we use [20, Theorem 3.1], in which Lyons' continuity theorem was extended to the case of RDEs with drift vector field of linear growth. It assures the existence of a unique global solution \((y^{0},\hat{z}^{\varepsilon})\) for every \(\mathbf{x}\) and \(\varepsilon\). Inequality (2.5) is also proved in [20]. Next, we show (ii). We write \(\tilde{\sigma}_{\varepsilon}(y,z):=\sigma(y+\varepsilon\kappa(\varepsilon)z)\). Since it holds for all \(y,z\in\mathbb{R}^{e}\) and \(i\) (\(0\leq i\leq m\)) that \[\nabla^{i}\sigma(y+\varepsilon\kappa(\varepsilon)z)-\nabla^{i} \sigma(y+\varepsilon_{0}\kappa(\varepsilon_{0})z)\] \[=\{\varepsilon\kappa(\varepsilon)-\varepsilon_{0}\kappa( \varepsilon_{0})\}\int_{0}^{1}d\tau\nabla^{i+1}\sigma\left(\tau(y+ \varepsilon\kappa(\varepsilon)z)+(1-\tau)(y+\varepsilon_{0}\kappa( \varepsilon_{0})z)\right)\langle z\rangle,\] we can easily see that \[\lim_{\varepsilon\to\varepsilon_{0}}\sup_{(y,z)\in B_{R}}|\nabla^{i}\sigma(y+ \varepsilon\kappa(\varepsilon)z)-\nabla^{i}\sigma(y+\varepsilon_{0}\kappa( \varepsilon_{0})z)|=0\] for all \(R>0\) and \(i\) (\(0\leq i\leq m\)). Moreover, since \[\nabla^{m}\sigma(y_{1}+\varepsilon\kappa(\varepsilon)z_{1})- \nabla^{m}\sigma(y_{2}+\varepsilon\kappa(\varepsilon)z_{2})\] \[=\int_{0}^{1}d\tau\nabla^{m+1}\sigma\left(\tau(y_{1}+\varepsilon \kappa(\varepsilon)z_{1})+(1-\tau)(y_{2}+\varepsilon\kappa(\varepsilon)z_{2} )\right)\langle(y_{1}-y_{2})+\varepsilon\kappa(\varepsilon)(z_{1}-z_{2})\rangle,\] we can easily show for all \(R>0\) that the \(\lambda\)-Holder norm on \(B_{R}\) of \[B_{R}\ni\ (y,z)\mapsto\nabla^{m}\sigma(y+\varepsilon\kappa(\varepsilon)z)- \nabla^{m}\sigma(y+\varepsilon_{0}\kappa(\varepsilon_{0})z)\] converges to \(0\) as \(\varepsilon\to\varepsilon_{0}\). It should be noted that we used above only the dominated convergence theorem and that \(\nabla^{m+1}\sigma\) is bounded and uniformly continuous every bounded subset. (In other words, the Holder continuity of \(\nabla^{m+1}\sigma\) was not used). Combining these, we can see that \[[0,1]\ni\ \varepsilon\mapsto\tilde{\sigma}_{\varepsilon}\ \in\mathrm{Lip}^{ \gamma}(B_{R},\mathbb{R}^{\varepsilon\times d})\] is continuous for all \(R>0\). Similarly, we set \(\tilde{\beta}_{\varepsilon}(y,z):=\int_{0}^{1}\nabla b(y+\theta\varepsilon \kappa(\varepsilon)z)\langle z\rangle d\theta\). Then, essentially in the same way as above, we can also show that \[[0,1]\ni\ \varepsilon\mapsto\tilde{\beta}_{\varepsilon}\ \in\mathrm{Lip}^{1+ \delta}(B_{R},\mathbb{R}^{\varepsilon}) \tag{2.9}\] for every \(R>0\) and sufficiently small \(\delta>0\). Now, we use [7, Theorem 12.10 and Remark 12.7 (i)], which is a version of Lyons' continuity theorem for RDEs with drift. It claims that a solution of such an RDE continuously depends on both the driving rough path and the coefficients. Thanks to (i), we can use a standard cut-off technique. Combining these, we can show that \[[0,1]\times\{\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d}):\sum_{i=1}^{\lfloor 1 /\alpha\rfloor}\|\mathbf{x}^{i}\|_{\alpha}^{1/i}\leq r\}\ni\ (\varepsilon,\mathbf{x})\mapsto(y^{0},\hat{z}^{ \varepsilon})\ \in\mathcal{C}^{\alpha}(\mathbb{R}^{e+e})\] is continuous for all \(r>0\). This proves (ii). 
**Definition 2.2**.: We denote by \(\Phi\colon[0,1]\times G\Omega_{\alpha}(\mathbb{R}^{d})\to\mathcal{C}^{\alpha}(\mathbb{R}^{e})\) the map defined by (2.6), namely, \(\Phi(\varepsilon,\mathbf{x})=\hat{z}^{\varepsilon}\). **Proposition 2.3**.: _Let \(\alpha\in(1/4,1/2]\) and \(\varepsilon\in(0,1]\). Let \(y^{\varepsilon}\) be a unique solution of RDE (2.1) and set \(z^{\varepsilon}\) by (2.2). Then, we have_ \[z^{\varepsilon}=\Phi(\varepsilon,\kappa(\varepsilon)^{-1}\mathbf{x}),\qquad\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d}),\,\varepsilon\in(0,1].\] _Here, \(\kappa(\varepsilon)^{-1}\mathbf{x}\) is the dilation of \(\mathbf{x}\) by \(\kappa(\varepsilon)^{-1}>0\)._ Proof.: Let \(x\in\mathcal{C}^{1}(\mathbb{R}^{d})\) and denote its natural lift by \(\mathbf{x}:=S_{\lfloor 1/\alpha\rfloor}(x)\). In this case, \(y^{\varepsilon}\) is a unique solution of the following Riemann-Stieltjes ODE: \[dy_{t}^{\varepsilon}=b(y_{t}^{\varepsilon})dt+\varepsilon\sigma(y_{t}^{\varepsilon})dx_{t},\qquad y_{0}^{\varepsilon}=a\in\mathbb{R}^{e}.\] We can see from this that \[z_{t}^{\varepsilon} =\varepsilon^{-1}\kappa(\varepsilon)^{-1}\left\{\int_{0}^{t}\{b(y_{s}^{\varepsilon})-b(y_{s}^{0})\}ds+\varepsilon\int_{0}^{t}\sigma(y_{s}^{\varepsilon})dx_{s}\right\}\] \[=\varepsilon^{-1}\kappa(\varepsilon)^{-1}\int_{0}^{t}\{b(y_{s}^{0}+\varepsilon\kappa(\varepsilon)z_{s}^{\varepsilon})-b(y_{s}^{0})\}ds+\kappa(\varepsilon)^{-1}\int_{0}^{t}\sigma(y_{s}^{0}+\varepsilon\kappa(\varepsilon)z_{s}^{\varepsilon})dx_{s}\] \[=\int_{0}^{t}\left(\int_{0}^{1}\nabla b(y_{s}^{0}+\theta\varepsilon\kappa(\varepsilon)z_{s}^{\varepsilon})\langle z_{s}^{\varepsilon}\rangle d\theta\right)ds+\kappa(\varepsilon)^{-1}\int_{0}^{t}\sigma(y_{s}^{0}+\varepsilon\kappa(\varepsilon)z_{s}^{\varepsilon})dx_{s}.\] Hence, we have \(z^{\varepsilon}=\Phi(\varepsilon,\kappa(\varepsilon)^{-1}S_{\lfloor 1/\alpha\rfloor}(x))\) in this case. For a general \(\mathbf{x}\in G\Omega_{\alpha}(\mathbb{R}^{d})\), we take \(\{x_{k}\}_{k\in\mathbb{N}}\subset\mathcal{C}^{1}(\mathbb{R}^{d})\) such that \(\lim_{k\to\infty}S_{\lfloor 1/\alpha\rfloor}(x_{k})=\mathbf{x}\) in \(G\Omega_{\alpha}(\mathbb{R}^{d})\) and use the continuity of \(\Phi(\varepsilon,\cdot)\) and \(\mathbf{x}\mapsto y^{\varepsilon}\) for each fixed \(\varepsilon\in(0,1]\). ## 3 Probabilistic Part In this section, we take parameters as follows. Let \(H\in(1/4,1/2]\). If \(H\in(1/3,1/2]\), we take \(\alpha\in(1/3,H)\). If \(H\in(1/4,1/3]\), we take \(\alpha\in(1/4,H)\). (Note that \(\lfloor H^{-1}\rfloor=\lfloor\alpha^{-1}\rfloor\).) Denote by \((w_{t}^{H})_{t\in[0,1]}=(w_{t}^{H,1},\ldots,w_{t}^{H,d})_{t\in[0,1]}\) a \(d\)-dimensional fractional Brownian motion with Hurst parameter \(H\). A canonical rough path lift of \(w^{H}\) is denoted by \({\bf W}^{H}\) and is called the fractional Brownian rough path with Hurst parameter \(H\). It is viewed as a \(G\Omega_{\alpha}(\mathbb{R}^{d})\)-valued random variable. For \(m\in\mathbb{N}\), we denote by \(w^{H}(m)\) a piecewise linear approximation of \(w^{H}\) associated with \(\{i/2^{m}:0\leq i\leq 2^{m}\}\). It is known that \(S_{\lfloor 1/\alpha\rfloor}(w^{H}(m))\) converges (at least) in probability to \({\bf W}^{H}\) with respect to the \(\alpha\)-Holder rough path topology. We denote by \({\cal H}^{H}(\mathbb{R}^{d})\) the Cameron-Martin space of \(w^{H}\). Each \(h\in{\cal H}^{H}(\mathbb{R}^{d})\) is \(H\)-Holder continuous and of finite \(\{H+(1/2)\}^{-1}\)-variation (see [6, 5]). Note that \(1\leq\{H+(1/2)\}^{-1}<4/3\).
Hence, \(S_{\lfloor 1/\alpha\rfloor}(h)\) is well-defined in the variation setting, although its Holder regularity is not so clear a priori. However, it is known that \(S_{\lfloor 1/\alpha\rfloor}(h)\in G\Omega_{\alpha}(\mathbb{R}^{d})\) (see [6]). The injection \(S_{\lfloor 1/\alpha\rfloor}\colon{\cal H}^{H}(\mathbb{R}^{d})\hookrightarrow G \Omega_{\alpha}(\mathbb{R}^{d})\) is locally Lipschitz continuous. Let \(\varepsilon{\bf W}^{H}\) be the dilation of \({\bf W}^{H}\) by \(\varepsilon\in(0,1]\). A Schilder-type LDP is known, that is, \(\{\varepsilon{\bf W}^{H}\}_{\varepsilon\in(0,1]}\) satisfies an LDP on \(G\Omega_{\alpha}(\mathbb{R}^{d})\) as \(\varepsilon\searrow 0\) with speed \(\varepsilon^{-2}\) and a good rate function \(J\), which is defined by \[J({\bf x})=\left\{\begin{array}{ll}\|h\|_{{\cal H}^{H}(\mathbb{R}^{d})}^{2}/ 2&(\mbox{if ${\bf x}=S_{\lfloor 1/\alpha\rfloor}(h)$ for some $h\in{\cal H}^{H}(\mathbb{R}^{d})$}),\\ +\infty&(\mbox{otherwise}).\end{array}\right.\] (See [7, Theorem 15.55].) Moreover, \(\{\varepsilon{\bf W}^{H}\}_{\varepsilon\in(0,1]}\) is exponentially tight on \(G\Omega_{\alpha}(\mathbb{R}^{d})\), due to a Fernique-type theorem for \({\bf W}^{H}\). (See [7, Theorem 15.33].). For \(\varepsilon\in[0,1]\), let \(Y^{\varepsilon}\) be a unique solution of (2.1) with \({\bf x}\) being replaced by \({\bf W}^{H}\), namely, \[dY^{\varepsilon}_{t}=b(Y^{\varepsilon}_{t})dt+\varepsilon\sigma(Y^{ \varepsilon}_{t})d{\bf W}^{H}_{t},\qquad Y^{\varepsilon}_{0}=a\in\mathbb{R}^{e}, \tag{3.1}\] and set for \(\varepsilon\in(0,1]\) \[Z^{\varepsilon}_{t}:=\frac{Y^{\varepsilon}_{t}-Y^{0}_{t}}{\varepsilon\kappa( \varepsilon)}. \tag{3.2}\] Clearly, \(Z^{\varepsilon}\) is a \({\cal C}^{\alpha}(\mathbb{R}^{d})\)-valued random variable. **Proposition 3.1**.: _Consider the case \(\kappa\equiv 1\). Suppose that \(\sigma\) is of \(\mathrm{Lip}^{\gamma+1}\) for some \(\gamma>H^{-1}\) and \(b\) is of \(C^{3}_{\mathrm{b}}\). Then, as \(\varepsilon\searrow 0\),_ \[Z^{\varepsilon}_{t}=\frac{Y^{\varepsilon}_{t}-Y^{0}_{t}}{\varepsilon}\ \to\ \Phi(0,{\bf W}^{H})\] _in \({\cal C}^{\alpha}(\mathbb{R}^{d})\) almost surely. Moreover, \(\Phi(0,{\bf W}^{H})\) is a mean-zero Gaussian process._ Proof.: Let \(H^{-1}<\alpha^{-1}<\gamma\wedge(\lfloor H^{-1}\rfloor+1)\). The convergence is immediate from Propositions 2.1 and 2.3 (and Definition 2.2, too). So, it remains to show that Gaussian property. Since \[\Phi(0,\mathbf{W}^{H})=\lim_{m\to\infty}\Phi(0,S_{\lfloor 1/\alpha\rfloor}(w^{H}(m))) \qquad\text{a.s.},\] it suffices to check that \(\Phi(0,S_{\lfloor 1/\alpha\rfloor}(w^{H}(m)))\), which will be denoted by \(\Xi(m)\), is Gaussian with mean zero. By definition, \(\Xi(m)\) solves the following Riemann-Stieltjes ODE: \[d\Xi(m)_{t}=\nabla b(y_{t}^{0})\langle\Xi(m)_{t}\rangle dt+\sigma(y_{t}^{0})dw ^{H}(m)_{t},\qquad\Xi(m)_{0}=0.\] Let \(M\) be a unique solution of the following \(e\times e\) matrix-valued ODE: \[dM_{t}=\nabla b(y_{t}^{0})M_{t}dt,\qquad M_{0}=\operatorname{Id}_{e}.\] Note that \(\nabla b\) is viewed as an \(e\times e\) matrix-valued function. Then, \(M_{t}\) is invertible and non-random and we have \[\Xi(m)_{t}=M_{t}\int_{0}^{t}M_{s}^{-1}\sigma(y_{s}^{0})w^{H}(m)_{s}^{\prime}ds\] for all \(t\in[0,1]\). Note that, for all \(s\), \(w^{H}(m)_{s}^{\prime}\) can be written as a linear combination of \(\{w^{H}_{i/2^{m}}\colon 0\leq i\leq 2^{m}\}\). 
So, \(\Xi(m)_{t}\) can be written as a limit of linear combinations of \(\{w^{H}_{s}\colon 0\leq s\leq 1\}\), which implies that \(\Xi(m)\) is a mean-zero Gaussian process. Now we provide our main theorem. It is an MDP for RDEs driven by a fractional Brownian rough path with Hurst parameter \(H\in(1/4,1/2]\). A prominent example of \(\kappa\) is \(\kappa(\varepsilon)=\varepsilon^{-\theta}\) for \(0<\theta<1\). **Theorem 3.2**.: _Let \(H\in(1/4,1/2]\) and \(\alpha\in(0,H)\). Suppose that \(\kappa\colon(0,1]\to(0,\infty)\) is a continuous, non-increasing function such that \(\lim_{\varepsilon\searrow 0}\kappa(\varepsilon)=+\infty\) and \(\lim_{\varepsilon\searrow 0}\varepsilon\kappa(\varepsilon)=0\). Suppose further that \(\sigma\) is of \(\operatorname{Lip}^{\gamma+1}\) for some \(\gamma>H^{-1}\) and \(b\) is of \(C^{3}_{\mathrm{b}}\)._ _Then, \(\{Z^{\varepsilon}\}_{\varepsilon\in(0,1]}\) satisfies an LDP in \(\mathcal{C}^{\alpha}(\mathbb{R}^{d})\) as \(\varepsilon\searrow 0\) with speed \(\kappa(\varepsilon)^{2}\) and a good rate function \(I\) given by_ \[I(\xi)=\inf\{\|h\|_{\mathcal{H}^{H}(\mathbb{R}^{d})}^{2}/2\colon h\in\mathcal{H}^{H}(\mathbb{R}^{d})\text{ such that }\xi=\Xi^{h}\},\quad\xi\in\mathcal{C}^{\alpha}(\mathbb{R}^{d}).\] _As usual we set \(\inf\emptyset=+\infty\). Here, \(\Xi^{h}\) stands for a unique solution of the following Young ODE driven by \(h\):_ \[d\Xi^{h}_{t}=\nabla b(y_{t}^{0})\langle\Xi^{h}_{t}\rangle dt+\sigma(y_{t}^{0})dh_{t},\qquad\Xi^{h}_{0}=0. \tag{3.3}\] Proof.: The larger \(\alpha\) is, the stronger the claim of the theorem becomes. Hence, it is enough to assume \(H^{-1}<\alpha^{-1}<\gamma\wedge(\lfloor H^{-1}\rfloor+1)\). Consider the family of point masses \(\{\delta_{\varepsilon}\}_{\varepsilon\in(0,1]}\) on \([0,1]\). Clearly, it satisfies an LDP on \([0,1]\) as \(\varepsilon\searrow 0\) with speed \(\kappa(\varepsilon)^{2}\) and a good rate function \(K\), where \(K(0):=0\) and \(K(s):=+\infty\) if \(0<s\leq 1\). It is also clear that \(\{\delta_{\varepsilon}\}_{\varepsilon\in(0,1]}\) is exponentially tight on \([0,1]\). By a general fact for LDPs for product measures (see [4, p. 129] for instance), \(\{(\varepsilon,\kappa(\varepsilon)^{-1}{\bf W}^{H})\}_{\varepsilon\in(0,1]}\) satisfies an LDP on \([0,1]\times G\Omega_{\alpha}(\mathbb{R}^{d})\) as \(\varepsilon\searrow 0\) with speed \(\kappa(\varepsilon)^{2}\) and a good rate function \(\hat{J}\), where \[\hat{J}(\varepsilon,{\bf x}):=\left\{\begin{array}{ll}\|h\|^{2}_{{\cal H}^{H}(\mathbb{R}^{d})}/2&\mbox{(if $\varepsilon=0$ and ${\bf x}=S_{\lfloor 1/\alpha\rfloor}(h)$ for some $h\in{\cal H}^{H}(\mathbb{R}^{d})$)},\\ +\infty&\mbox{(otherwise)}.\end{array}\right.\] By Proposition 2.3, we have \(Z^{\varepsilon}=\Phi(\varepsilon,\kappa(\varepsilon)^{-1}{\bf W}^{H})\) and \(\Phi\) is continuous by Proposition 2.1. Therefore, we can use the contraction principle [4, Theorem 4.2.1] to obtain the desired LDP for \(\{Z^{\varepsilon}\}_{\varepsilon\in(0,1]}\) with a good rate function \(I\) given as follows: \[I(\xi)=\inf\{\|h\|^{2}_{{\cal H}^{H}(\mathbb{R}^{d})}/2\colon h\in{\cal H}^{H}(\mathbb{R}^{d})\mbox{ such that }\xi=\Phi(0,S_{\lfloor 1/\alpha\rfloor}(h))\},\quad\xi\in{\cal C}^{\alpha}(\mathbb{R}^{d}).\] Noting that \(\Xi^{h}=\Phi(0,S_{\lfloor 1/\alpha\rfloor}(h))\), this completes the proof. **Remark 3.3**.: By specializing \(H=1/2\) in Theorem 3.2, we recover known moderate deviation results for usual SDEs at least to some extent.
(Since we use rough path theory, the conditions on \(b\) and \(\sigma\) in this remark are stronger than those in preceding works.) Note that \(W^{H}\) is the Stratonovich-type Brownian rough path in this case. In this remark, \(\kappa\) is the same as in Theorem 3.2. (1) Suppose that \(\sigma\) is of \(\mbox{Lip}^{\gamma+1}\) for some \(\gamma>2\) and \(b\) is of \(C_{\rm b}^{3}\). Then, the solution \(Y^{\varepsilon}\) of RDE (3.1) coincides with a unique solution of the following usual Stratonovich-type SDE driven by standard Brownian motion \((w_{t}^{1/2})_{t\in[0,1]}\): \[dy^{\varepsilon}_{t}=b(y^{\varepsilon}_{t})dt+\varepsilon\sigma(y^{\varepsilon}_{t})\circ dw^{1/2}_{t},\qquad y^{\varepsilon}_{0}=a\in\mathbb{R}^{e}.\] Note that this SDE has a unique (non-exploding) solution because both \(b\) and \(\sigma\) are of \(C_{\rm b}^{2}\) (see a corollary in [13, p. 106] for instance). Therefore, an MDP for \(\{y^{\varepsilon}\}_{\varepsilon\in(0,1]}\) (i.e. an LDP for \(\{z^{\varepsilon}\}_{\varepsilon\in(0,1]}\), where \(z^{\varepsilon}:=(y^{\varepsilon}-y^{0})/\{\varepsilon\kappa(\varepsilon)\}\)) is a special case of Theorem 3.2 above. (2) Next, we discuss Ito-type SDEs. Suppose that \(\sigma\) is of \(C_{\rm b}^{4}\) and \(b\) is of \(C_{\rm b}^{3}\). Instead of (3.1), we consider the following RDE: \[d\tilde{Y}^{\varepsilon}_{t}=\tilde{b}_{\varepsilon}(\tilde{Y}^{\varepsilon}_{t})dt+\varepsilon\sigma(\tilde{Y}^{\varepsilon}_{t})d{\bf W}^{1/2}_{t},\qquad\tilde{Y}^{\varepsilon}_{0}=a\in\mathbb{R}^{e}, \tag{3.4}\] where \[\tilde{b}^{i}_{\varepsilon}(y):=b^{i}(y)-\frac{\varepsilon^{2}}{2}\sum_{j=1}^{d}\sum_{k=1}^{e}\sigma_{kj}(y)\cdot\partial_{k}\sigma_{ij}(y),\qquad y\in\mathbb{R}^{e},\ 1\leq i\leq e.\] In other words, \(b\) in (3.1) was replaced by \(\tilde{b}_{\varepsilon}\), which is of \(C_{\rm b}^{3}\) again. Note \(\varepsilon^{2}\) in front of the Ito-Stratonovich correction term. Then, \(\tilde{Y}^{\varepsilon}\) coincides with a unique solution of the following usual Ito-type SDE: \[d\tilde{y}^{\varepsilon}_{t}=b(\tilde{y}^{\varepsilon}_{t})dt+\varepsilon\sigma(\tilde{y}^{\varepsilon}_{t})dw^{1/2}_{t},\qquad\tilde{y}^{\varepsilon}_{0}=a\in\mathbb{R}^{e}.\] It should be noted that since \(\tilde{b}_{\varepsilon}\) depends on \(\varepsilon\), an MDP for \(\{\tilde{Y}^{\varepsilon}\}_{\varepsilon\in(0,1]}\) (equivalently, the one for \(\{\tilde{y}^{\varepsilon}\}_{\varepsilon\in(0,1]}\)) is not proved in Theorem 3.2. But, we can slightly modify Theorem 3.2 to cover this case as follows. First, \(b\) in RDE (2.4) is replaced by \(\tilde{b}_{\varepsilon}\). Then, the (new) drift vector field of this RDE, as a function of \(\varepsilon\), still satisfies the same property as in (2.9). Moreover, thanks to the factor \(\varepsilon^{2}\), the limiting skeleton ODE (3.3) remains unchanged, i.e. the correction term vanishes from this ODE. For these reasons, we can see that the same MDP holds for \(\{\tilde{Y}^{\varepsilon}\}_{\varepsilon\in(0,1]}\) (and \(\{\tilde{y}^{\varepsilon}\}_{\varepsilon\in(0,1]}\)), too. **Acknowledgments**: This work was partly supported by JSPS KAKENHI (Grant No. 20H01807), the Key International (Regional) Cooperative Research Projects of the NSF of China (Grant 12120101002) and the NSF of China (Grant 12072264).
2306.06752
QCD with an Infrared Fixed Point -- Pion Sector
The possibility that gauge theories with chiral symmetry breaking below the conformal window exhibit an infrared fixed point is explored. With this assumption three aspects of pion physics are reproduced if the the quark mass anomalous dimension at the infrared fixed point is $\gamma_* = 1$: First, by matching the long-distance scalar adjoint correlation function. Second, by perturbing the fixed point by a small quark mass, the $m_q$-dependence of the pion mass is reproduced by renormalisation group arguments. Third, consistency of the trace anomaly and the Feynman-Hellmann theorem, for small $m_q$, imply the same result once more. This suggests the following picture for the conformal window: close to its upper boundary $\gamma_*$ is zero and grows as the number of fermions is reduced until its lower boundary $\gamma_*=1$ is reached, where chiral symmetry breaking sets in. Below, the strongly coupled gauge theory with $\gamma_*=1$ is infrared dual to the free theory of pions. A possible dilaton sector of the scenario will be addressed in a companion paper.
Roman Zwicky
2023-06-11T19:39:40Z
http://arxiv.org/abs/2306.06752v2
# QCD with an Infrared Fixed Point - Pion Sector ###### Abstract The possibility that gauge theories with chiral symmetry breaking below the conformal window exhibit an infrared fixed point is explored. With this assumption three aspects of pion physics are reproduced if the quark mass anomalous dimension at the infrared fixed point is \(\gamma_{*}=1\): First, by matching the long-distance scalar adjoint correlation function. Second, by perturbing the fixed point by a small quark mass, the \(m_{q}\)-dependence of the pion mass is reproduced by renormalisation group arguments. Third, consistency of the trace anomaly and the Feynman-Hellmann theorem, for small \(m_{q}\), implies the same result once more. This suggests the following picture for the conformal window: close to its upper boundary \(\gamma_{*}\) is zero and grows as the number of fermions is reduced until its lower boundary \(\gamma_{*}=1\) is reached, where chiral symmetry breaking sets in. Below, the strongly coupled gauge theory with \(\gamma_{*}=1\) is infrared dual to the free theory of pions. A possible dilaton sector of the scenario will be addressed in a companion paper. ## 1 Introduction The idea that spontaneous chiral symmetry breaking in the strong interaction induces scale spontaneous symmetry breaking (SSB) predates QCD [1; 2; 3; 4; 5; 6]. The goal of this paper is to explore this idea within gauge theories, using parts of chiral perturbation theory (\(\chi\)PT) [7; 8; 9; 10] and the renormalisation group (RG). Whether this scenario corresponds to a new phase [11], or an unexplored feature of QCD has to be left open at this stage. The assumption of an infrared fixed point (IRFP) is non-standard. The main point of the paper is that under this hypothesis aspects of pion physics are reproduced consistently. An IRFP and scale SSB is accompanied by a (pseudo) Goldstone boson, known as the dilaton. Its features and interactions are less transparent than those of the pion, as scale symmetry is only emergent in the IR. Since the results presented here are seemingly independent of dilaton-aspects, its main discussion is postponed to a companion paper [12].1 At the end of the paper, we briefly comment on how the addition of a dilaton does not alter the results. Footnote 1: Possibly the most spectacular aspect of a dilaton is that massive hadrons, such as a nucleon, and a traceless energy momentum tensor (EMT) \(\langle N|T^{\rho}{}_{\rho}|N\rangle=0\) are compatible with each other [11]. The dilaton restores the dilatation Ward identity just as the pion does for chiral Ward identities. Another attractive feature is that the gauge theory contribution to the cosmological constant could be zero for \(m_{q}=0\)[11]. It is worthwhile to mention that the dilaton under discussion is not a gravity-scalar, such as in string theory, nor an accidentally light scalar, but a genuine Goldstone resulting from SSB, see e.g. [13] for a historical perspective. If QCD were to possess an IRFP and a dilaton, there is consensus that it corresponds to the \(\sigma\)-meson, known as the \(f_{0}(500)\) in the Particle Data Group [14]. The starting assumption is that the massless degrees of freedom, to which we will refer as IR-states, see the world as a conformal field theory (CFT) in the deep-IR.2 That is, the trace of the EMT on the IR-states \(\phi_{\rm IR}\) Footnote 2: See App. B for comments on scale versus conformal symmetry.
\[\langle\phi_{\rm IR}(p)|T^{\rho}{}_{\rho}|\phi^{\prime}_{\rm IR}(p)\rangle \to 0\;, \tag{1}\] vanishes for zero momentum transfer.3 It is though reasonable to assume that there exists a scheme for which \(\beta_{*}=0\) if (1) holds. Amongst those degrees of freedom are the vacuum, the pions resulting from chiral SSB and possibly the dilaton [11; 12]. Eq. (1) may be regarded as the minimal form by which IR-conformality manifests itself in the dilatation Ward identity. Technically this means that correlation functions in the deep-IR, and generally physical observables, are determined by the scaling dimension \(\langle\mathcal{O}(x)\mathcal{O}(0)\rangle\propto(x^{2})^{-\Delta_{\mathcal{O}}}\). The quark mass anomalous dimension \(\gamma_{m}\), denoted by \(\gamma_{*}=\gamma_{m}|_{\mu=0}\) at the IRFP, governs the scaling dimension of many important operators. The central result of this paper is that with the IRFP-assumption, this anomalous dimension must assume Footnote 3: This does not imply that any definition of a \(\beta\)-function assumes a zero in the IR as it is only the combination of \(\beta\) times the field strength tensor which is RG-invariant cf. Sec. 2.3.1. This aspect has for example been emphasised in the review [15]. \[\gamma_{*}=1\;. \tag{2}\] This is inferred in three different ways, by matching the pion low energy physics with the gauge theory. The value (2) is then important in two respects: it marks the lower boundary of the conformal window _and_ it describes the pion physics in the chirally broken phase in terms of the strongly coupled IRFP of the gauge theory. Whereas the former is compatible with previous work and lattice Monte-Carlo studies, as discussed in the conclusions, the latter is a new perspective. The main part of the paper consists of Sec. 2 where \(\gamma_{*}=1\) (2) is derived from: a) a specific long-distance correlator, b) the hyperscaling relation of the pion mass, and c) the matching of the trace anomaly with the Feynman-Hellmann theorem, given in Secs. 2.1, 2.2 and 2.3. respectively. In Sec. 3 we comment on what happens when a dilaton is added. The paper ends with a summary and discussion in Sec. 4. Apps. A, B and C contain conventions, related discussion of scale versus conformal invariance and the soft-pion theorem in use. ## 2 Consequences of an IRFP for QCD-like Theories The conformal window is reviewed as this work builds on it, and for further reading the reader is referred to [15; 18; 19]. The starting point is an asymptotically free gauge theory with gauge group \(G\), e.g. \(G=SU(N_{c})\), and \(N_{f}\) massless quarks in a given representation of \(G\). The point of study are the IR phases of these gauge theories as a function of \(N_{c}\), \(N_{f}\) and the quark representation, cf. Fig. 1. The figure on the left depicts the standard picture for non-supersymmetric gauge theories.4 The boundary in the \((N_{c},N_{f})\)-plane of where asymptotic freedom is lost is known and for \(N_{f}\) below the boundary the theories admit a perturbative IRFP, the so-called Caswell-Banks-Zaks FP [16; 17]. This phase, shown in green, continues until the coupling becomes strong enough for chiral symmetry to break via the formation of the quark condensate \(\langle\bar{q}q\rangle\neq 0\), marked in dark blue and collectively referred to as QCD. 
This breaks the global flavour symmetry \(SU(N_{f})_{L}\otimes SU(N_{f})_{R}\to SU(N_{f})_{V}\), accompanied by \(N_{f}^{2}-1\) massless pions as Goldstones and is believed to cause quarks and gluons to confine into hadrons. The exact boundary between the two phases is unknown and the matter of intensive debates in the literature. All evidence points towards a monotonically increasing \(\gamma_{*}\), cf. the list of references in the conclusions. A large \(\gamma_{*}\) is important for the walking technicolor scenario, e.g. [25; 18], and gave rise to efforts to determine it from lattice Monte Carlo simulations, e.g. [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39] as reviewed in [15; 19; 40]. Figure 1: Sketch of phase diagrams of gauge theories with quark matter as a function of the number of flavours \(N_{f}\) and colours \(N_{c}\), as described in the main text. “No AF” stands for no asymptotic freedom and its boundary is known from the Caswell-Banks-Zaks analysis [16; 17]. The lower dark green line marks the end of the conformal window and its precise location is unknown in the non-supersymmetric case. In the lower dark blue phase chiral symmetry is broken, hadrons confine and \(N_{f}=3\) and \(N_{c}=3\) represents QCD. (left) Literature-standard conformal window scenario. (centre) CD-QCD as a third phase as advertised in [11]. (right) QCD and CD-QCD are one and the same. We emphasise again that boundaries other than the one of AF are unknown and are shown for illustrative purposes only. In [11] the conformal dilaton was advocated as a third phase as shown in the central figure and its domain and location should not be taken literally. In this work we refer to this phase as the conformal dilaton CD of QCD. It seemed reasonable to assume that this phase lies in between the others as it is the same for its properties. Clearly neither its existence nor its location are certainties. Any of the three cases shown in Fig. 1 are logical possibilities. This paper consists in analysing the IRFP scenario, or the CD-QCD phase. It seems worthwhile to reemphasise that none of the results obtained directly depend on the presence of a dilaton. ### Deep-IR interpretation of the adjoint scalar correlator (\(m_{q}=0\)) For \(m_{q}=0\) the theory exhibits, the previously mentioned, scaling in correlation functions and this is what we will exploit in this section. The scalar operator, with \(J^{P}=0^{+}\) quantum numbers \[S^{a}=\bar{q}T^{a}q\, \tag{1}\] where \(T^{a}\) generates the flavour symmetry, is an example that offers itself since it is not perturbed by a single Goldstone. Consistency of the IRFP interpretation means that \[\langle S^{a}(x)S^{a}(0)\rangle_{\text{CD-QCD}}=\langle S^{a}(x)S^{a}(0) \rangle_{\chi\text{PT}}\,\quad\text{for}\ x^{2}\to\infty\, \tag{2}\] must hold, since they describe the same theory in the deep-IR limit \(x^{2}\to\infty\). The next two sections are devoted to this matching.5 Footnote 5: This correlator has been used in [41] to match \(\chi\)PT and the spectral representation. It was deduced that the correction to the Dirac eigenvalue density is \(\rho(\lambda)-\rho(0)=C|\lambda|\), where \(C\) is known and \(\rho(0)=-\langle\bar{q}q\rangle/\pi\) is the famous Banks-Casher relation [42]. #### 2.1.1 The CD-QCD correlator in the deep-IR It is our assumption that QCD is described by an IRFP in the deep-IR, which in turn means that CFT methods apply in that regime. 
In CFTs 2- and 3-point correlators [43; 44; 45; 46; 47; 48] are entirely governed by their scaling dimensions, \(\Delta_{\mathcal{O}}=d_{\mathcal{O}}+\gamma_{\mathcal{O}}\), which is the sum of the engineering dimension and the anomalous dimension. Concretely, for a Euclidean CFT \[\langle\mathcal{O}(x)\mathcal{O}^{\dagger}(0)\rangle_{\text{CFT}}\propto(x^{2 })^{-\Delta_{\mathcal{O}}}\, \tag{3}\] where \(x^{2}=x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\) and \(\langle\ldots\rangle\) denoting, hereafter, the vacuum expectation value. The behaviour in (3) should be mirrored by the correlation function (2) in the deep-IR. The only necessary ingredient is the scaling dimension of \(S^{a}\) which is \[\Delta_{S^{a}}=d_{S^{a}}-\gamma_{*}=3-\gamma_{*}\, \tag{4}\] since \(d_{S^{a}}=3\). Eq. (4) follows from \(\Delta_{S^{a}}=\Delta_{P^{a}}\), which holds at least in perturbation theory since the \(\gamma_{5}\) can be commuted through the diagram for \(P^{a}=\bar{q}i\gamma_{5}T^{a}q\) to recover \(S^{a}\) if \(m_{q}=0\) is assumed. In turn, \(\Delta_{P^{a}}=3-\gamma_{m}\) follows from the Ward identity \(\partial^{\mu}\langle A_{\mu}^{a}(x)P^{b}(0)\rangle\propto\delta^{(4)}(x) \delta^{ab}\langle\bar{q}q\rangle\) and the fact that \(A_{\mu}^{a}\) and \(m_{q}\bar{q}q\) and are RG invariants. This is true for the former since it is a softly conserved current and for the latter it follows or instance from the quantum action principle for which the reader is referred to [49], for a discussion in the perturbative context. With (3) and (4), one concludes that \[\langle S^{a}(x)S^{a}(0)\rangle_{\text{CD-QCD}}\propto(x^{2})^{-(3-\gamma_{*})}\;,\quad x^{2}\to\infty\;. \tag{5}\] #### 2.1.2 Leading order chiral perturbation theory In order to compute the correlator (2) in \(\chi\)PT, the QCD-operator \(S^{a}\) needs to be described in terms of pion fields. This can be done by the source method [7; 8; 9; 10], starting from the LO mass Lagrangian \[\delta\mathcal{L}_{m_{q}}=\frac{F_{\pi}^{2}B_{0}}{2}\text{Tr}[\mathcal{M}U^{ \dagger}+U\mathcal{M}^{\dagger}]\;, \tag{6}\] where \(B_{0}=-\langle\bar{q}q\rangle/F_{\pi}^{2}\) and the quark mass matrix is \(\mathcal{M}=m_{q}\mathbb{1}_{N_{f}}\) in our case. The operator \(S^{a}\) is obtained by replacing \(\mathcal{M}\to T^{a}J_{S^{a}}\) and differentiating the log of the Euclidean generating functional \(\mathcal{Z}\) w.r.t. to the source, \(\langle S^{a}(x)\rangle\leftrightarrow\delta_{J_{S^{a}}(x)}\ln\mathcal{Z}\), \[S^{a}=-\frac{F_{\pi}^{2}B_{0}}{2}\text{Tr}[T^{a}U^{\dagger}+UT^{a}]\propto B_ {0}d^{abc}\pi^{b}\pi^{c}+\mathcal{O}(1/F_{\pi}^{2})\;, \tag{7}\] where the \(\mathcal{O}(1/F_{\pi}^{2})\)-terms are cutoff-suppressed and thus next-LO.6 The computation of the correlator in LO \(\chi\)PT is now straightforward Footnote 6: Generically, \(d^{abc}\neq 0\) but for \(N_{f}=2\) it vanishes; \(d^{abc}d^{abc}\propto N_{f}^{2}-4\). This accidentality is of no special concern to the argument made in this section. \[\langle S^{a}(x)S^{a}(0)\rangle_{\chi\text{PT}}\propto B_{0}^{2}d^{abc}d^{abc }\langle\pi^{e}(x)\pi^{e}(0)\rangle^{2}\propto\frac{1}{x^{4}}\;,\quad\text{for $x^{2}\to\infty$}\;, \tag{8}\] where as anticipated \(m_{q}\to 0\) limit has been assumed. Above \(\langle\pi^{e}(x)\pi^{e}(0)\rangle=\frac{1}{(4\pi)^{2}}\frac{1}{x^{2}}\), with \(e\) fixed, is the standard Euclidean propagator for a massless scalar field \(\pi^{e}(x)\). Thus the LO \(\chi\)PT is just given by a free field theory computation as illustrated in Fig. 2. 
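For completeness, the Wick contraction behind (8) can be spelled out (a short intermediate step, using the free propagator quoted above): keeping only the term quadratic in the pion fields in (7), \[\langle S^{a}(x)S^{a}(0)\rangle_{\chi\text{PT}}\propto B_{0}^{2}\,d^{abc}d^{ab^{\prime}c^{\prime}}\left(\delta^{bb^{\prime}}\delta^{cc^{\prime}}+\delta^{bc^{\prime}}\delta^{cb^{\prime}}\right)\langle\pi^{e}(x)\pi^{e}(0)\rangle^{2}=2B_{0}^{2}\,d^{abc}d^{abc}\,\langle\pi^{e}(x)\pi^{e}(0)\rangle^{2}\;,\] so that each of the two massless propagators contributes a factor \(1/x^{2}\) and their product yields the \(1/x^{4}\) fall-off quoted in (8).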
#### 2.1.3 IRFP-matching and contemplation on \(\gamma_{*}=1\) in the wider picture The matching of CD-QCD and \(\chi\)PT, as in (2), with (5) and (8) enforces \[\gamma_{*}=1\;, \tag{9}\] Figure 2: Adjoint scalar correlation function in \(\chi\)PT (8) which behaves as \(1/x^{4}\) for large distances. which is the main result of this work. Let us try to put this result into perspective, before rederiving it in two different ways. First it is noted that, \(\gamma_{*}=1\) is considerably below the unitarity bound \(\gamma_{*}\leq 2\), which follows from \(\Delta_{\bar{q}q}=3-\gamma_{*}\geq 1\)[50]. The result gives rise to the following picture. For \(\gamma_{*}=0\) or \(\Delta_{S^{a}}=3\) it corresponds to two free fermions, whereas for \(\gamma_{*}=1\) or \(\Delta_{S^{a}}=2\) it describes two free scalar pions and finally for \(\gamma_{*}=2\) or \(\Delta_{S^{a}}=1\), when reaching the unitarity bound, it is equivalent to one free scalar particle [51]. The message seems to be that for integer powers of the scaling dimension, the theory lends itself to a free particle interpretation, cf. Fig. 3. Note that the gauge theories only seem to make use of the \([0,1]\)-range in \(\gamma_{*}\), which corresponds to only a third of the allowed range \(-1\leq\gamma_{*}\leq 2\) in the non-supersymmetric case. Of course, (8) cannot be viewed as novel from the \(\chi\)PT-viewpoint as it is simply the LO-analysis. However, what is new is the way in which this is realised in the gauge theory. The free pions are IR-dual to a gauge theory with a strongly coupled IRFP; strongly coupled since the anomalous dimension is large. These types of interpretations hold in many EFT formulations of weakly coupled ultraviolet Lagrangians, and may be regarded as the very purpose of the EFT-programme when the microscopic formulation is known. This suggest the following picture for the conformal window. At the upper end \(\gamma_{*}=0\) and then \(\gamma_{*}\) increases as \(N_{f}\) is lowered and when \(\gamma_{*}=1\) is reached chiral symmetry is broken and confinement sets in. The anomalous dimension \(\gamma_{*}\), then remains one in the entire domain of the CD phase. As mentioned in the introduction the latter could be or not be identical to QCD itself. The result (9) is consistent with \(\mathcal{N}=1\) supersymmetric gauge theories, as mentioned previously. ### Scaling of the pion mass implies \(\gamma_{*}=1\) (\(m_{q}\neq 0\)) In what follows the IRFP is perturbed by a non-vanishing quark mass \(m_{q}\). Even though the quark mass is scheme-dependent, the physics can be analysed by tracking powers of the rescaled bare mass. This is the standard method of hyperscaling extensively applied to the conformal window [53; 54; 55; 56; 57] where hadrons appear when a quark mass term is introduced.7 The difference in the scenario at hand is that chiral symmetry is spontaneously broken, and this introduces a natural cutoff scale \(\Lambda=4\pi F_{\pi}\)[59]. The quantity \(F_{\pi}\approx 93\,\mathrm{MeV}\) in QCD is the pion decay constant and the order parameter of chiral symmetry breaking [8; 9; 10]. We assume that the \(\chi\)PT-cutoff \(\Lambda\) does not affect the LO \(m_{q}\)-behaviour of the pion mass, which is natural from the viewpoint of \(\chi\)PT itself which is organised in a \(1/\Lambda\)-expansion. Under this assumption the behaviour of the pion mass is governed by hyperscaling due to the RG, in the same way as in the conformal window. 
The result, perhaps most cleanly derived in [54], is Footnote 7: The idea is that for \(m_{q}\neq 0\) the quarks decouple leaving behind pure Yang-Mills which is known to confine [58]. Hence there are hadrons and hadronic observables which however need to vanish when \(m_{q}\to 0\). The way this happens is dictated by the RG [53; 54; 56]. \[m_{\pi}^{2}|_{\mathrm{RG}}\propto m_{q}^{\frac{2}{1+\gamma_{*}}}\, \tag{10}\] where \(\gamma_{*}\) is the previously introduced mass anomalous dimension at the FP. In QCD the linear behaviour \[m_{\pi}^{2}|_{\rm QCD}\propto m_{q}\;, \tag{11}\] is deducible in many ways such as from the GMOR-relation [60], derived in App. C.1 from a double soft-pion theorem. Since Eq. (11) holds in QCD, we assume it would in a CD-QCD phase as well. The dilaton is not affecting the LO pion mass. Hence, equating Eqs. (10) and (11) implies the central result in \(\gamma_{*}=1\) (9) once more. ### Trace Anomaly and Feynman-Hellmann theorem (\(m_{q}\neq 0\)) The goal of this section is to show that the trace anomaly and the Feynman-Hellmann theorem are compatible if \(\gamma_{*}=1\) for an IRFP upon applying the formula \[2m_{\pi}^{2}=\langle\pi^{a}(p)|T^{\rho}_{\;\;\rho}|\pi^{a}(p)\rangle\;,\quad a \text{ fixed}\;. \tag{12}\] The validity of (12) when a dilaton is added will be commented on in Sec. 3. Figure 3: Range of possible IRFP anomalous dimension \(\gamma_{*}\). As emphasised in the main text, integer values seem to play a special role. The value of \(\gamma_{*}\) is bounded from above by the unitarity bound \(\gamma_{*}\leq 2(1)\), \(\Delta_{\bar{q}q}=3-\gamma_{*}\geq 1(\Delta_{\bar{Q}Q}=2-\gamma_{*}\geq 1)\)[50] in QCD-like theories (\(\mathcal{N}=1\) SUSY). The lower bound \(\gamma_{*}>-1\) comes from the requirement of soft breaking such that the PCAC is not spoiled [52]. The value \(\gamma_{*}=0\) corresponds to the trivial FP at the upper end of the conformal window cf. Fig. 1. As the number of flavours is lowered, \(\gamma_{*}\) raises and as it reaches \(\gamma_{*}=1\), chiral symmetry breaking sets in marking the lower end of the conformal window. This is true in \(\mathcal{N}=1\), cf. footnote 4, and in this paper this is conjectured to hold in QCD-like theories as well. The peculiarity of \(\mathcal{N}=1\) is that the unitarity bound and the end of the conformal window coalesce whereas this does not seem to be the case in QCD-like theories. #### 2.3.1 The \(T^{0}_{\ \rho}\)-anomaly and renormalisation group invariant combinations The part relevant to physical matrix elements of the trace anomaly reads8 Footnote 8: The trace anomaly was first observed in correlation functions [61; 62; 63] and subsequently worked out in detail [49; 64; 65; 66] including equation of motions and BRST exact terms arising upon gauge fixing. \[T^{\rho}_{\ \rho}|_{\rm phys}=\frac{\beta}{2g}G^{2}+\sum_{q}m_{q}(1\!+\!\gamma_{m} )\bar{q}q\;, \tag{13}\] where all quantities including the composite operators are renormalised. An important aspect is that \(m\bar{q}q\) is an RG invariant as mentioned previously. Since \(T^{\rho}_{\ \rho}\) is an RG invariant, the following two combinations \[O_{1}=\frac{\beta}{2g}G^{2}+\sum_{q}\gamma_{m}m_{q}\bar{q}q\;,\quad O_{2}= \sum_{q}m_{q}\bar{q}q\;, \tag{14}\] are RG invariants, or equally so, with \(\delta\gamma\equiv\gamma_{m}-\gamma_{*}\) \[O^{\prime}_{1}=\delta\gamma\,\sum_{q}m_{q}\bar{q}q+\frac{\beta}{2g}G^{2}\;, \quad O^{\prime}_{2}=(1+\gamma_{*})\sum_{q}m_{q}\bar{q}q\;, \tag{15}\] since the FP value \(\gamma_{*}\) is an RG invariant. 
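For completeness, the physical trace anomaly (13) is simply the sum of the two primed combinations, \[T^{\rho}_{\ \rho}|_{\rm phys}=O^{\prime}_{1}+O^{\prime}_{2}\;,\] since \(\delta\gamma+(1+\gamma_{*})=1+\gamma_{m}\); this decomposition is what is used in the next step.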
Using (12) to (13), the \(\mathcal{O}(m_{q})\)-contribution then follows \[2m_{\pi}^{2}=(1+\gamma_{*})\sum_{q}m_{q}\langle\pi|\bar{q}q|\pi\rangle+ \mathcal{O}(m_{q}^{2})\;, \tag{16}\] The statement of (16) is that \(O^{\prime}_{2}\) is the leading operator in the quark mass and that \(O^{\prime}_{1}\) is suppressed. Expanding around the FP in \(\delta g=(g-g^{*})\), \(\frac{d}{d\ln\mu}\delta g=\beta(g)=\beta^{\prime}_{*}\delta g+\mathcal{O}( \delta g^{2})\), reveals that \[\delta g\propto\mu^{\beta^{\prime}_{*}}\propto m_{q}^{\frac{\beta^{\prime}_{*} }{1+\gamma_{*}}}\;, \tag{17}\] where in the last equality hyperscaling has been used, which can been derived from blocking transformations [54]. If \(\beta^{\prime}_{*}>0\) then all the missing terms in (16) are power-suppressed in \(m_{q}\). If \(\beta^{\prime}_{*}=0\) then they are only logarithmically suppressed.9 The region \(\beta^{\prime}_{*}<0\) is not allowed as otherwise there is no IRFP. We therefore conclude that it is justified to neglect the \(\delta\gamma\)- and \(\beta\)-terms to LO. Footnote 9: From the viewpoint of \(\chi\)PT the relative corrections are of order \(\mathcal{O}(m_{q}\ln m_{q})\), e.g. [9; 10], and thus it is possible that the RG-analysis does not reveal the true next-to-LO behaviour. This is not relevant for the point we are making but worthwhile to investigate further. #### 2.3.2 The \(T^{0}_{\ 0}=h\) viewpoint - Feynman-Hellmann theorem The Feynman-Hellmann theorem [67; 68] offers a way to obtain the LO quark mass dependence directly from the Hamiltonian \(\delta H_{m}=\sum_{q}m_{q}\bar{q}q\) by differentiation in \(m_{q}\). It is technically convenient to use states, \(\langle\hat{\pi}(p^{\prime})|\hat{\pi}(p)\rangle=(2\pi)^{3}\delta^{(3)}(\vec{ p}-\vec{p}^{\prime})\), which are normalised in non-relativistic manner. One can switch to the usual states by \(|\pi\rangle=|\hat{\pi}\rangle\sqrt{2E_{\pi}}\) after the \(m_{q}\)-differentiation. The Feynman-Hellmann formula implies \[\partial_{\ln m_{q}}E_{\pi}=\sum_{q}m_{q}\langle\hat{\pi}|\bar{q}q|\hat{\pi} \rangle+\mathcal{O}(m_{q}^{2})\;, \tag{18}\] where \(E_{\pi}=\langle\hat{\pi}|H|\hat{\pi}\rangle\), \(V\leftrightarrow(2\pi)^{3}\delta^{(3)}(0)\) and \(\partial_{m_{q}}\langle\hat{\pi}(p^{\prime})|\hat{\pi}(p)\rangle=0\) have been used. Switching back to standard pion states \(|\pi\rangle\) and using \(\partial_{m_{q}}E_{\pi}^{2}=\partial_{m_{q}}m_{\pi}^{2}\), which follows from the \(m_{q}\)-independence of the 3-momentum \(\vec{p}\), one obtains \[\partial_{\ln m_{q}}2m_{\pi}^{2}=2\sum_{q}m_{q}\langle\pi|\bar{q}q|\pi\rangle+ \mathcal{O}(m_{q}^{2})\;. \tag{2.19}\] Further assuming \(m_{\pi}^{2}=\mathcal{O}(m_{q})\) then gives10 Footnote 10: The use of the Feynman-Hellmann theorem and its derivative is crucial. If one were to use the Hamiltonian, written schematically as \(H=\vec{E}^{2}+\vec{B}^{2}+\sum_{q}m_{q}\bar{q}q\), then applying the states would result in \(2E_{\pi}=\langle\pi|\vec{E}^{2}+\vec{B}^{2}|\pi\rangle+\langle\pi|\sum_{q}m_{ q}\bar{q}q|\pi\rangle\), where the momentum dependence of \(E_{\pi}\) has to reside in the electromagnetic \(\vec{E}^{2}+\vec{B}^{2}\) matrix element, cf. [69] for related discussions. \[2m_{\pi}^{2}=2\sum_{q}m_{q}\langle\pi|\bar{q}q|\pi\rangle+ \mathcal{O}(m_{q}^{2})\;, \tag{2.20}\] the formula linear in the quark mass. The formula (2.20) in itself is not new and has been used and derived in the literature frequently e.g. [70, 71]. The correctness of (2.20) is verified in App. C.1 by reproducing the GMOR-relation [60]. 
This is important as an incorrect numerical prefactor, by matching to the trace anomaly below, would give an incorrect \(\gamma_{*}\). #### 2.3.3 Matching the two mass formulae The two mass formulae, (2.16) and (2.20), are, once more, compatible with each other if and only if \(\gamma_{*}=1\). We consider this an important result since the assumption is weaker than in Sec. 2.2.11 In that section we assumed that the renormalisation behaviour (hyperscaling), in the pion sector at LO in the quark mass, is unaffected by the presence of the \(\chi\)PT cutoff \(\Lambda=4\pi F_{\pi}\). Here we merely assumed that the \(\beta\)- and \(\delta\gamma\)-terms can be neglected in the vicinity of the FP. That the RG-scale \(\mu\) can be made arbitrarily small seems a lesser assumption and is thus more satisfying in our view. Footnote 11: In the setting of the conformal window, with quark mass deformation ([56] and footnote 7), both approaches lead to \(m_{\phi}^{2}\propto(m_{q})^{2/(1+\gamma_{*})}\) where \(\phi\) stands for any hadron. The situation is though different in the case at hand because of the cutoff scale mentioned above. It seems worthwhile to point out that independent of whether there is an IRFP or not, \(\langle\pi|O_{1}|\pi\rangle=\langle\pi|O_{2}|\pi\rangle+\mathcal{O}(m_{q}^{2})\). Or in more familiar notation \[\langle\pi|\frac{\beta}{2g}G^{2}+\sum_{q}m_{q}\gamma_{m}\bar{q}q|\pi\rangle= \langle\pi|\sum_{q}m_{q}\bar{q}q|\pi\rangle+\mathcal{O}(m_{q}^{2})\;, \tag{2.21}\] assures that the trace anomaly and Feynman-Hellmann derivation of the LO pion mass are consistent with each other. The solution for an IRFP, \(\beta\to\beta_{*}=0\) and \(\gamma_{m}\to\gamma_{*}=1\), is a straightforward one. Other solutions, not related to an IRFP, demand a specific interplay between the \(\beta\)- and \(\gamma\)-term. This is though perfectly possible since \(O_{1}\) in (2.14) is an RG invariant. ## 3 Brief Comments on the Addition of a Dilaton The results obtained did not make use of the presence of a dilaton. Conversely, if pion physics can be interpreted by an IRFP then this suggests that a dilaton could be present, and it is a valid question whether the latter would impact any of the results obtained. Let us refer for practical reasons to the dilaton as the lightest state in the \(J^{PC}=0^{++}\) flavour singlet channel. If the dilaton remains massive in the limit where the explicit symmetry breaking is removed, \(m_{q}\to 0\), then it can simply be integrated out in the deep-IR and everything remains the same. If on the other hand it becomes massless in that limit then a closer inspection is needed. We proceed case by case. * For the long-distance correlator in Sec. 2.1 there would be a diagram where one of the propagating pions in Fig. 2 is replaced by a dilaton. This gives rise to the same \(1/x^{4}\) behaviour as in (8) but with a different prefactor, in particular an \(N_{f}\)-independent one, which excludes exact cancellation. Hence the conclusions remain unchanged. * The hyperscaling argument of Sec. 2.2 is also unaltered but it implies in turn \(m_{D}^{2}\propto m_{q}\) in the same way as it does for the pion. * The matching of the trace anomaly and the Feynman-Hellmann theorem in Sec. 2.3 is more subtle and requires some care. In the case of a massless dilaton the standard formula \(2m_{\phi}^{2}=\langle\phi|T^{\rho}_{\rho}|\phi\rangle\), where \(\phi\) is a physical state, cannot be used because of the dilaton pole [11]. 
However, in the case of the pion (12) holds since the effect of the dilaton pole for massless states, such as the pion, is undone by its coupling to pions. Concretely, \[\langle\pi^{a}(p^{\prime})|T_{\mu\nu}|\pi^{a}(p)\rangle\supset c\left(q_{\mu} q_{\nu}-q^{2}\eta_{\mu\nu}\right)\frac{g_{D\pi\pi}}{q^{2}-m_{D}^{2}}\,\quad q\equiv p-p^{\prime}\] (11) where \(c=\text{const}\times F_{D}\) and \(g_{D\pi\pi}\) is given by [3, 12, 72, 73] \[g_{D\pi\pi}=\frac{1}{F_{D}}(2q^{2}+2(1-\gamma_{*})m_{\pi}^{2}+\mathcal{O}(m_{ q}^{2}))\,\] (12) where \(F_{D}\) is the dilaton decay constant as defined in [11]. The \(q^{2}\)-dependence originates from the pion kinetic term to which the dilaton couples. Taking the trace in (11) and the limit \(q^{2}\to 0\) we learn that this term does not contribute. The pion and the dilaton are both massive due to \(m_{q}\neq 0\), which would be enforced in a systematic EFT approach. The behaviour for massive hadrons is qualitatively different since \(g_{D\phi\phi}\propto m_{\phi}^{2}/F_{D}=\mathcal{O}(\Lambda)\), here for a scalar \(\phi\), does not vanish any limit. It is precisely this behaviour that gives rise to the vanishing of the trace of the EMT for a massless dilaton [11]. A further point of concern is that the dilaton could alter the evaluation of the matrix element \(\langle\pi^{a}|\bar{q}q|\pi^{a}\rangle\) in App. C.1. The dilaton contribution to this matrix element is analogous to (11) is \(\propto g_{D\pi\pi}/(q^{2}-m_{D}^{2})\) (without the \(q\)-dependent prefactor). Assuming \(\gamma_{*}\to 1\) and \(q^{2}\to 0\) we learn that this term does not contribute. Two remarks are in order. First to use \(\gamma_{*}=1\) is legitimate since it has already been concluded by equating (16) and (20). Second it is crucial to keep \(m_{D}\neq 0\), due to \(m_{q}\neq 0\). We infer from our considerations that the addition of a (massless) dilaton does not alter the results. ## 4 Summary and Conclusions In this work we have offered an interpretation of low energy pion physics in terms of a strongly coupled infrared fixed point of QCD-like gauge theory. Colloquially speaking, this means that the infrared states such as the pions experience the world as a conformal field theory in the deep infrared. Comparing observables in the conformal or renormalisation group picture with standard pion physics we deduced in three ways that the quark-mass anomalous dimension takes on the value \(\gamma_{*}=1\), at the fixed point. Namely, * by requiring consistency between the leading order \(\chi\)PT and the CFT-interpretation of the adjoint scalar correlator \(\langle S^{a}(x)S^{a}(0)\rangle\) in Sec. 2.1. * by renormalisation-group arguments and assuming that the \(\chi\)PT cutoff \(\Lambda=4\pi F_{\pi}\) does not affect leading quark-mass behaviour of the pion in Sec. 2.2. * by requiring consistency between the trace anomaly and the Feynman-Hellmann theorem in Sec. 2.3.12 Footnote 12: Note that Eq. (21) must hold in the chirally broken phase irrespective of the fixed point interpretation. These arguments are largely independent and thus any of the three could have served as a starting point for the paper. Perhaps, the third point is the strongest as it only relies on the near fixed point behaviour. The important point is, though, that by assuming an infrared fixed point we were able to derive internally consistent results. This is no substitute for a proof. In fact there are at least the three possibilities shown in Fig. 1. 
The scenario is not realised in any gauge theory (left), it is realised in some area outside the conformal window (centre) or it is identical to the standard QCD-type theories (right). As a byproduct this led us to conjecture that \(\gamma_{*}=1\) marks the end of the conformal window. This last point is consistent with lattice Monte Carlo computations [15; 19; 40], in particular the dilaton-EFT fits in [74; 75], perturbative computations [76; 77], gap equations [78; 79; 80], walking technicolor phenomenology [18; 25; 81], holographic approaches [82; 83] and \(\mathcal{N}=1\) supersymmetry [20; 21; 22]. However, these works do not interpret the pion physics below the boundary by an infrared fixed point, which is the main point of our work. From a certain perspective our work is more closely related to the pre-QCD work [1; 2; 3; 4; 5; 6] or its revival a decade ago [72; 84]. The difference to these papers is that there is a definite statement about the scaling of the most important operators. Another way to look at the proposal is to notice that QCD in the deep infrared is described by the free field theory of pions and is thus scale invariant.13 This makes the fixed point interpretation look natural, and is indeed assumed in the context of the \(a\)-theorem, e.g. [85]. The \(\chi\)PT gauge-theory matching can be seen as infrared duality of weak and strong coupling theories, c.f. Sec. 2.1.3, which are often the motivation for an effective-field-theory programme. That these types of dualities are more fundamental, might be related to the Seiberg-dualities [20; 21; 22], which in turn gave new motivation to the fascinating idea of hidden local symmetry, e.g. [86; 87; 88]. There are other factors supporting the infrared-fixed-point picture. The Goldstone improvement [73], ameliorates the convergence of the integrals of the \(a\)-theorem for QCD-like theories [89; 90].14 Or dense nuclear interactions support a fixed point scenario [91; 92; 93]. The addition of the dilaton sector to be discussed in [12] will offer other ways to test the scenario. As mentioned earlier the dilaton candidate in QCD is the broad \(f_{0}(500)\) meson. Importantly, if the Higgs sector is to be replaced by a gauge theory then its dilaton can take on the role of a Higgs which is hard to distinguish from the Standard Model one. This has been appreciated since a long time within the gauge theory setting e.g. [94] and without concrete ultraviolet completion e.g. [95; 96] Our work strengthens this case considerably and identifies in \(S=\bar{q}q\) the presumably most relevant operator as its scaling dimension assumes \(\Delta_{\bar{q}q}=2\) as a consequence of \(\gamma_{*}=1\). A further advantage of gauge theories for a dilaton sector is that they can be explored with analytic tools and lattice Monte Carlo simulations serving as a laboratory to further ideas in a concrete setting. Footnote 14: A sufficiently fast vanishing \(T^{\rho}{}_{\rho}\) in the IR is a necessary condition for the integral formulae to converge. ###### Acknowledgements. RZ is supported by a CERN associateship and an STFC Consolidated Grant, ST/P0000630/1. 
I am grateful to Steve Abel, Tim Cohen, Gilberto Colangelo, Matthew McCullough, Poul Damgaard, Luigi Del Debbio, John Donoghue, John Ellis, Nick Evans, Max Hansen, Shoji Hashimoto, Andreas Juttner, Daniel Litim, Tom Melia, Kohtaroh Miura, Agostino Patella, Jeremie Quevillon, Mannque Rho, Francesco Sannino, Misha Shifman, Christopher Smith, Lewis Tunstall and Koichi Yamawaki for correspondence or discussions. ## Appendix A Conventions The Minkowski metric \(\eta_{\mu\nu}\) reads \(\text{diag}(1,-1,-1,-1)\). The Lagrangian of the gauge theory is given by \[\mathcal{L}=-\frac{1}{4}G^{2}+\sum_{q}\bar{q}(i\not{D}-m_{q})q\;, \tag{10}\] where \(G^{2}=G^{A}_{\mu\nu}G^{A\mu\nu}\) is the field strength tensor squared and \(A\) the adjoint index of the gauge group. The \(N_{f}\) quark flavours are assumed to be degenerate in mass. The beta function is defined by \(\beta=\frac{d}{d\ln\mu}g\) and the mass anomalous dimension is given by \(\gamma_{m}=-\frac{d}{d\ln\mu}\ln m_{q}\). Quantities at the FP are designated by a star, e.g. \(\gamma_{*}=\gamma_{m}|_{\mu=0}\) (2). QED is omitted: even though the massless photon is certainly an IR degree of freedom, it does not change the picture considerably as it is weakly coupled in the IR. The \(SU(N_{f})\) flavour symmetry generators \(T^{a}\) are normalised as \(T^{a}T^{b}=\frac{1}{2N_{f}}\delta^{ab}\mathbb{1}_{N_{f}}+\frac{1}{2}d^{abc}T^{c}+ \frac{i}{2}f^{abc}T^{c}\), \(\mathrm{Tr}[T^{a}T^{b}]=\frac{1}{2}\delta^{ab}\) and \(f/d^{abc}\) are the totally anti/symmetric tensors. ## Appendix B Conformal versus Scale Invariance Scale and conformal invariance are not distinguished in this work as it is widely believed that the former implies the latter for theories like QCD (and most non-exotic \(d=4\) theories), cf. Ref. [97] for a review. A scale invariant theory is one where \(T^{\rho}_{\ \rho}=\partial\cdot V\) such that \(J^{D}_{\mu}=x^{\nu}T_{\mu\nu}-V_{\mu}\) is conserved. Since the scaling dimension of the trace of the EMT is \(d\), the one of the virial current has to be \(d-1\), which is highly non-generic as it usually requires the protection of a symmetry. ## Appendix C Soft-pion Theorem Since the soft-pion theorem is important in the main text, we reproduce its form from the textbook [8] \[\langle\pi^{a}(q)\beta|\mathcal{O}(0)|\alpha\rangle=-\frac{i}{F_{\pi}}\langle \beta|[Q_{5}^{a},\mathcal{O}(0)]|\alpha\rangle+\lim_{q\to 0}iq\cdot R^{a}\;, \tag{10}\] where the square brackets denote the commutator. Above \(\alpha\) and \(\beta\) are other physical states and \(R^{a}\) is the so-called remainder \[R^{a}_{\mu}=-\frac{i}{F_{\pi}}\int d^{d}x\,e^{iq\cdot x}\langle\beta|TJ^{a}_{5 \mu}(x)\mathcal{O}(0)|\alpha\rangle\;, \tag{11}\] which vanishes unless there are intermediate states degenerate with either \(\alpha\) or \(\beta\).15 Eq. (10) is straightforward to derive from correlation functions using a dispersive representation. Footnote 15: We have checked that it vanishes in the cases at hand and will therefore not discuss it any further. A case where the remainder is relevant is the matrix element \(\langle N^{a}\pi^{b}(q)|J^{c}_{\mu}|N^{a\dagger}\rangle\). The \(|N^{a^{\prime}}\rangle\) is degenerate and one can use the Callan-Treiman relation, due to the chiral Ward identity, to infer \(\lim_{q\to 0}q^{\mu}\langle N^{a}|J^{b}_{5\mu}|N^{d}\rangle\neq 0\), implying the non-vanishing of the remainder. ### The GMOR-relation from double soft-pion theorem In Sec. 
2.3.3 it was concluded that the trace anomaly and the Feynman-Hellmann theorem imply \(\gamma_{*}=1\) but this relies in particular that the _prefactor_ in Eq. (20) is correct. This can be verified by making the link to the celebrated GMOR-relation [60] of QCD. The procedure is to apply the soft theorem, summarised above, twice to eliminate the pions. Applying it once results in \[m_{\pi}^{2}=\sum_{q}m_{q}\langle\pi^{a}|\bar{q}q|\pi^{a}\rangle=\frac{-m_{q}} {F_{\pi}}\langle 0|i[Q_{5}^{a},\bar{q}\,\mathbb{1}_{N_{f}}q]|\pi^{a}\rangle= \frac{2m_{q}}{F_{\pi}}\langle 0|P^{a}|\pi^{a}\rangle\;, \tag{12}\] where \(P^{a}=\bar{q}i\gamma_{5}T^{a}q\) as previously, and \(\sum_{q}\bar{q}q\to\bar{q}\,\mathbb{1}_{N_{f}}q\) as it is a more suitable notation to evaluate the commutator. The remainder (11) can be omitted since it is zero. This is not obvious when a dilaton is present as commented on in Sec. 3. Applying the soft theorem to (12) once more, using \(d^{abc}\langle\bar{q}T^{c}q\rangle=0\), one gets \[\langle 0|P^{b}|\pi^{a}\rangle=-\frac{1}{F_{\pi}}\langle 0|i[Q_{5}^{a},P^{b} ]|0\rangle=-\frac{1}{F_{\pi}}\langle\bar{q}q\rangle\delta^{ab}\;, \tag{13}\] which combines into \[m_{\pi}^{2}F_{\pi}^{2}=-2m_{q}\langle\bar{q}q\rangle\;, \tag{102}\] the GMOR-relation [8, 9, 10, 60]. This completes the task of this appendix.
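As a final cross-check, the GMOR-relation can be recast as \[m_{\pi}^{2}=-\frac{2m_{q}\langle\bar{q}q\rangle}{F_{\pi}^{2}}\propto m_{q}\;,\] which coincides with the hyperscaling form \(m_{\pi}^{2}\propto m_{q}^{2/(1+\gamma_{*})}\) of Eq. (10) precisely at \(\gamma_{*}=1\), in accordance with Sec. 2.2.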
2307.09748
Watch out Venomous Snake Species: A Solution to SnakeCLEF2023
The SnakeCLEF2023 competition aims at the development of advanced algorithms for snake species identification through the analysis of images and accompanying metadata. This paper presents a method that leverages both images and metadata. Modern CNN models and strong data augmentation are utilized to learn better representations of images. To relieve the challenge of long-tailed distribution, seesaw loss is utilized in our method. We also design a light model to calculate prior probabilities using metadata features extracted from CLIP in the post-processing stage. Besides, we attach more importance to venomous species by assigning venomous species labels to some examples that the model is uncertain about. Our method achieves a 91.31% score on the final metric, which combines F1 and other metrics, on the private leaderboard, placing 1st among the participants. The code is available at https://github.com/xiaoxsparraw/CLEF2023.
Feiran Hu, Peng Wang, Yangyang Li, Chenlong Duan, Zijian Zhu, Fei Wang, Faen Zhang, Yong Li, Xiu-Shen Wei
2023-07-19T04:59:58Z
http://arxiv.org/abs/2307.09748v1
# Watch out Venomous Snake Species: A Solution to SnakeCLEF2023 ###### Abstract The SnakeCLEF2023 competition aims at the development of advanced algorithms for snake species identification through the analysis of images and accompanying metadata. This paper presents a method that leverages both images and metadata. Modern CNN models and strong data augmentation are utilized to learn better representations of images. To relieve the challenge of long-tailed distribution, seesaw loss [1] is utilized in our method. We also design a light model to calculate prior probabilities using metadata features extracted from CLIP [2] in the post-processing stage. Besides, we attach more importance to venomous species by assigning venomous species labels to some examples that the model is uncertain about. Our method achieves a 91.31% score on the final metric, which combines F1 and other metrics, on the private leaderboard, placing 1st among the participants. The code is available at [https://github.com/xiaoxsparraw/CLEF2023](https://github.com/xiaoxsparraw/CLEF2023). Snake Species Identification, Fine-grained image recognition, Long-tailed, Metadata, SnakeCLEF ## 1 Introduction Fine-grained visual categorization is a well-established and pivotal challenge within the fields of computer vision and pattern recognition, serving as the cornerstone for a diverse array of real-world applications [3]. The SnakeCLEF2023 competition, co-hosted as an integral part of the LifeCLEF2023 lab within the CLEF2023 conference and the FGVC10 workshop in conjunction with the CVPR2023 conference, is geared towards advancing the development of a robust algorithm for snake species identification from images and metadata. This objective holds profound significance in the realm of biodiversity conservation and constitutes a crucial facet of human health preservation. In this paper, we introduce a method that addresses the recognition of snake species by leveraging both metadata and images. ConvNeXt-v2 [4] and CLIP [2] are used to extract image features and metadata features separately, and the image features and text features are concatenated to form the input of an MLP classifier, thus yielding better representations of examples and better recognition results. Seesaw loss [1] is utilized in our method, thereby alleviating the long-tailed distribution problem. Notably, our proposed method takes into careful consideration the critical real-world need to distinguish venomous and harmless snake species by using the Real-World Weighted Cross-Entropy (RWWCE) loss [5] and post-processing, resulting in performance surpassing that of other solutions presented in this year's competition. Experiments and competition results show that our method is effective in the snake species recognition task. The subsequent sections of this paper provide a comprehensive overview of the key aspects. Section 2 introduces the competition challenges and datasets, accompanied by an examination of the evaluation metric utilized. Section 3 describes our proposed methodologies, offering a comprehensive and detailed introduction to the techniques. Section 4 presents the implementation details, alongside a comprehensive analysis of the principal outcomes achieved. Finally, Section 5 concludes this paper by summarizing the key findings and offering future research directions. ## 2 Competition Description Understanding datasets and metrics is an essential requirement for engaging in a machine learning competition. 
Within this section, we aim to introduce our comprehension of the datasets and provide overview of the evaluation metrics employed by the competition organizers. ### Challenges of the Competition Past iterations of this competition have witnessed remarkable accomplishments by machine learning models [6, 7, 8, 9, 10, 11]. To further enhance the competition's practical relevance and address the exigencies faced by developers, scientists, users, and communities, such as addressing post-snakebite incidents, the organizers have imposed more stringent constraints. The ensuing challenges of this year's competition can be summarized as follows: * Fine-grained image recognition: The domain of fine-grained image analysis has long posed a challenging problem within the FGVC workshop, deserving further investigation and study. * Utilization of metadata: The incorporation of metadata, particularly pertaining to the geographical distribution of snake species, plays a vital role in their classification. Such metadata is commonly employed by individuals to identify snakes in their daily lives. Hence, utilization of location metadata holds significance and needs careful consideration. * Long-tailed distribution: Long-tailed distributions are common in real-world scenarios, and the distribution of snake species is no exception. * Identification of venomous and harmless species: The distinction between venomous and harmless snake species is meaningful, as venomous snake bites lead to large number of death each year. Consequently, leveraging deep learning methodologies to address this problem is of paramount urgency. * Model size limitation: A strict limitation has been imposed on the model size, constraining it to a maximum of 1GB. ### Dataset The organizers provide a dataset, consisting of 103,404 recorded snake observations, supplemented by 182,261 high-resolution images. These observations encompass a diverse range of 1,784 distinct snake species and have been documented across 214 geographically varied regions. It is worth to note that the provided dataset is in a heavily long-tailed distribution, as shown in Fig. 1. In this distribution, the most frequently encountered species have 1,262 observations consists of 2,079 accompanying images. However, the least frequently encountered species is captured by a mere 3 observations, showing its exceptional rarity within the dataset. ### Evaluation Metric In addition to the conventional evaluation metrics of Accuracy (Acc) and Mean F1-Score, this year's competition incorporates a novel evaluation metric, denoted as "public_score_track1" on the leaderboard. This metric combines the F1-Score with an assessment of the confusion errors related to venomous species. It is calculated as a weighted average, incorporating both the macro F1-score and the weighted accuracy of various types of confusions: \[M=\frac{w_{1}F_{1}+w_{2}\left(100-P_{1}\right)+w_{3}\left(100-P_{2}\right)+w_ {4}\left(100-P_{3}\right)+w_{5}\left(100-P_{4}\right)}{\sum_{i}^{5}w_{i}}\,, \tag{1}\] where \(w_{1}=1.0,w_{2}=1.0,w_{3}=2.0,w_{4}=5.0,w_{5}=2.0\) are the weights of individual terms. The metric incorporates several percentages, namely \(F_{1}\) representing the macro F1-score, \(P_{1}\) denoting the percentage of harmless species misclassified as another harmless species, Figure 1: Long-tailed distribution of the SnakeCLEF2023 training dataset. The blue color means head classes, which means most images in the dataset belong to these classes. 
The orange color means tail classes, which means most classes in the dataset are tail classes. \(P_{2}\) indicating the percentage of harmless species misclassified as a venomous species, \(P_{3}\) reflecting the percentage of venomous species misclassified as another harmless species, and \(P_{4}\) representing the percentage of venomous species misclassified as another venomous species. This metric is bounded below by 0% and above by 100%. The lower bound is attained when all species are misclassified, including misclassification of harmless species as venomous and vice versa. Conversely, if the F1-score reaches 100%, indicating correct classification of all species, each \(P_{i}\) value must be zero, leading to an overall score of 100%. ## 3 Method In this section, we shall introduce the methodologies employed to address the task of snake species classification. ### Data Preprocessing Data preprocessing plays a crucial role in machine learning, as it influences not only the final performance but also the feasibility of problem resolution. Upon obtaining the dataset provided by the competition organizers, several issues emerged. For instance, certain images listed in the metadata CSV file were found to be nonexistent within the corresponding image folders. To address this, we generated a new metadata CSV file by eliminating the affected rows from the original file. Additionally, a subset of images within the dataset was found to be corrupted, potentially due to network transmission or other factors. To mitigate this concern, we utilized OpenCV to read the problematic images and subsequently re-wrote them to the file system, thereby solving the corruption issue. The SnakeCLEF dataset includes valuable metadata pertaining to the observation locations. Leveraging this location information is of great significance, as certain snake species inhabit geographically confined areas. However, the metadata presents the location in the form of country or region codes, which cannot be directly utilized as inputs for convolutional neural network (CNN) or Vision Transformer (ViT) [12]. To address this challenge, we employ CLIP [2] to extract location features without engaging in fine-tuning. Subsequently, Principal Component Analysis (PCA) [13] is employed to reduce the dimension of the resulting feature vectors. Data augmentation serves as a key technique in computer vision tasks. Within our methodology, we leverage fundamental image augmentation methods from Albumentations [14]. These methods encompass RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, RandomBrightnessContrast, PiecewiseAffine, HueSaturationValue, OpticalDistortion, ElasticTransform, Cutout, and GridDistortion. Furthermore, we incorporate data mixing augmentation techniques, such as Mixup [15], CutMix [16], TokenMix [17], and RandomMix [18], during the course of the competition. These data augmentation methods provide strong regularization to models by softening both images and labels, avoiding the model overfitting in training dataset. ### Model Throughout the competition, we explored various models, including both classical and state-of-the-art architectures, such as Convolutional Neural Networks and Vision Transformers. Models employed during the competition include ResNet [19], VOLO [20], ConvNeXt [21], BEiT-v2 [22], EVA [23] and ConvNeXt-v2 [4]. The implementation of these models was facilitated using the timm [24] library. 
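As a minimal illustration of this setup (not the authors' exact configuration — the model identifier, pretrained-weight tag, and input resolution below are assumptions and depend on the installed timm version), such a backbone can be instantiated and resized to the 1,784 SnakeCLEF species as follows:

```python
import timm
import torch

# Hypothetical sketch: a ConvNeXt-v2 backbone from timm with its classification
# head resized to the 1,784 SnakeCLEF2023 species. The model-name string is an
# assumption; available identifiers vary across timm versions.
model = timm.create_model(
    "convnextv2_large.fcmae_ft_in22k_in1k_384",
    pretrained=True,
    num_classes=1784,
)
model.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 384, 384)   # one RGB image
    logits = model(dummy)                 # -> shape (1, 1784)
print(logits.shape)
```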
In light of the imposed limitations on model parameters and the consideration of the model representation capabilities, we selected ConvNeXt-v2 [4] as the backbone architecture in our final method. However, relying solely on the visual backbone is insufficient for effectively addressing the task at hand. Given the availability of metadata in the competition and the inherent challenges associated with fine-grained image classification, it becomes necessary to modify the architecture of the vision model to achieve superior performance. The architectural design of the model employed in our final submission is illustrated in Fig. 2. Following the completion of the third stage of ConvNeXt-v2 [4], the intermediate-level feature map is combined with the high-level image features after the final stage, along with the metadata features. This concatenation process yields a comprehensive representation that captures both the image and metadata information. To mitigate overfitting, we have incorporated MaxPooling [25], BatchNorm [26], and Dropout [27] techniques into our methodology. Once the comprehensive representation is obtained, a classifier comprising two linear layers and ReLU [28] activation functions follows and generates classification results. ### Optimization Procedure Addressing long-tailed recognition is another challenge encountered in the competition. To tackle this issue, we extensively explored various techniques implemented in BagofTricks-LT [29]. In our final submission, we incorporated the seesaw loss [1] as a key component. The seesaw loss formulation can be expressed as follows: Figure 2: Architecture of our model. Take ConvNeXt-v2 [4] as the backbone, which is made up of 4 stages, feature vector extracted from metadata (\(v_{1}\)), original feature vector (\(v_{2}\)) and feature vector from middle stage of the backbone (\(v_{3}\)) are concatenated to get the final feature vector \(v\), a MLP classifier is followed to get the final classification results. \[\begin{split} L_{\text{seesaw}}\left(\mathbf{z}\right)=-\sum_{i=1}^{C }y_{i}\log\left(\widehat{\sigma}_{i}\right)\,,\\ \text{with }\widehat{\sigma}_{i}=\frac{e^{z_{i}}}{\sum_{j\neq i}^{C }\mathcal{S}_{ij}e^{z_{j}}+e^{z_{i}}}\,,\end{split} \tag{2}\] where \(\mathbf{z}\) denotes the output obtained from the fully connected layer, \(C\) represents the total number of classes, and \(y_{i}\) corresponds to the one-hot label of the image. The hyper-parameters \(\mathcal{S}_{ij}\) are carefully set based on the distribution characteristics inherent in the dataset. Distinguishing between venomous and non-venomous snake species and the consequential assignment of varying costs to different classification errors are of great importance in this year's challenge, as demonstrated by Eq. 1. In accordance with these requirements, loss function that effectively models the real-world costs associated with mislabeling [5] is utilized by us. To align with this objective, we incorporate the Real-World Weighted Cross-Entropy (RWWCE) loss function [5] during the final three epochs of training, employing a reduced learning rate. In addition to the choice of loss functions, the selection of an optimizer and an appropriate learning rate decay strategy are important in the training of our models. For optimization, we adopt the AdamW optimizer [30]. To enhance convergence speed and overall performance, we implement cosine learning rate decay [31] coupled with warmup techniques during the training process. 
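To make Eq. (2) concrete, the following is a simplified, self-contained sketch of the seesaw loss (the factors \(\mathcal{S}_{ij}\) are assumed to be precomputed from the class-frequency statistics of the dataset; the original formulation of [1] additionally updates them online during training, which is omitted here):

```python
import torch

def seesaw_loss(logits: torch.Tensor, targets: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Simplified sketch of the seesaw loss of Eq. (2).

    logits  : (B, C) raw classifier outputs z.
    targets : (B,)  integer ground-truth labels.
    S       : (C, C) seesaw factors S_ij, assumed precomputed here.
    """
    B = logits.shape[0]
    idx = torch.arange(B, device=logits.device)
    # Row of S belonging to each sample's ground-truth class i, with S_ii = 1
    # so that the e^{z_i} term enters the denominator with unit weight.
    S_i = S[targets].clone()
    S_i[idx, targets] = 1.0
    # sigma_hat_i = exp(z_i) / (sum_{j != i} S_ij exp(z_j) + exp(z_i)), stabilised.
    z = logits - logits.max(dim=1, keepdim=True).values
    denom = (S_i * z.exp()).sum(dim=1)
    num = z.exp()[idx, targets]
    return -(num / denom).log().mean()

# Example usage (random data, 4 classes):
# loss = seesaw_loss(torch.randn(8, 4), torch.randint(0, 4, (8,)), torch.ones(4, 4))
```

Note that with all \(\mathcal{S}_{ij}=1\) this reduces to the standard cross-entropy loss, which makes the effect of the seesaw factors on rare classes easy to isolate.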
These strategies collectively facilitate more effective and efficient model convergence. ### Post-processing In this year's challenge, the task requires the solution to accurately identify the venomous nature of snakes, particularly focusing on distinguishing the venomous species, with the limited model capacity. It is challenging but fortunately, the organizers provided a metadata repository, with a particular focus on geographical information. In practical contexts, where reliance solely on visual cues may prove insufficient performance on fine-grained classification, the supplementation of geographical details assumes a crucial role in assisting human experts in making judgment. Thus, the integration of geographical information within the metadata exhibits the potential to enhance the decision-making prowess of classification models. Inspired by [32], assuming the above-mentioned trained model as \(f\), we developed a simple prior model denoted as \(g\). This prior model is simple but efficiently, composed of three fully connected layers with non-linear activation function and employed dropout regularization. In the training process of this light model, we adopt the AdamW [30] optimizer and performed balanced sampling on the training data, to mitigate the impact of the long-tail distribution in the dataset. The objective of this training process was to minimize the following loss function: \[\begin{split}\mathcal{L}_{loc}(\mathbf{x},\mathbf{r},\mathbf{O}, y)=&\lambda\log\left(s\left(g(\mathbf{x})\mathbf{O}_{:,y}\right) \right)+\sum_{\begin{subarray}{c}i=1\\ i\neq y\end{subarray}}^{C}\log\left(1-s\left(g(\mathbf{x})\mathbf{O}_{:,i} \right)\right)+\\ &\sum_{i=1}^{C}\log\left(1-s\left(g(\mathbf{r})\mathbf{O}_{:,i} \right)\right)\,,\end{split} \tag{3}\] where the metadata features extracted from CLIP is denoted as \(\mathbf{x}\). \(\mathbf{O}\) is the category embedding matrix, where each column is the prototype of different category, pre-computed by our trained model \(f\), e.g., ConvNeXt-v2 [4]. Furthermore, \(\mathbf{r}\) signifies a uniformly random location data point, and \(\lambda\) serves as a hyper-parameter for weighting positive observations. It is important to note that if a category \(y\) has been observed at the spatial location \(\mathbf{x}\) within the training set, the value of \(s\left(g(\mathbf{x})\mathbf{O}_{:,y}\right)\) should approximate 1. Conversely, if the category has not been observed, the value should approximate 0. During the inference stage, our prior model efficiently calculates the prior class embeddings denoted as \(\mathbf{P}\). Utilizing the following equation: \[\mathbf{S}^{\prime}=Softmax(\mathbf{P})\odot\mathbf{S}, \tag{4}\] where \(\mathbf{S}\) is the prediction score computed by \(f\). We derive the final class scores \(\mathbf{S}^{\prime}\) by computing the joint probability of predictions from the two models \(f\) and \(g\). In real-world scenarios, misclassifying a non-venomous snake as venomous carries significant consequences and is deemed unacceptable. To address this concern, we implement a robust post-processing approach. When the predicted confidence of an image \(\mathbf{x}\) is relatively low, we analyze its top-5 predictions. If any of these predictions include a venomous class, we classify the image as venomous. This post-processing technique represents a well-considered compromise between precision and recall. Notably, this approach enable us to get the 1st place in the private leaderboard. 
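A compact sketch of this two-stage inference procedure is given below. The confidence threshold and the exact notion of a "low-confidence" prediction are assumptions, as they are not specified above, and the per-species venomous flags are assumed to come from the competition metadata:

```python
import torch

def combine_and_override(scores: torch.Tensor,
                         prior: torch.Tensor,
                         is_venomous: torch.Tensor,
                         threshold: float = 0.5) -> torch.Tensor:
    """Sketch of the inference-time post-processing described above.

    scores      : (B, C) class scores S from the image model f.
    prior       : (B, C) location-prior scores P from the light model g.
    is_venomous : (C,)  boolean flags per species (assumed from metadata).
    threshold   : assumed confidence cut-off for applying the top-5 override.
    """
    # Eq. (4): element-wise product of the softmaxed prior and the image scores.
    combined = torch.softmax(prior, dim=1) * scores

    preds = combined.argmax(dim=1)
    conf = combined.max(dim=1).values
    top5 = combined.topk(5, dim=1).indices  # (B, 5)

    # Low-confidence observations: if any top-5 candidate is venomous,
    # return the highest-ranked venomous candidate instead.
    for b in (conf < threshold).nonzero().flatten().tolist():
        venomous = [c for c in top5[b].tolist() if bool(is_venomous[c])]
        if venomous:
            preds[b] = venomous[0]
    return preds
```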
We firmly believe that this strategy possesses considerable advantages for practical applications. ## 4 Experiments In this section, we will introduce our implementation details and main results. ### Experiment Settings The proposed methodology is developed utilizing the PyTorch framework [33]. All models employed in our approach have been pre-trained on the ImageNet dataset [34], readily available within the timm library [24]. Fine-tuning of these models was conducted across 4 Nvidia RTX3090 GPUs. The initial learning rate was set to \(2\times 10^{-5}\), and the total number of training epochs was set to 15, with the first epoch dedicated to warm-up, employing a learning rate of \(2\times 10^{-7}\). To optimize model training, we utilized the AdamW optimizer [30] in conjunction with a cosine learning rate scheduler [31], setting the weight decay to \(2\times 10^{-5}\). During inference on the test dataset, we adopted test time augmentation. Furthermore, considering that an observation may consist of multiple images, we adopted a simple averaging approach to obtain a singular prediction for each observation. ### Main Results In this section, we present our primary findings attained throughout the challenge, as illustrated in Tab. 1. The "Metric" column within the table corresponds to the public track1 metric featured on the leaderboard. As indicated by Tab. 1, the model parameters and image resolution hold crucial significance in image recognition tasks, aligning with conventional expectations. An increase in model parameters and image resolution corresponds to improvement in the public leaderboard score. Furthermore, data augmentation plays as a key factor in enhancing the generalization capacity of models. Notably, CutMix [16] outperforms alternative data mixing augmentation techniques, such as RandomMix [18], based on our experimental observations. Metadata plays a pivotal role in the recognition of snake species, enabling models to acquire enhanced representations of observations and thereby achieve superior classification results. In our experiments, the utilization of metadata facilitated the acquisition of enriched contextual information, leading to improved model performance. Additionally, the incorporation of the Seesaw loss [1] demonstrated notable efficacy in mitigating the challenges posed by long-tailed distributions, surpassing the conventional CrossEntropy loss. Moreover, the integration of middle-level features proved effective in alleviating the complexities associated with fine-grained image recognition, enabling more precise discrimination between similar snake species. Given that the final evaluation metric takes into account the demands of real-world applications and imposes greater penalties for misclassifying a venomous snake species as harmless compared to misclassifying a harmless species as venomous, we place significant emphasis on post-processing techniques. Specifically, when the model exhibits uncertainty in its predictions for a particular observation, we adopt a cautious approach and classify it as a venomous snake species based on the top-5 predictions. This post-processing strategy has proven highly advantageous, leading to substantial improvements in both the public leaderboard and the private test data performance, as evidenced by Tab. 1. 
\begin{table} \begin{tabular}{c c c c} \hline \hline Backbone & Resolution & Metric (\%) & Comments \\ \hline ResNet50 [19] & \(224\times 224\) & 72.22 & baseline \\ BEiT-v2-L [22] & \(224\times 224\) & 82.59 & stronger backbone \\ BEiT-L [35] & \(384\times 384\) & 88.74 & cutmix \\ EVA-L [23] & \(336\times 336\) & 86.82 & cutmix \\ Swin-v2-L [36] & \(384\times 384\) & 88.19 & cutmix \\ VOLO [20] & \(448\times 448\) & 88.50 & cutmix \\ ConvNeXt-v2-L [4] & \(384\times 384\) & 88.98 & seesawloss + randommix \\ ConvNeXt-v2-L [4] & \(384\times 384\) & 89.47 & seesawloss + cutmix \\ ConvNeXt-v2-L [4] & \(512\times 512\) & 90.86 & seesawloss + cutmix + metadata \\ ConvNeXt-v2-L [4] & \(512\times 512\) & 91.98 & seesawloss + cutmix + middle-level feature \\ ConvNeXt-v2-L [4] & \(512\times 512\) & 93.65 & seesawloss + cutmix + metadata \\ \hline \hline \end{tabular} \end{table} Table 1: Main results of SnakeCLEF. ## 5 Conclusion Fine-grained visual analysis holds great practical significance, particularly in accurately discerning the toxicity of snakes within the domain of snake sub-classification. This paper focuses on addressing the snake classification problem by harnessing the valuable metadata present in the dataset for posterior filtering. Additionally, a robust post-processing technique is employed to facilitate toxicity identification. These approaches have culminated in our noteworthy achievement of securing the first-place position in the challenge, attaining an impressive overall evaluation score of 91.31% on the private leaderboard.
2301.12780
Equivariant Architectures for Learning in Deep Weight Spaces
Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). As a first step towards this goal, we present here a novel network architecture for learning in deep weight spaces. It takes as input a concatenation of weights and biases of a pre-trained MLP and processes it using a composition of layers that are equivariant to the natural permutation symmetry of the MLP's weights: Changing the order of neurons in intermediate layers of the MLP does not affect the function it represents. We provide a full characterization of all affine equivariant and invariant layers for these symmetries and show how these layers can be implemented using three basic operations: pooling, broadcasting, and fully connected layers applied to the input in an appropriate manner. We demonstrate the effectiveness of our architecture and its advantages over natural baselines in a variety of learning tasks.
Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron
2023-01-30T10:50:33Z
http://arxiv.org/abs/2301.12780v2
# Equivariant Architectures for Learning in Deep Weight Spaces ###### Abstract Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction. Unfortunately, the unique symmetry structure of deep weight spaces makes this design very challenging. If successful, such architectures would be capable of performing a wide range of intriguing tasks, from adapting a pre-trained network to a new domain to editing objects represented as functions (INRs or NeRFs). As a first step towards this goal, we present here a novel network architecture for learning in deep weight spaces. It takes as input a concatenation of weights and biases of a pre-trained MLP and processes it using a composition of layers that are equivariant to the natural permutation symmetry of the MLP's weights: Changing the order of neurons in intermediate layers of the MLP does not affect the function it represents. We provide a full characterization of all affine equivariant and invariant layers for these symmetries and show how these layers can be implemented using three basic operations: pooling, broadcasting, and fully connected layers applied to the input in an appropriate manner. We demonstrate the effectiveness of our architecture and its advantages over natural baselines in a variety of learning tasks. ## 1 Introduction Deep neural networks are the primary model for learning functions from data, from classification to generation. Recently, they also became a primary model for representing data samples, for example, INRs for representing images, 3D objects, or scenes (Park et al., 2019; Sitzmann et al., 2020; Tancik et al., 2020; Mildenhall et al., 2021). In these two cases, representing functions or data, it is often desirable to operate directly over the weights of a pre-trained deep model. For instance, given a trained deep network that performs visual object recognition, one may want to change its weights so it matches a new data distribution. In another example, given a dataset of INRs or NeRFs representing 3D objects, we may wish to analyze the shape space spanned by them by directly applying machine learning to their raw network representation, namely their weights and biases. In this paper, we seek a principled approach for learning over neural weight spaces. We ask: _"What neural architectures can effectively learn and process neural models that are represented as sequences of weights and biases?"_ The study of learning in neural weight spaces is still in its infancy. Few pioneering studies (Eilertsen et al., 2020; Unterthiner et al., 2020; Schurholt et al., 2021) used generic architectures such as fully connected networks and attention mechanisms to predict model accuracy or hyperparameters. Even more recently, three papers have partially addressed the question in the context of INRs (Dupont et al., 2022; Xu et al., 2022; Anonymous, 2023). Unfortunately, it is not clear if and how these approaches could be applied to other types of neural networks because they make strong assumptions about the dimension of the input domain or the training procedure. It remains an open problem to characterize the principles for designing deep architectures that can process the weights of other deep models. Traditional deep models have been designed to process data instances with well-understood structures like fixed-sized tensors or sequences. 
In contrast, the weights of deep models live in spaces with a very different structure, which is still not fully understood (Hecht-Nielsen, 1990; Chen et al., 1993; Brea et al., 2019; Entezari et al., 2021). **Our approach.** This paper takes a step forward toward learning in deep-weight spaces by developing architectures that account for the structure of these spaces in a principled manner. More concretely, we address learning in spaces that represent a concatenation of weight (and bias) matrices of Multilayer Perceptrons (MLPs). Motivated by the recent surge of studies that incorporate symmetry into neural architectures (Cohen and Welling, 2016; Zaheer et al., 2017; Ravanbakhsh et al., 2017; Kondor and Trivedi, 2018; Maron et al., 2019; Esteves et al., 2018), we analyze the symmetry structure of neural weight spaces. Then we design architectures that are equivariant to the natural symmetries of the data. Specifically, we focus on the main type of symmetry found in the weights of MLPs; We follow a key observation, made more than 30 years ago (Hecht-Nielsen, 1990) which states that for any two consecutive internal layers of an MLP, simultaneously permuting the rows of the first layer and the columns of the second layer generates a new sequence of weight matrices that represent exactly the same underlying function. To illustrate this, consider a two-layer MLP of the form \(W_{2}\sigma(W_{1}x)\). Permuting the rows and columns of the weight matrices using a permutation matrix \(P\) in the following way: \(W_{1}\mapsto P^{T}W_{1},\bar{W}_{2}\mapsto W_{2}P\) will, in general, result in different weight matrices that represent _exactly_ the same function. More generally, any sequence of weight matrices and bias vectors can be transformed by applying permutations to their rows and columns in a similar way, while representing the same function, see Figure 1. After characterizing the symmetries of deep weight spaces, we define the architecture of Deep Weight-Space Networks (_DWSNets_) - deep networks that process other deep networks. As with many other equivariant architectures, e.g., Zaheer et al. (2017); Hartford et al. (2018); Maron et al. (2019), DWSNets are composed of simple affine equivariant layers interleaved with pointwise non-linearities. A key contribution of this work is that it provides the first characterization of the space of affine equivariant layers for the symmetries of weight spaces discussed above. Interestingly, our characterization relies on the fact that the weight space is a direct sum of group representations, and reveals that our linear equivariant layers, which we call _DWS-layers_, have a block matrix structure. Each block maps between specific weight and bias spaces of the input network. Furthermore, we show that these blocks can be implemented using three basic operations: broadcasting, pooling, or standard dense linear layers. This allows us to implement DWS-layers efficiently, significantly reducing the number of parameters compared to fully connected networks. Finally, we analyze the expressive power of DWS networks and prove that this architecture is capable of approximating a forward pass of an input network. Our findings provide a basis for further exploration of these networks and their capabilities. We demonstrate this by proving that DWS networks can approximate certain functions defined on the space of functions represented by the input MLPs. 
In addition, while this work focuses on MLPs, we discuss other types of input architectures, such as convolutional networks or transformers, as possible extensions. We demonstrate the efficacy of DWSNets on two types of tasks: (1) processing INRs; and (2) processing standard neural networks. The results indicate that our architecture performs significantly better than natural baselines based on data augmentation and weight-space alignment. **Contributions.** This paper makes the following contributions: (1) It introduces a symmetry-based approach for designing neural architectures that operate in deep weight spaces; (2) It provides the first characterization of the space of affine equivariant layers between deep weight spaces; (3) It analyzes aspects of the expressive power of the proposed architecture; and (4) It demonstrates the benefits of the approach in a series of applications from INR classification to the adaptation of networks to new domains, showing advantages over natural and recent baselines. ## 2 Previous work In recent years, several studies have suggested operating directly on the parameters of NNs. In both Eilertsen et al. (2020) and Unterthiner et al. (2020), the weights of trained NNs were used to predict properties of networks: Eilertsen et al. (2020) suggested predicting the hyper-parameters used to train the network, and Unterthiner et al. (2020) proposed predicting the network's generalization capabilities. Both of these studies use standard NNs on the flattened weights or on some statistics of them. Dupont et al. (2022) suggested applying deep learning tasks, such as generative modeling, to a dataset of INRs fitted from the original data. To obtain useful representations of the data, the authors used meta-learning techniques to learn low-dimensional vectors, termed modulations, which were used in normalization layers. Unlike this approach, our method can work on any MLP and is agnostic to the way it was trained. Schurholt et al. (2021) suggested methods to learn representations of NNs using self-supervised methods, and Schurholt et al. (2022) leveraged this approach for NN model generation. Xu et al. (2022) proposed to process neural networks by applying a neural network to a concatenation of their high-order spatial derivatives. Peebles et al. (2022) proposed a generative approach to output a target network based on an initial network and a target metric such as the loss value or return. Finally, in a recent submission, Anonymous (2023) suggested a methodology for processing INRs using a set-like architecture (Zaheer et al., 2017). See Appendix A for more related work. ## 3 Preliminaries **Notation.** We use \([n]=\{1,...,n\}\) and \([k,m]=\{k,k+1,\dots,m\}\). We use \(\Pi_{d}\) for the set of \(d\times d\) permutation matrices (bi-stochastic matrices with entries in \(\{0,1\}\)). \(S_{d}\) is the symmetric group on \(d\) elements. \(\mathbf{1}\) is an all-ones vector. **Group representations and equivariance.** Given a vector space \(\mathcal{V}\) and a group \(G\), a _representation_ is a group homomorphism \(\rho\) that maps a group element \(g\in G\) to an invertible matrix \(\rho(g)\in GL(\mathcal{V})\). Given two vector spaces \(\mathcal{V},\mathcal{W}\) and corresponding representations \(\rho_{1},\rho_{2}\), a function \(L:\mathcal{V}\rightarrow\mathcal{W}\) is called _equivariant_ (or a \(G\)-linear map) if it commutes with the group action, namely \(L(\rho_{1}(g)v)=\rho_{2}(g)L(v)\) for all \(v\in\mathcal{V},g\in G\).
When \(\rho_{2}\) is trivial, namely the output is the same for all input transformations, \(L\) is called an _invariant_ function. A _sub-representation_ of a representation \((\mathcal{V},\rho)\) is a subspace \(\mathcal{W}\subseteq\mathcal{V}\) for which \(\rho(g)w\in\mathcal{W}\) for all \(g\in G,w\in\mathcal{W}\). A direct sum of representations \((\mathcal{W},\rho^{\prime}),(\mathcal{U},\rho^{\prime\prime})\) is a new group representation \((\mathcal{V},\rho)\) where \(\mathcal{V}=\mathcal{W}\oplus\mathcal{U}\) and \(\rho(g)((w,u))=(\rho^{\prime}(g)w,\rho^{\prime\prime}(g)u)\). A permutation representation of a permutation group \(G\leq S_{n}\) maps a permutation \(\tau\) to its corresponding permutation matrix. For an introduction to group representations, refer to Fulton & Harris (2013). **MultiLayer Perceptrons.** MLPs are sequential neural networks with fully connected layers. Formally, an \(M\)-layer MLP \(f\) is a function of the following form: \[f(x)=x_{M},\quad x_{m+1}=\sigma(W_{m+1}x_{m}+b_{m+1}),\quad x_{0}=x \tag{1}\] Here, \(W_{m}\in\mathbb{R}^{d_{m}\times d_{m-1}}\) and \(b_{m}\in\mathbb{R}^{d_{m}}\), \([W_{m},b_{m}]_{m\in[M]}\) is a concatenation of all the weight matrices and bias vectors, and \(\sigma\) is a pointwise non-linearity like a ReLU or a sigmoid. Note that \(d_{m}\) is the dimension of \(x_{m}\), \(m=0,\dots,M\). ## 4 Permutation symmetries of neural networks In a fundamental work, Hecht-Nielsen (1990) observed that MLPs have permutation symmetries: swapping the order of the activations in an intermediate layer does not change the underlying function. Motivated by previous works (Hecht-Nielsen, 1990; Brea et al., 2019; Ainsworth et al., 2022), we define the _weight-space_ of an \(M\)-layer MLP as: \[\mathcal{V}=\bigoplus_{m=1}^{M}\left(\mathbb{R}^{d_{m}\times d_{m-1}}\oplus\mathbb{R}^{d_{m}}\right):=\bigoplus_{m=1}^{M}\left(\mathcal{W}_{m}\oplus\mathcal{B}_{m}\right), \tag{2}\] where \(\mathcal{W}_{m}:=\mathbb{R}^{d_{m}\times d_{m-1}}\) and \(\mathcal{B}_{m}:=\mathbb{R}^{d_{m}}\). Each summand in the direct sum corresponds to a weight matrix and bias vector for a specific layer, i.e., \(W_{m}\in\mathcal{W}_{m},b_{m}\in\mathcal{B}_{m}\). Figure 1: Symmetries of deep weight spaces, shown here on a 3-layer MLP. For any pointwise nonlinearity \(\sigma\), the permutations \(\tau_{1},\tau_{2}\) can be applied to rows and columns of successive weight matrices without changing the function represented by the network. We define the symmetry group of the weight space to be the direct product of symmetric groups for all the intermediate dimensions in the MLP, \(m\in[1,M-1]\): \[G:=S_{d_{1}}\times\cdots\times S_{d_{M-1}}. \tag{3}\] Let \(v\in\mathcal{V}\), \(v=[W_{m},b_{m}]_{m\in[M]}\); then a group element \(g=(\tau_{1},\ldots,\tau_{M-1})\) acts on \(v\) as follows*: \[\rho(g)v:=[W^{\prime}_{m},b^{\prime}_{m}]_{m\in[M]}, \tag{4a}\] \[W^{\prime}_{1}=P^{T}_{\tau_{1}}W_{1},\;b^{\prime}_{1}=P^{T}_{\tau_{1}}b_{1},\] (4b) \[W^{\prime}_{m}=P^{T}_{\tau_{m}}W_{m}P_{\tau_{m-1}},\;b^{\prime}_{m}=P^{T}_{\tau_{m}}b_{m},\;m\in[2,M-1]\] (4c) \[W^{\prime}_{M}=W_{M}P_{\tau_{M-1}},\;b^{\prime}_{M}=b_{M}. \tag{4d}\] Here, \(P_{\tau_{m}}\in\Pi_{d_{m}}\) is the permutation matrix of \(\tau_{m}\in S_{d_{m}}\). Footnote *: We note that a similar formulation first appeared in Ainsworth et al. (2022). Figure 1 illustrates these symmetries for an MLP with three layers.
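As a quick, self-contained numerical check of Equation (4) (our own illustration, not code from the paper), the following NumPy snippet builds a random 3-layer ReLU MLP, applies random hidden-layer permutations exactly as in Equations (4b)-(4d), and confirms that the transformed weights represent the same function:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(weights, biases, x, act=lambda z: np.maximum(z, 0.0)):
    """Forward pass of Equation (1): x_{m+1} = sigma(W_{m+1} x_m + b_{m+1})."""
    for W, b in zip(weights, biases):
        x = act(W @ x + b)
    return x

def permutation_matrix(perm):
    """Permutation matrix whose rows are the reordered rows of the identity."""
    return np.eye(len(perm))[perm]

# A 3-layer MLP with dimensions d0=5, d1=7, d2=6, d3=4.
dims = [5, 7, 6, 4]
Ws = [rng.standard_normal((dims[m + 1], dims[m])) for m in range(3)]
bs = [rng.standard_normal(dims[m + 1]) for m in range(3)]

# Random permutations of the two hidden layers (tau_1, tau_2 in Equation (4)).
P1 = permutation_matrix(rng.permutation(dims[1]))
P2 = permutation_matrix(rng.permutation(dims[2]))

# Transformed weights and biases, following Equations (4b)-(4d).
Wp = [P1.T @ Ws[0], P2.T @ Ws[1] @ P1, Ws[2] @ P2]
bp = [P1.T @ bs[0], P2.T @ bs[1], bs[2]]

x = rng.standard_normal(dims[0])
assert np.allclose(mlp_forward(Ws, bs, x), mlp_forward(Wp, bp, x))  # same function
```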
It is straightforward to show that for any pointwise nonlinearity \(\sigma\), the transformed set of parameters represents the same function as the initial set. It is also easy to see that all the vector spaces in Equation 2, namely \(\mathcal{W}_{m},\mathcal{B}_{\ell}\) are invariant to the action we just defined, which implies that \(\mathcal{V}\) is a direct sum of these representations. The symmetries described in Equation (4) were used in several studies in the last few years, mainly to investigate the loss landscape of neural networks (Brea et al., 2019; Tatro et al., 2020; Arjevani and Field, 2021; Entezari et al., 2021; Simsek et al., 2021; Ainsworth et al., 2022; Pena et al., 2022), but also in Schurholt et al. (2021) as a motivation for a data augmentation scheme. It should be noted that there are other symmetries of weight spaces that are not considered in this work. One such example is scaling transformations (Neyshabur et al., 2015; Badrinarayanan et al., 2015). Incorporating these symmetries into DWSNets architectures is left for future work. ## 5 A characterization of linear invariant and equivariant layers for weight-spaces In this section, we describe the main building blocks of DWSNets, namely the DWS-layers. The first subsection provides an overview of the section and its main results. In the following subsections, we discuss the finer details and explain how these results can be formally proved. Figure 2: Block matrix structure for linear equivariant maps between weight spaces. Left: an equivariant layer for the weight space \(\mathcal{V}\) to itself can be written as four blocks that map between the general weight space \(\mathcal{W}\) and general bias space \(\mathcal{B}\). Right: Each such block can be further written as a block matrix between specific weight and bias spaces \(\mathcal{W}_{m},\mathcal{B}_{\ell}\). Each color in each block matrix represents a different type of linear equivariant function between the sub-representations \(\mathcal{W}_{m},\mathcal{B}_{\ell}\). Blocks of the same type have different parameters. Repeating colors in different matrices are not related. See Tables 4-7 for a specification of the layers. ### Overview and main results Characterizing all affine equivariant and invariant maps for the weight space \(\mathcal{V}\) requires finding bases for three linear spaces: (1) the space of linear equivariant maps between the weight space \(\mathcal{V}\) to itself; (2) the space of constant equivariant functions (biases) on the weight space; and (3) the space of linear invariant maps on the weight space. As we show in Section 5.4 and Appendix B, we can readily adapt previous results in order to find bases for (2)-(3), and the main challenge is (1), which will be our main focus. To find a basis for the space of equivariant layers we will use a strategy based on a decomposition of the weight space \(\mathcal{V}\) into multiple sub-representations, corresponding to the weight and bias spaces. This is based on the classic result that states that any linear equivariant map between direct sums of representations can be represented in block matrix form, with each block mapping between two constituent representations in an equivariant manner. A formal statement can be found in Section 5.2. Importantly, this strategy simplifies our characterization and enables us to implement each block independently. First, we introduce another, coarser, decomposition of \(\mathcal{V}\) into two sub-representations \(\mathcal{V}=\mathcal{W}\oplus\mathcal{B}\). 
Here, \(\mathcal{W}:=\bigoplus_{m=1}^{M}\mathcal{W}_{m}\) is a direct sum of the spaces that represent weight matrices, and \(\mathcal{B}:=\bigoplus_{m=1}^{M}\mathcal{B}_{m}\) is a direct sum of spaces that represent biases. We further divide the layer \(L\) into four linear maps that cover all equivariant linear maps between the weights \(\mathcal{W}\) and the biases \(\mathcal{B}\): \(L_{\text{ww}}:\mathcal{W}\rightarrow\mathcal{W}\), \(L_{\text{wb}}:\mathcal{W}\rightarrow\mathcal{B}\), \(L_{\text{bw}}:\mathcal{B}\rightarrow\mathcal{W}\), \(L_{\text{bb}}:\mathcal{B}\rightarrow\mathcal{B}\). Figure 2 (left) illustrates this decomposition. Our next step is constructing equivariant layers between \(\mathcal{W},\mathcal{B}\), namely finding a basis for \(L_{\text{ww}},L_{\text{wb}},L_{\text{bw}},L_{\text{bb}}\). This is done by splitting them again into the sub-representations from Equation (2), i.e., \(\mathcal{W}_{m},\mathcal{B}_{\ell}\) and characterizing all the equivariant maps between these representations. We show that all these maps are either previously characterized linear equivaraint layers on sets (Zaheer et al., 2017; Hartford et al., 2018), or simple combinations of pooling, broadcasting and fully connected linear layers. This topic is discussed in detail in Section 5.3. Figure 2 illustrates the block matrix structure of each linear map \(L_{\text{ww}},L_{\text{wb}},L_{\text{bw}},L_{\text{bb}}\) according to the decomposition to sub-representations \(\{\mathcal{W}_{m},\mathcal{B}_{\ell}\}_{m,\ell\in[M]}\). Each color represents a different layer type as specified in Tables 4-7. Formally, our result can be stated as follows: **Theorem 5.1** (A characterization of linear equivariant layers between weight spaces).: _A linear equivariant layer between the weight space \(\mathcal{V}\) to itself can be written in block matrix form according to the decomposition of \(\mathcal{V}\) to sub-representations \(\mathcal{W}_{m},\mathcal{B}_{\ell}\). Moreover, each block can be implemented using a composition of pooling, broadcast, or fully connected linear layers. Tables 4-7 summarize the block structure, number of parameters, and implementation of all these blocks._ The layer \(L:\mathcal{V}\rightarrow\mathcal{V}\) is implemented by executing all the blocks independently and then summing the outputs according to the output sub-representations. As mentioned in the introduction, we call the layers from Theorem 5.1_DWS-layers_ and the architectures that use them (interleaved with pointwise nonlinearities), _DWSNets_. Readers who are not interested in the technical details can continue reading in Section 6. ### Linear equivariant maps for direct sums As mentioned above, a key property we will leverage is the fact that every equivariant linear operator between direct sums of representations can be written in a block matrix form, where each block is an equivariant map between the corresponding sub-representations in the sum. This property is summarized in the following classical result: **Proposition 5.2** (A characterisation of linear equivariant maps between direct sums of representations).: _Let \((\mathcal{V}_{m},\rho_{m}),m\in[M],\quad(\mathcal{V}^{\prime}_{\ell},\rho^{ \prime}_{\ell}),\ell\in[M^{\prime}]\) be orthogonal representations of a permutation group \(G\) of dimensions \(d_{m},d^{\prime}_{\ell}\) respectively. 
Let \((\mathcal{V},\rho):=\bigoplus_{m=1}^{M}\mathcal{V}_{m},(\mathcal{V}^{\prime},\rho^{\prime}):=\bigoplus_{\ell=1}^{M^{\prime}}\mathcal{V}^{\prime}_{\ell}\) be direct sums of the representations above. Let \(B_{m\ell}\) be a basis for the space of linear equivariant functions from \((\mathcal{V}_{m},\rho_{m})\) to \((\mathcal{V}^{\prime}_{\ell},\rho^{\prime}_{\ell})\). Let \(B^{P}_{m\ell}\) be zero-padded versions of \(B_{m\ell}\): every element of \(B^{P}_{m\ell}\) is an all-zero matrix in \(\mathbb{R}^{d^{\prime}\times d}\) for \(d=\sum_{m}d_{m},\ d^{\prime}=\sum_{\ell}d^{\prime}_{\ell}\), except for the \((m,\ell)\) block that contains a basis element from \(B_{m\ell}\). Then \(B=\cup_{m\ell}B^{P}_{m\ell}\) is a basis for the space of linear equivariant functions from \(\mathcal{V}\) to \(\mathcal{V}^{\prime}\)._ We refer the readers to Appendix E for the proof. Intuitively, Proposition 5.2 reduces the problem of characterizing equivariant maps between direct sums of representations to multiple simpler problems of characterizing equivariant maps between the constituent sub-representations. A similar observation was made in the context of irreducible representations in Cohen & Welling (2017). ### Linear equivariant layers for deep weight spaces In this subsection, we explain how to construct a basis for the space of linear equivariant functions from a weight space to itself: \(L:\mathcal{V}\rightarrow\mathcal{V}\). These layers will serve as basic components in our network and will be composed in order to construct deeper networks. We will first find a basis assuming a single feature dimension, and then discuss a simple way to extend our result to the general case in Appendix B. As mentioned in Section 5.1, each linear function \(L\) can be split into four maps: \(L_{\text{ww}},L_{\text{wb}},L_{\text{bw}},L_{\text{bb}}\), which themselves map a direct sum of representations to another direct sum of representations. To find a basis for all such linear equivariant maps, we use Proposition 5.2 and find bases for the linear equivariant maps between all the sub-representations \(\mathcal{W}_{m},\mathcal{B}_{\ell}\). To provide intuition, we begin by discussing the bias-to-bias part \(L_{\text{bb}}:\mathcal{B}\rightarrow\mathcal{B}\), which is the simplest case. As a next step, we discuss the basic operations that will be used to implement all the equivariant layers we propose, and conclude by presenting the rules that we use in order to define the equivariant maps between all sub-representations. **Bias-to-bias layers.** \(L_{\text{bb}}\) is composed of blocks that map between bias spaces and are of the form \(T:\mathbb{R}^{d_{j}}\rightarrow\mathbb{R}^{d_{i}}\). Importantly, the indices \(i,j\) determine how the map \(T\) is constructed. Let us review three examples: (i) When \(i=j=M\), \(G\) acts trivially on both spaces and the most general equivariant map between them is a fully connected linear layer. Formally, this block can be written as \(b_{i}^{\text{new}}=Ab_{i}^{\text{old}}\) for a parameter matrix \(A\in\mathbb{R}^{d_{M}\times d_{M}}\). (ii) When \(i=j<M\), \(G\) acts jointly on the input and output by permuting them using the _same_ permutation. It is well known that the most general permutation equivariant layer in this case is a DeepSets layer (Zaheer et al., 2017). Hence, this block can be written as \(b_{i}^{\text{new}}=a_{1}b_{i}^{\text{old}}+a_{2}\mathbf{1}\mathbf{1}^{T}b_{i}^{\text{old}}\) for two scalar parameters \(a_{1},a_{2}\in\mathbb{R}\).
(iii) When \(i\neq j<M\) we have two dimensions on which \(G\) acts by independent permutations. We show that the most general linear equivariant layer first sums over the \(d_{j}\) dimension, then multiplies the result by a learnable scalar, and finally broadcasts the result along the \(d_{i}\) dimension. This block can be written as \(b_{i}^{\text{new}}=a\mathbf{1}\mathbf{1}^{T}b_{j}^{\text{old}}\) for a single scalar parameter \(a\in\mathbb{R}\). We refer the readers to Table 5 for the characterization of the remaining bias-to-bias layers. The block structure of \(L_{\text{bb}}\) is illustrated in the rightmost panel of Figure 2, where the single block of type (i) is colored in blue, blocks of type (ii) are colored in red, and blocks of type (iii) are colored in gray and cyan. Note that blocks of the same type have different parameters. **Basic operations for constructing layers between sub-representations.** In general, implementing linear equivariant maps between the sub-representations \(\mathcal{W}_{m},\mathcal{B}_{\ell}\) requires three basic operations: Pooling, Broadcast, and fully-connected linear maps. They will now be defined in more detail. (1) _Pooling:_ A function that takes an input tensor with one or more dimensions and sums over a specific dimension. For example, for \(x\in\mathbb{R}^{d_{1}\times d_{2}}\), \(POOL(d_{i})\) performs summation over the \(i\)-th dimension; (2) _Broadcast:_ A function that adds a new dimension to a vector by copying information along a particular axis. \(BC(d_{i})\) broadcasts information along the \(i\)-th dimension; (3) _Linear_: A fully connected linear map that can be applied to either vectors or matrices. \(LIN(d,d^{\prime})\) is a linear transformation represented by a \(d^{\prime}\times d\) matrix. Two additional operations that can be implemented using operations (1)-(3)* and that will be useful for us are: (i) _DeepSets_ (Zaheer et al., 2017): the most general linear layer between sets; and (ii) Equivariant layers for a product of two sets as defined in Hartford et al. (2018) (see a formal definition in Appendix A). Footnote *: See Albooyeh et al. (2019) for a general discussion on implementing permutation equivariant functions with these primitives. **Definition of Layers between \(\mathcal{W}_{m},\mathcal{B}_{\ell}\).** Let \(T:\mathcal{U}\rightarrow\mathcal{U}^{\prime}\) be a map between sub-representations, i.e., \(\mathcal{U},\mathcal{U}^{\prime}\in\{\mathcal{W}_{m},\mathcal{B}_{\ell}\}_{m,\ell\in[M]}\). Both the domain and the codomain of \(T\) represent a specific weight or bias space and are associated with one or two indices reflecting the layers in the input MLP they represent. For example, one such layer is between \(\mathcal{U}=\mathbb{R}^{d_{1}\times d_{0}}\) and \(\mathcal{U}^{\prime}=\mathbb{R}^{d_{1}}\). We will now define three useful terms that will help us define a set of rules for constructing layers between these spaces. We call an index \(m\in[0,M]\) a _set_ index (or dimension) if \(G\) acts on it by permutation; otherwise, we call it a _free_ index. From the definition, it is clear that \(0,M\) are free indices while all other indices, namely \(m\in[1,M-1]\), are set indices. Additionally, if indices in the domain and codomain are the same, we call them _shared_ indices. Based on the basic operations described above, the following rules are used to define equivariant layers between sub-representations \(\mathcal{W}_{m},\mathcal{B}_{\ell}\).
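Before the rules are stated, the three bias-to-bias cases discussed above can be made concrete with a minimal NumPy sketch (our own illustration under the paper's notation; the function and parameter names are assumptions, not the authors' implementation):

```python
import numpy as np

# Bias-to-bias blocks T: R^{d_j} -> R^{d_i}, single feature channel.

def block_free_to_free(A, b_M):
    """Case (i), i = j = M: G acts trivially, so any dense linear map is equivariant."""
    return A @ b_M                                   # A has shape (d_M, d_M)

def block_shared_set(a1, a2, b_i):
    """Case (ii), i = j < M: DeepSets layer, equivariant to a shared permutation."""
    return a1 * b_i + a2 * b_i.sum() * np.ones_like(b_i)

def block_set_to_set(a, b_j, d_i):
    """Case (iii), i != j < M: pool over d_j, scale, then broadcast to d_i."""
    return a * b_j.sum() * np.ones(d_i)

# Equivariance check for case (ii): permuting the input permutes the output.
rng = np.random.default_rng(0)
b = rng.standard_normal(7)
perm = rng.permutation(7)
out = block_shared_set(0.3, -1.2, b)
assert np.allclose(out[perm], block_shared_set(0.3, -1.2, b[perm]))
```

Blocks that map into or out of the weight spaces \(\mathcal{W}_{m}\) follow the same pattern, combining pooling, broadcasting, and dense maps as summarized in Tables 4-7.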
(1) In the case of two shared set indices, which happens when mapping \(\mathcal{W}_{m},\ m\in[2,M-1]\) to itself, we use Hartford et al. (2018). (2) In the case of a single shared set index, for example, when mapping \(\mathcal{B}_{m},\ m\in[1,M-1]\) to itself we use DeepSets (Zaheer et al., 2017). (3) In case both the domain and the codomain have free indices, we use a dense linear layer. For example when mapping \(\mathcal{B}_{M}\) to itself. (4) We use pooling to contract unshared set input dimensions and linear layers to contract free input dimensions and, (5) We use broadcasting to extend output set dimensions, and linear layers to extend output free dimensions. Tables 4-7 provide a complete specification of linear equivariant layers between all sub-representations \(\{\mathcal{B}_{m},\mathcal{W}_{\ell}\}_{m,\ell\in[M]}\). **Proving that these layers form a basis.** At this point, we have created a list of equivariant layers between all sub-representations. We still have to prove that these layers span the space of linear equivariant maps between the corresponding representations. We do this by using a dimension-counting argument: we calculate the dimension of the space of linear equivariant maps for each pair of representations and show that the number of independent parameters in each proposed layer is equal to this dimension. The proof is presented in Appendix D. **Multiple channels and biases.** We refer the reader to Appendix B for a characterization of the bias terms (of DWSNets) and a generalization of Theorem 5.1 to multiple input and output channels. ### Linear invariant maps for weight-spaces Here, we provide a characterization of linear \(G\)-invariant maps \(L:\mathcal{V}\rightarrow\mathbb{R}\). Invariant layers (which are often followed by fully connected networks) are typically placed after a composition of several equivariant layers when the task at hand requires a single output, e.g., when the input network represents an INR of a 3D shape and the task is to classify the shapes. We use the following characterization of linear invariant maps from Maron et al. (2019b): **Proposition 5.3**.: _Let \(G\leq S_{n}\) be a permutation group and \(P\) its permutation representation on \(\mathbb{R}^{n}\). Every linear \(G\)-invariant map \(L:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is of the form \(L(x)=\sum_{i=1}^{O}w_{i}a_{i}^{T}x\) where \(w_{i}\) are learnable scalars, \(a_{i}\in\mathbb{R}^{n}\) are indicator vectors for the orbits of the action of \(G\) on \([n]\) and \(O\) is the number of such orbits._ This proposition follows directly from the fact that a weight vector \(w\) has to obey the following equation \(w=\rho(g)w\) for all group elements \(g\in G\). In our case, \(G\) is a permutation group acting on the index space of \(\mathcal{V}\), i.e., the indices of all the weights and biases of an input network. In order to apply Theorem 5.3, we need to find the orbits of this action on the indices of \(\mathcal{V}\). Importantly, each such orbit is a subset of the indices that correspond to a specific weight or bias vector. These orbits are summarized in Table 8. 
It follows that every linear invariant map defined on \(\mathcal{V}\) can be written as a summation of the maps listed below: (1) a distinct learnable scalar times the sum of \(W_{m}\) for \(m\in[2,M-1]\) and the sum of \(b_{m}\) for \(m\in[1,M-1]\); (2) a sum of columns of \(W_{1}\), and the sum of rows of \(W_{M}\) weighted by distinct learnable scalars for each such column and row (3) an inner product of \(b_{M}\) with a learnable vector of size \(d_{M}\). ### Extension to other architectures. In this paper, we primarily focus on MLP architectures as the input for DWSNets. However, the characterization can be extended to additional architectures. We discuss possible extensions to two architectures, namely convolutional neural networks (CNNs) and Transformers (Vaswani et al., 2017), in Appendix H. ## 6 Expressive power The expressive power of equivariant networks is an important concern since by restricting our hypothesis class we might unintentionally impair its function approximation capabilities. For example, this is the case with Graph neural networks (Morris et al., 2019; Xu et al., 2019; Morris et al., 2021). Here, we provide a first step towards understanding the expressive power of DWSNets by demonstrating that these networks are capable of approximating feed-forward procedures on input networks. Figure 3: _Sine wave regression_. Test MSE (log scale) for a varying number of training examples. **Proposition 6.1** (DWSNets can approximate a feed-forward pass).: _Let \(M,d_{0},\ldots,d_{M}\) specify an MLP architecture with ReLU nonlinearities. Let \(K\subset\mathcal{V}\), \(K^{\prime}\subset\mathbb{R}^{d_{0}}\) be compact sets. DWSNets with ReLU nonlinearities are capable of uniformly approximating a feed-forward procedure on an input MLP represented as a weight vector \(v\in K\) and an input to the MLP, \(x\in K^{\prime}\)._ The proof can be found in Appendix F. We note that the inherent ability of DWSNets to evaluate input networks could be a very useful tool, for example, in order to separate MLPs that represent different functions. As another example, below, we show that DWSNets can approximate any "nice" function defined on the space of functions represented by MLPs with weights in some compact subset of \(\mathcal{V}\). **Proposition 6.2**.: _(informal) Let \(g:\mathcal{F}_{\mathcal{V}}\rightarrow\mathbb{R}\) be a function defined on the space of functions represented by \(M\)-layer ReLU MLPs with dimensions \(d_{0},...,d_{M}\), whose weights are in a compact subset of \(\mathcal{V}\) and their input domain is a compact subset of \(\mathbb{R}^{d_{0}}\). Assume that \(g\) is \(L\)-Lipshitz w.r.t \(||\cdot||_{\infty}\) (on functions), then under some additional mild assumptions specified in Appendix G, DWSNets with ReLU nonlinearities are capable of uniformly approximating \(g\)._ The full proof can be found in Appendix G. We note that Theorem 6.2 differs from most universality theorems in the relevant literature (Maron et al., 2019; Keriven and Peyre, 2019) since we do not prove that we can approximate any \(G\)-equivariant function on \(\mathcal{V}\). In contrast, we show that DWSNets are powerful enough to approximate functions on the function space defined by the input MLPs, that is, functions that give the same result to all weights that represent the same functions. ## 7 Experiments We evaluate DWSNets in two families of tasks. (1) First, taking input networks that represent data, like INRs (Park et al., 2019; Sitzmann et al., 2020). 
Specifically, we train a model to classify INRs based on the class they represent or predict continuous properties of the objects they represent. (2) Second, taking input networks that represent standard input-output mappings such as image classification. We train a model to operate on these mappings and adapting them to new domains. We also perform additional experiments, for example predicting the generalization performance of an image classifier in Appendix K. Full experimental and technical details are discussed in Appendix J. **Baselines.** Our objective in this section is to compare different _architectures_ that operate directly on weight spaces, using the same data, loss function, and training process. As learning on weight spaces is a relatively new problem, we consider five natural and recent baselines. (**i) _MLP_ : A standard MLP applied to a vectorized version of the weight space. (**ii) _MLP + augmentations_**:**, apply the MLP from (i) but with permutation-based data augmentations sampled randomly from the symmetry group \(G\). (**iii) _MLP + weight alignment_**:** We perform a weight alignment procedure prior to training using the algorithm recently suggested in Ainsworth et al. (2022), see full details in Appendix J. **(iv) _INR2Vec_**:** The architecture suggested in Anonymous (2023) (see Appendix A for a discussion)*. (**v) _Transformer_**:** The architecture of Schurholt et al. (2021). It adapts the transformer encoder architecture and attends between different rows in weight and bias matrices to form a global representation of the input network. Footnote *: We do not use their pre-training since we are interested in comparing only the architectures. **Data preparation.** We train all input networks independently starting with a different random seed in order to test our architecture on diverse data obtained from multiple independent sources. To support future research and the reproducibility of our results, we will release the datasets and our source code. \begin{table} \begin{tabular}{l c c} \hline \hline & MNIST INR & Fashion-MNIST INR \\ \hline MLP & \(17.55\pm 0.01\) & \(19.96\pm 0.38\) \\ MLP + Perm. aug & \(29.26\pm 0.18\) & \(22.47\pm 0.26\) \\ MLP + Alignment & \(58.98\pm 0.52\) & \(47.98\pm 0.46\) \\ INR2Vec (Arch.) & \(23.69\pm 0.10\) & \(22.62\pm 0.06\) \\ Transformer & \(26.57\pm 0.18\) & \(26.97\pm 0.33\) \\ \hline DWSNets (ours) & \(\mathbf{85.71\pm 0.57}\) & \(\mathbf{65.59\pm 0.48}\) \\ \hline \hline \end{tabular} \end{table} Table 1: _INR classification:_ The class of an INR is defined by the image that it represents. We report the average test accuracy. ### Results **Regression of sine wave frequency.** To first illustrate the operation of DWSNets, we look into a regression problem. We train INRs to fit sine waves on \([-\pi,\pi]\), with different frequencies sampled from \(U(0.5,10)\). Each sine wave is represented as an MLP trained as an INR network, and the task is to have the DWSNet predict the frequency of a given test INR network. To illustrate the generalization capabilities of the architectures, we repeat the experiment by training the DWSNet with a varying number of training examples (INRs). Figure 3 shows that DWSNets performs significantly better than baseline methods even with a small number of training examples. **Classification of images represented as INRs.** Here, INRs were trained to represent images from MNIST (LeCun et al., 1998) and Fashion-MNIST (Xiao et al., 2017). 
The task is to have the DWSNet recognize the image content, like the digit in MNIST, by using the weights of these INRs as input. Table 1 shows that DWSNets outperforms all baseline methods by a large margin. **Self-supervised learning for dense representation.** Here we wish to embed neural networks into a semantic coherent low dimensional space, similar to Schurholt et al. (2022). To that end, we fit INRs on sine waves of the form \(a\sin(bx)\) on \([-\pi,\pi]\). Here \(a,b\sim U(0,10)\) and \(x\) is a grid of size \(2000\). We use a SimCLR-like training procedure and objective (Chen et al., 2020): Following Schurholt et al. (2022), we generate random views from each INR by adding Gaussian noise (with standard deviation of 0.2) and random masking (with probability 0.5). We evaluate the different methods in two ways. First, we qualitatively observe a 2D TSNE of the resulting space. The results are presented in Figure 4 and Figure 8. For quantitative evaluation, we train a (linear) regressor for predicting \(a,b\) on top of the embedding space obtained by each method. See results in Table 3. **Learning to adapt networks to new domains.** Adapting a network to a new data distribution is an important task. Here we train a model to adapt an input classification model to a new domain. Specifically, given an input weight vector \(v\), we wish to output residual weights \(\Delta v\) such that a classification network parametrized using \(v-\Delta v\) performs well on the new domain. We note that it is natural to require that \(\Delta v\) will be permuted if \(v\) is permuted, and hence a \(G\)-equivariant architecture is appropriate. At test time, our model can adapt an unseen classifier to the new domain using a single forward pass. Using the CIFAR10 (Krizhevsky et al., 2009) dataset as the source domain, we train multiple image classifiers. To increase the diversity of the input classifiers, we train each classifier on the binary classification task of distinguishing between two randomly sampled classes. For the target domain, we use a version of CIFAR10 corrupted with random rotation, flipping, Gaussian noise and color jittering. The results are presented in Table 2. Note that in test time the model should generalize to unseen image classifiers, as well as unseen images. ### Analysis of the results In this section, we evaluated DWSNets on several learning tasks and showed that it outperforms all other methods, usually by a large margin. Furthermore, compared to the most natural baseline of network alignment, DWSNets scale significantly better with the data. In reality, it is challenging to use this baseline due to the fact that the weight-space alignment problem is hard (Ainsworth et al., 2022). The problem is further amplified when having large input networks or large (networks) datasets. Figure 4: _Dense representation:_ 2D TSNE of the resulting low-dimensional space. We present the results for DWSNets and the second best performing baseline, INR2Vec (architecture). See Appendix K.2 for full results. ## 8 Conclusion and future work This paper considers the problem of applying neural networks directly on neural weight spaces. We present a principled approach and propose an architecture for the network that is equivariant to a large group of natural symmetries of weight spaces. We hope this paper will be one of the first steps towards neural models capable of processing weight spaces efficiently in the future. 
**Limitations.** One limitation of our method is that the equivariant layer structure is currently tailored to a specific MLP architecture. However, we believe that this can be alleviated in the future, for example by sharing the parameters of the equivariant blocks between inner layers. In addition, we found it difficult to train DWSNets on some learning tasks, presumably because finding a suitable weight initialization scheme for DWSNets was very hard. See Appendix K.4 for a discussion of these issues. As a final point, the implementation of our DWSNets is somewhat complicated. Our code and data will be made public so that others can build on it and improve it. **Future work.** Several potential directions for future research could be explored, including modeling other weight-space symmetries in architectures, understanding how to initialize the weights of DWSNets, and studying the approximation power of DWSNets. Other worthwhile directions are finding efficient data augmentation schemes for training on weight spaces and incorporating permutation symmetries for other types of input architectures. ## 9 Acknowledgements The authors wish to thank Nadav Dym and Derek Lim for providing valuable feedback on early versions of the manuscript. In addition, they would like to thank Yaron Lipman for helpful discussions. This study was funded by a grant to GC from the Israel Science Foundation (ISF 737/2018), and by an equipment grant to GC and Bar-Ilan University from the Israel Science Foundation (ISF 2332/18). AN and AS are supported by a grant from the Israeli higher-council of education, through the Bar-Ilan data science institute (BIU DSI). IA is supported by a PhD fellowship from DSI BIU.
2301.03855
Continuous optical-to-mechanical quantum state transfer in the unresolved sideband regime
Optical-to-mechanical quantum state transfer is an important capability for future quantum networks, quantum communication, and distributed quantum sensing. However, existing continuous state transfer protocols operate in the resolved sideband regime, necessitating a high-quality optical cavity and a high mechanical resonance frequency. Here, we propose a continuous protocol that operates in the unresolved sideband regime. The protocol is based on feedback cooling, can be implemented with current technology, and is able to transfer non-Gaussian quantum states with high fidelity. Our protocol significantly expands the kinds of optomechanical devices for which continuous optical-to-mechanical state transfer is possible, paving the way towards quantum technological applications and the preparation of macroscopic superpositions to test the fundamentals of quantum science.
Amy Navarathna, James S. Bennett, Warwick P. Bowen
2023-01-10T08:57:43Z
http://arxiv.org/abs/2301.03855v1
# Continuous optical-to-mechanical quantum state transfer in the unresolved sideband regime ###### Abstract Optical-to-mechanical quantum state transfer is an important capability for future quantum networks, quantum communication, and distributed quantum sensing. However, existing continuous state transfer protocols operate in the resolved sideband regime, necessitating a high-quality optical cavity and a high mechanical resonance frequency. Here, we propose a continuous protocol that operates in the unresolved sideband regime. The protocol is based on feedback cooling, can be implemented with current technology, and is able to transfer non-Gaussian quantum states with high fidelity. Our protocol significantly expands the kinds of optomechanical devices for which continuous optical-to-mechanical state transfer is possible, paving the way towards quantum technological applications and the preparation of macroscopic superpositions to test the fundamentals of quantum science. The ability to transfer quantum states between optical communication channels and quantum computing nodes is a necessary ingredient of the emerging quantum internet [1]. Quantum state transfer also has important applications in quantum-enhanced sensing [2; 3], quantum-secure communications [4], and fundamental tests of macroscopic quantum mechanics [5; 6; 7; 8; 9; 10]. A leading approach is to mediate the transfer using an optomechanical resonator [11; 12; 13; 14; 15; 16]. This is attractive because mechanical resonators interact via radiation pressure with electromagnetic fields of all frequencies [1] and can also be functionalized to interact with most quantum computing nodes, such as spins [18; 19; 20], superconducting devices [21; 22; 23] and atomic ensembles [24]. The first step in the transfer process is an optical-to-mechanical state transfer, with a subsequent transfer to the final computing node [25; 26; 27]. An optical cavity is employed to enhance the radiation pressure during the optical-to-mechanical state transfer. Leading proposals work only in the _resolved sideband regime_, where the decay rate of this cavity is lower than the mechanical resonance frequency [12; 28]. By contrast, most optomechanical systems operate in the _unresolved sideband regime_[29]. In many cases this is due to the benefits that low mechanical frequencies convey for applications, for instance in precision sensing [30; 31; 32]. In others, it is because of the difficulty of simultaneously achieving a low decay rate, a high resonance frequency, and sufficient radiation pressure coupling [33]. To date, the only proposals for optical-to-mechanical state transfer in the unresolved sideband regime have used pulsed, rather than continuous, optomechanical interactions [34; 35; 36]. This narrows the range of applications, introduces significant technical challenges due to the additional timing and phase accuracy required [37; 38; 36], and involves large radiation pressure impulse forces that can be problematic [39; 40; 35]. It is well known that a mechanical resonator can be feedback cooled close to its motional ground state in the unresolved sideband regime [41]. Here we propose a continuous optical-to-mechanical state transfer protocol based on the same concept. By modelling the open quantum system dynamics, we show that feedback cooling can be understood as the transfer of a vacuum state of light onto the mechanical resonator. We find that appropriate choice of the feedback parameters allows the transfer of arbitrary quantum states. 
The requirements for successful transfer closely match those for ground-state cooling - once the optomechanical cooperativity exceeds the thermal occupancy of the mechanical resonator, a coherent state can be transferred with near unity fidelity and the Wigner-negativity of non-Gaussian states can be preserved. Moreover, the feedback parameters can be used to phase-sensitively amplify (or _squeeze_) the transferred state, to engineer its temporal profile, and - in direct analogy to state-transfer via resolved sideband cooling [42] - to achieve the transfer of a single optical sideband. Our work extends continuous optomechanical state transfer beyond the resolved sideband limit, to low-quality optical cavities and low frequency mechanical resonators. Feedback cooling of a mechanical resonator to near its motional ground state has recently been demonstrated, both in cryogenic [43] and room temperature environments [44]. As such, our proposal can be directly implemented with existing technology, providing a new tool for quantum networks and opening a new pathway to create and study macroscopic quantum systems. Our work also provides new insights into feedback cooling, showing that the process is in fact a quantum state transfer from light to mechanical motion. We consider an optomechanical system in the unresolved sideband, high mechanical quality regime (\(\kappa\gg\Omega\gg\Gamma\)) with resonant optical driving, where \(\kappa\) (\(\Gamma\)) is the optical (mechanical) energy decay rate, and \(\Omega\) the mechanical resonance frequency. In this scenario, the amplitude quadrature of the input optical field \(X_{\mathrm{in}}\) is directly imprinted on the mechanical motion via radiation pressure. The phase quadrature \(Y_{\mathrm{in}}\) is not, but is encoded on the phase quadrature of the output optical field as [1]: \[Y_{\mathrm{out}}=-\sqrt{\eta}Y_{\mathrm{in}}+2\sqrt{\eta\Gamma C}Q+\sqrt{1-\eta} Y_{\mathrm{v}}, \tag{1}\] where \(\eta\) is the detection efficiency, \(C=4g_{\mathrm{om}}^{2}/\Gamma\kappa\) is the optomechanical cooperativity with \(g_{\mathrm{om}}\) being the coherent-amplitude-boosted optomechanical coupling rate, \(Y_{\mathrm{v}}\) is the vacuum noise introduced due to detection loss, \(Q\) (\(P\)) is the dimensionless mechanical position (momentum) operator with \([Q,P]=i\), and all optical quadrature operators are normalised such that \([X(t),Y(t^{\prime})]=i\delta(t-t^{\prime})\). We propose to detect the output phase quadrature and use continuous feedback to transfer it to the mechanical resonator, as shown in Fig. 1. We note that feed-forward, similar to our feedback, has been applied to improve microwave-to-optical state transfer in the resolved sideband regime [45]. In contrast, the feed-forward functioned in that experiment to suppress correlated noise terms, while both optical quadratures were transferred by radiation pressure. Our scheme is analogous to feedback cooling [46, 47, 48, 49, 41, 43, 44, 45, 46, 47, 48, 49, 50], with the detected signal applied as a force onto the mechanical resonator. 
Using quantum Langevin equations, we find that it is described by the following equations of motion: \[\dot{Q}=\Omega P-\frac{\Gamma}{2}Q+\sqrt{\Gamma}Q_{\mathrm{in}}, \tag{2}\] and \[\dot{P}= -\Omega Q-\frac{\Gamma}{2}P+\sqrt{\Gamma}P_{\mathrm{in}}-2\sqrt{ \Gamma C}X_{\mathrm{in}} \tag{3}\] \[-\frac{\Gamma G}{2}f(t)\circled{\infty}\left(-\left(Y_{\mathrm{in} }-\sqrt{\frac{1-\eta}{\eta}}Y_{\mathrm{v}}\right)\frac{1}{2\sqrt{\Gamma C}}+Q \right),\] where \(P_{\mathrm{in}}\) and \(Q_{\mathrm{in}}\) are white thermal noise operators that satisfy \([Q_{\mathrm{in}}(t),P_{\mathrm{in}}(t^{\prime})]=i\delta(t-t^{\prime})\), and we have made the rotating wave approximation (RWA) with respect to the mechanical bath [1, 51]. The last term of Eq. (3) represents the feedback force, where the measured photocurrent is convolved with an arbitrary causal filter function \(f(t)\in\mathbb{R}\) and amplified by the gain factor \(G\). The filter function is normalised so that \(|f(\Omega)|=1\), where \(f(\omega)=\int_{-\infty}^{\infty}f\left(t\right)e^{\mathrm{i}\omega t} \mathrm{d}t\) is the Fourier transform of \(f(t)\). The steady-state solutions to Eqs (2) and (3) are found by moving into frequency space and adiabatically eliminating the dynamics of the optical cavity field (Supplementary Material, Section I [56]). This results in the quadratures \[Q\left(\omega\right) =\sqrt{\Gamma}\chi(\omega)\bigg{[}Q_{\mathrm{in}}+\phi(\omega)P_ {\mathrm{in}}-2\sqrt{C}\phi(\omega)X_{\mathrm{in}}+\frac{Gf(\omega)\phi( \omega)}{4\sqrt{C}}\left(Y_{\mathrm{in}}-\sqrt{\frac{1-\eta}{\eta}}Y_{\mathrm{ v}}\right)\bigg{]}, \tag{4}\] \[P\left(\omega\right) =\sqrt{\Gamma}\chi(\omega)\bigg{[}P_{\mathrm{in}}-\left(\frac{Gf (\omega)\Gamma}{2\Omega}+1\right)\phi(\omega)Q_{\mathrm{in}}-2\sqrt{C}X_{ \mathrm{in}}+\frac{Gf(\omega)}{4\sqrt{C}}\left(Y_{\mathrm{in}}-\sqrt{\frac{1- \eta}{\eta}}Y_{\mathrm{v}}\right)\bigg{]}, \tag{5}\] where \[\phi(\omega)=\frac{\Omega}{\Gamma/2-i\omega}, \tag{6}\] the feedback-broadened mechanical susceptibility is \[\chi(\omega)=\frac{1}{\Omega\phi(\omega)^{-1}+(\Omega+G\Gamma\frac{f(\omega) }{2})\phi(\omega)}, \tag{7}\] and the adiabatic elimination is valid in the unresolved sideband regime (\(\{\Omega\), \(C\Gamma\}\ll\kappa\)) taken throughout this paper. From Eq. (7), we see that the mechanical susceptibility decreases as \(G\) increases. This suppresses most of the mechanical terms in Eqs (4) and (5). The only term that remains is \(Q_{\mathrm{in}}\) in \(P(\omega)\), but this is suppressed by the large mechanical quality factor (\(\Omega/\Gamma\gg 1\)). It is this combined suppression of all mechanical terms that enables optical state transfer with high fidelity. The optical input field consists of a continuum of optical modes. To build insight into which of these modes is best transferred to the single mechanical mode, as well as the gain and noise of the transfer process, we re-write Figure 1: Schematic optomechanical system with feedback. Light is coupled into an optomechanical cavity. The reflected light is measured through homodyne detection. The detected photocurrent (\(Y_{\mathrm{out}}(t)\)) is convolved with a filter \(f(t)\) and directly fed back to the momentum of the mechanical resonator. Eqs (4) and (5) as: \[Q =g_{X}X_{\rm trans}+Q_{\rm noise,optical}+Q_{\rm noise,mechanical} \tag{8}\] \[P =g_{Y}Y_{\rm trans}+P_{\rm noise,optical}+P_{\rm noise,mechanical}. 
\tag{9}\] Here, \(X_{\rm trans}\) and \(Y_{\rm trans}\) are the optical quadratures transferred to position and momentum, respectively, and \(g_{X}\) and \(g_{Y}\) are the transfer gains. Terms labelled with a subscript 'noise' encompass the residual thermal variance remaining after feedback, and any optical terms not arising from the temporal mode of interest (_i.e._, inefficient detection, mode mismatch). The input optical quadratures transferred to \(Q\) and \(P\) in Eqs. (4) and (5) are not perfectly conjugate observables. The difference is embodied in \(\phi\), and is a result of the retarded response of the mechanical position to an applied force. The imperfection introduces an ambiguity in the optical mode that is optimally transferred - a different mode is best transferred to \(P\) and \(Q\). Here, we choose to assess the transfer of the mode that is optimally transferred to \(P\). This mode is described by the annihilation operator \[a_{\rm trans}(\omega)=u(\omega)a_{\rm in}(\omega) \tag{10}\] and spectral modeshape \[u(\omega)=\frac{2\sqrt{\Gamma C}}{g_{Y}}\chi(\omega)\left(\frac{Gf(\omega)}{8C} -i\right), \tag{11}\] where \(a_{\rm in}(\omega)=(X_{\rm in}(\omega)+iY_{\rm in}(\omega))/\sqrt{2}\). Using the relations \(X_{\rm trans}=(a_{\rm trans}^{\dagger}+a_{\rm trans})/\sqrt{2}\) and \(Y_{\rm trans}=i(a_{\rm trans}^{\dagger}-a_{\rm trans})/\sqrt{2}\), its amplitude and phase quadratures are found to be \[X_{\rm trans} =\frac{2\sqrt{\Gamma C}}{g_{Y}}\chi(\omega)\left(\frac{Gf(\omega )}{8C}X_{\rm in}+Y_{\rm in}\right) \tag{12}\] \[Y_{\rm trans} =\frac{2\sqrt{\Gamma C}}{g_{Y}}\chi(\omega)\left(-X_{\rm in}+ \frac{Gf(\omega)}{8C}Y_{\rm in}\right). \tag{13}\] Comparison of Eq. (13) with Eq. (5) confirms that \(Y_{\rm trans}\) is reproduced exactly in \(P(\omega)\), scaled by the momentum gain \(g_{Y}\). The phase quadrature transfer gain, \(g_{Y}\), can be determined by enforcing the boson commutation relation \([a_{\rm trans}(t),a_{\rm trans}^{\dagger}(t)]=1\) on Eq. (10); while that for the amplitude quadrature, \(g_{X}\), can be found by requiring that the optical noise on position commutes with both \(X_{\rm trans}\) and \(Y_{\rm trans}\), _i.e._, \([Q_{\rm noise,optical}(t),X_{\rm trans}(t)]=[Q_{\rm noise,optical}(t),Y_{\rm trans }(t)]=0\), where \(Q_{\rm noise,optical}\) is obtained by rearranging Eq. (8). Together, these give \[g_{Y} =\left[\frac{4\Gamma C}{2\pi}\int_{-\infty}^{\infty}|\chi(\omega )|^{2}\left(|f(\omega)|^{2}+1\right)\mathrm{d}\omega\right]^{1/2} \tag{14}\] \[g_{X} =-\frac{1}{g_{Y}}\frac{8\Gamma C}{2\pi}\int_{-\infty}^{\infty}| \chi(\omega)|^{2}\Im(\phi(\omega))\Im(f(\omega))\mathrm{d}\omega. \tag{15}\] The spectral modeshape and quadratures of the transferred mode depend on both the feedback-broadened mechanical susceptibility \(\chi(\omega)\) and the feedback filter function \(f(\omega)\), so that the transferred state can be controlled through appropriate choice of the filter properties. Thus far our results are valid for an arbitrary real-valued causal filter function. In the remainder of the paper we choose the generalized-Lorentzian filter \[f(\omega)=\frac{\Gamma^{\prime}\Omega}{\omega^{2}-\Omega^{2}+\mathrm{i}\Gamma^ {\prime}\omega}, \tag{16}\] where \(\Gamma^{\prime}\) is the filter bandwidth. This filter is commonly used for feedback cooling [41; 50; 52; 48] and is close to the known optimal filter for both momentum estimation [53] and feedback cooling [54]. 
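As a quick numerical sanity check of the filter in Eq. (16) (a sketch of our own, not part of the paper), the snippet below verifies the normalisation \(|f(\Omega)|=1\) and the resonant phase \(f(\pm\Omega)=\mp i\) that is invoked below; the parameter values are the ones quoted later in the text:

```python
import numpy as np

# Generalized-Lorentzian feedback filter of Eq. (16); parameter values follow
# those quoted later in the text (Omega/2pi = 1 MHz, Gamma'/2pi = 1.59 MHz).
Omega = 2 * np.pi * 1e6          # mechanical resonance frequency (rad/s)
Gamma_p = 2 * np.pi * 1.59e6     # filter bandwidth Gamma' (rad/s)

def f(omega):
    return Gamma_p * Omega / (omega**2 - Omega**2 + 1j * Gamma_p * omega)

# Normalisation |f(Omega)| = 1 and the resonant phase f(+/-Omega) = -/+ i.
assert np.isclose(abs(f(Omega)), 1.0)
assert np.isclose(f(Omega), -1j)
assert np.isclose(f(-Omega), 1j)
```

These resonant values follow from the form of Eq. (16) alone and are independent of the specific choice of \(\Omega\) and \(\Gamma^{\prime}\).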
\(\Gamma^{\prime}\) is chosen to be much larger than \(\Omega\), so that the filter acts as an integrator near the mechanical resonance frequency. The gain factor \(G\) can then be understood as the fractional increase in the mechanical decay rate due to the feedback. With the filter in Eq. (16) and in the limit of large filter bandwidth and mechanical quality factor \((\Omega/\Gamma\gg 1)\), the amplitude and phase quadrature transfer gains can be approximated as \[g_{Y}=2\sqrt{C\left(\frac{1+\frac{G^{2}}{64C^{2}}}{2+G}\right)}\;\;\text{and} \;\;g_{X}=\frac{1}{g_{Y}(1+2/G)}. \tag{17}\] We define the overall gain of the transfer process as \(\sqrt{g_{X}g_{Y}}\), so that it is independent of unitary squeezing operations on the transferred state [55], and define the level of squeezing applied during the transfer as \(g_{X}/g_{Y}\). The overall gain and squeezing level are plotted as a function of the feedback gain factor \(G\) in Fig. 2 using both numerical calculations and the analytic approximations of Eqs (17). For these plots and throughout the paper we use the system parameters \(\Omega/2\pi=1\,\mathrm{MHz}\), \(\Gamma/2\pi=1\,\mathrm{Hz}\), \(\Gamma^{\prime}/2\pi=1.59\,\mathrm{MHz}\), \(\kappa/2\pi=100\,\mathrm{MHz}\), and \(g_{\rm om}/2\pi=395\,\mathrm{kHz}\), \(T=30\,\mathrm{mK}\) which have been achieved in a range of optomechanical experiments [43; 44; 33]. The overall transfer gain approaches unity for \(G\gg 1\), and the transfer generally involves amplitude quadrature squeezing (\(g_{X}/g_{Y}<1\)). Only at \(G=8C\) do we find that the input state is transferred without any squeezing (\(g_{X}/g_{Y}=1\)). Comparison of Eq. (12) with Eq. (4) shows that, in the high quality limit for which \(f(\omega)\) can be substituted with \(f(\pm\Omega)=\mp i\), this choice of gain also results in near-agreement between \(X_{\rm trans}\) and the optical input terms in \(Q\). The remaining discrepancy arises from the retardation factor \(\phi(\omega)\), and this discrepancy approaches zero in the high-quality-factor limit. We therefore select \(G=8C\) for the remainder of the paper. It is illustrative to consider how our choice of filter function and gain factor influences the spectral mode-shape \(u(\omega)\). The frequency dependence of the prefactor in Eq. (11) depends only on \(\chi(\omega)\), and is sharply peaked at both \(\pm\Omega\). However, since \(f(\pm\Omega)=\mp i\), for \(G=8C\) the term in parentheses is precisely zero at \(-\Omega\) and equals \(-2i\) at \(\Omega\). Our particular choice, therefore, enables a single-sideband state transfer, transferring only the lower optical sideband and doing this with a modeshape given approximately by \(\chi(\omega)\) (see also Supplementary Material, Section II [56]). To quantitatively assess the quality of transfer we first consider an input vacuum state. We calculate the contributions to the position and momentum variances from this input and from the noise sources specified in Eqs (8) and (9) (see Supplementary Material, Sections II & III [56]). We separate the optical noise into contributions arising from inefficiences and mode mismatch, so that the non-ideality of the transfer that arises due to \(\phi(\omega)\) can be assessed. The results are plotted in Fig. 3 (a) as a function of \(C/n_{\text{th}}\) (with \(G=8C\)). The variance of the transferred optical mode increases with \(C\), asymptoting to the vacuum variance of \(1/2\) once \(C\gg 1\). 
Conversely, the mechanical noise contribution decreases, dropping below the vacuum level for \(C\gg n_{\text{th}}\). The variance of the optical inefficiency noise has a cooperativity dependence that is similar to the optical signal, increasing with \(C\) and asymptoting to a constant value once \(C\gg 1\). As expected, this noise increases as the detection efficiency degrades. However, even for \(\eta\) as low as \(0.5\) the transferred signal variance still dominates inefficiency noise for the whole range of \(C/n_{\text{th}}\). The mode-mismatch noise on \(Q\) is very low for small \(C\), increases approximately linearly with \(C\), and eventually exceeds the signal variance. Thus, the mode-mismatch ultimately constrains the performance of the state transfer. Using the analytic expressions for the gains in Eqs (17), we derive analytic expressions for the different variance contributions that are valid in the same high-quality, high-bandwidth limit (see Supplementary Material [56], Section III). With the exception of the mismatch noise, which is zero in the limit of high quality, these expressions agree well with the numerical results in Fig. 3 (a). From them, we find that when \(C\gg 1\) the noise variance introduced by optical inefficiency is \(V_{\eta}=(1-\eta)/(4\eta)\), and that the mechanical noise variance is suppressed below the vacuum noise level once \(C>\bar{n}_{\text{th}}/2\). Figure 2: Transfer gain (\(\sqrt{g_{X}g_{Y}}\), red) and squeezing (\(g_{X}/g_{Y}\), blue) as a function of the feedback gain factor scaled by cooperativity (\(G/C\)). The dashed line indicates \(G=1\) and the full grey line indicates the optimal gain (\(G=8C\)), where \(g_{X}/g_{Y}=1\). The dots are numerically obtained, and the lines use the analytic expressions derived in the high-quality-factor limit. Figure 3: (a) Contributions to the variance as a function of interaction strength of mechanical noise (blue), optical signal (yellow), and two contributions of optical noise: mode mismatch on \(Q\) (black), and inefficiency (red). The size of the markers corresponds to the inefficiency (\(\eta=0.9,\,0.75,\,0.5\)) for decreasing size, respectively. (b) The transfer fidelity (\(\mathcal{F}\)) as a function of interaction strength for a coherent state (black), cat state (green) (\(\alpha=2\)) and single-photon Fock state (dark blue). Inset shows \(\mathcal{F}\) as a function of \(\eta\) for the coherent state, at a fixed value of \(C/n_{\text{th}}=10\). (c) Corresponding plots of the Wigner distributions for a coherent state (top row), cat state (middle row) and Fock state (bottom row) at the interaction strengths indicated by the grey lines connected to subplot (b). The black dotted circle in the top right indicates the length scale of the contour of the ground state. The orientation of the plots is indicated by the black arrows in the top right plot. Since the feedback process is linear and all noise sources are Gaussian, it is straightforward to extend our analysis beyond the transfer of vacuum states, to more elaborate states such as Schrödinger cat states. This can be achieved using Wigner functions (Supplementary Material, Section IV [56]). Imperfections introduced by the thermal noise, mode mismatch, and inefficiency tend to 'smear out' quantum features of the transferred optical mode's Wigner function. Mathematically, this is represented by convolving the signal's Wigner function with a 
Gaussian noise kernel \(\mathcal{G}(\mathbf{r})\) (with \(\mathbf{r}=(Q\;\;P)^{T}\)) [57]: \[W_{\text{transferred}}(\mathbf{r})=(W\ast\mathcal{G})\left(\mathbf{r}\right). \tag{18}\] In the regime relevant to this paper, \(\mathcal{G}\) is typically close to symmetric, with a slightly wider spread in the \(Q\) direction due to mode mismatch. The transfer fidelity can then be determined for any pure input state as \[\mathcal{F}=2\pi\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}W(\mathbf{r})W_{\text{transferred}}(\mathbf{r})\text{d}^{2}\mathbf{r}. \tag{19}\] We plot the fidelity for input coherent, Fock, and cat states in Fig. 3 (b) as a function of \(C/n_{\text{th}}\) and assuming that \(\eta=1\). The coherent state fidelity exceeds the classical limit of \(1/2\) at \(C/n_{\text{th}}=0.25\) and the no-cloning bound of \(2/3\) at \(C/n_{\text{th}}=0.50\). The non-Gaussian states also reach fidelities greater than \(0.5\) at similar, experimentally accessible [33; 58; 43] cooperativities. For the chosen experimental parameters, the maximum achievable fidelities are \(0.98\), \(0.93\), and \(0.82\) for coherent, Fock, and cat states, respectively, and are limited by the mode-mismatch noise. The fidelity is robust against measurement inefficiencies, as is visible in the inset of Fig. 3 (b), which shows that the coherent state fidelity can exceed \(1/2\) even with a detection efficiency as low as \(\eta=0.2\). Fig. 3 (c) plots the Wigner distributions of transferred coherent, Fock, and cat states at three different values of \(C/n_{\text{th}}\), showing that the negativity of the Fock and cat states can be transferred, and therefore non-classical properties of the input state preserved. In conclusion, we have identified that feedback can be used to achieve continuous optical-to-mechanical state transfer in the unresolved sideband regime. We predict that state transfer can be achieved with high fidelity whilst preserving non-classical features such as Wigner negativity. The ability to implement continuous state transfer in the unresolved sideband regime significantly widens the class of optomechanical systems that can be used as interfaces in quantum networks. ## Acknowledgements The authors thank Mr S. Khademi and Dr C. Meng for useful discussions. This research was primarily supported by the Australian Research Council Centre of Excellence for Engineered Quantum Systems (EQUS, CE170100009). Support was also provided by the Air Force Office of Scientific Research under award number FA9550-20-1-0391.
2309.01874
Exact Results for the Distribution of the Partial Busy Period for a Multi-Server Queue
Exact explicit results are derived for the distribution of the partial busy period of the M/M/c multi-server queue for a general number of servers. A rudimentary spectral method leads to a representation that is amenable to efficient numerical computation across the entire ergodic region. An alternative algebraic approach yields a representation as a finite sum of Marcum Q-functions depending on the roots of certain polynomials that are explicitly determined for an arbitrary number of servers. Asymptotic forms are derived in the limit of a large number of servers under two scaling regimes, and also for the large-time limit. Connections are made with previous work. The present work is the first to offer tangible exact results for the distribution when the number of servers is greater than two.
Josef Zuk, David Kirszenblat
2023-09-05T00:57:30Z
http://arxiv.org/abs/2309.01874v1
# Exact Results for the Distribution of the Partial Busy Period for a Multi-Server Queue ###### Abstract Exact explicit results are derived for the distribution of the partial busy period of the M/M/\(c\) multi-server queue for a general number of servers. A rudimentary spectral method leads to a representation that is amenable to efficient numerical computation across the entire ergodic region. An alternative algebraic approach yields a representation as a finite sum of Marcum Q-functions depending on the roots of certain polynomials that are explicitly determined for an arbitrary number of servers. Asymptotic forms are derived in the limit of a large number of servers under two scaling regimes, and also for the large-time limit. Connections are made with previous work. The present work is the first to offer tangible exact results for the distribution when the number of servers is greater than two. queueing theory; partial busy period Msc2000 subject classification Primary: 90B22; secondary: 60K25, 60J74 OR/MS subject classification Primary: Queues: Busy period analysis; secondary: Queues: Algorithms _History: Date created: May 27, 2023. Last update: August 21, 2023._ ## 1 Introduction One of the fundamental concepts in queueing theory is the busy period (BP). Other basic quantities include the stationary queue-size and waiting-time distributions. For the archetypal M/M/\(c\) multi-server queueing model, the latter are extensively covered in all introductory textbooks, _e.g._[12, 28]. But they are relevant only in the ergodic region where the total traffic intensity is less than unity. By contrast, the BP persists as a valid concept beyond this region. However, there is little available to be said about its calculation, even in the ergodic case. There are two main characterizations of a BP. The full BP refers to the time interval during which all servers are occupied. The partial BP refers to the time interval during which at least one server is occupied. For a single-server queue, the two definitions trivially coincide. It is the distribution of the partial BP that will be studied here, and use of the term 'busy period' on its own will assume the partial variant. We shall confine our attention to the M/M/\(c\) queue, and denote the number of servers by \(N\). Thus \(c=N\) in our discussion. Within this class of models, the BP distribution for the single-server case is known. For a multi-server queue, the full BP is simply related to that of the single-server case with an adjusted service time [9]. On the other hand, results on the partial BP for a multi-server queue are sparse. For more than two servers, there exists in the published literature neither numerical algorithms nor analytic expressions for the distribution of the partial BP. The present work provides both of these. We first demonstrate a spectral method that serves as the basis for a simple and efficient numerical algorithm that covers at least the entire ergodic region and handles server numbers up to around \(N=80\). We have tested it on the interval of traffic intensities \(0\leq r\leq 1\). We also develop an algebraic method that gives rise to an explicit representation of the BP distribution for any number of servers as a finite sum of Marcum Q-functions, dependent on the roots of a certain family of polynomials whose coefficients we determine. Karlin and McGregor [16] give an explicit integral representation for the case of two servers based on a spectral method involving families of orthogonal polynomials. 
But, while they claim that their method can in principle be extended to a larger number of servers, this must be done on a case by case basis, and the calculations become so unwieldy beyond \(N=2\) that this line of attack has not hitherto been pursued by anyone. By contrast, we show that a much more rudimentary spectral method yields more generally applicable results with greater ease. Arora [3] has also derived the distribution for the two-server problem as a infinite series of modified Bessel functions. Our algebraic approach can be viewed as a generalization of this to an arbitrary number of servers. All other work appears to be centered around computing moments of the BP distribution. Natvig [21] obtains the first and second order moments of the length of the partial BP. Second moments of the partial BP distribution are also derived by Omahen and Marathe [22]. Further progress has been made more recently by Artalejo and Lopez-Herrero [4] who show how to compute arbitrary moments. As commented upon by these authors, prior to their paper in 2001, general results were only known for the one and two server cases, and moments up to second order had appeared for the general problem. We are not aware of anything more recent that alters this state of affairs. ## 2 Motivation The BP distribution for the basic M/M/\(c\) queue is equally applicable to a large family of more complex, albeit memoryless, queueing systems that incorporate multiple classes of arrivals each comprised of multiple priority levels that are processed under a variety of queuing disciplines (_e.g._ preemptive or non-preemptive). A concrete example is furnished by the ambulance ramping problem encountered by hospital emergency departments (ED), where one question that may be studied is to what extent an ambulance offload zone alleviates the ambulance queue that builds up while patients wait to be admitted and prevents ambulances from being dispatched to other calls [18; 15]. Patients are differentiated via arrival source into the ED as either walk-ins and ambulance arrivals, with ambulance arrivals further sub-divided into those who enter the ED from the ambulance queue or the offload zone. Multiple priorities are assigned to patients in each arrival class depending on their acuity levels, and each arrival-class/priority-level combination can exhibit a different arrival rate. The main constraint is that of a common mean service time (_i.e._ treatment time) for all patients. It does not matter to the servers how the patients are arranged prior to entry, or on the mixture currently in service. Hence the BP distribution is just that of the basic model. Rastpour [25] has recently studied partial BP distributions in the context of emergency medical services. Our interest in the partial BP emerged from its relevance to regenerative simulation of the steady-state limit of multi-server, multi-priority, multi-class queues as discussed above. The regenerative simulation technique was introduced by Crane and Iglehart [8]. Empirical distributions for queue lengths and waiting times per class or priority level can be ascertained from steady-state discrete event simulations, and subsequently compared with theoretical predictions. In the ergodic region, sample means for summary statistics are easily estimated. 
However, confidence intervals are also required to judge whether the null hypothesis that the empirical and theoretical values are generated by the same candidate distribution is to be rejected at some given level of statistical significance. In steady state simulation, since one is producing a very long single run of a stochastic (_e.g._ Markov) process, the data underlying the empirical distributions are highly correlated. Difficulties with estimating confidence intervals arising from unwanted correlation can be avoided by recognizing that one is dealing with a regenerative process, and partitioning the time series into consecutive regeneration cycles. These are statistically independent and identically distributed as each cycle effectively restarts the process without memory of the past. The natural regeneration point in a queuing problem is the empty state, which occurs an infinite number of times in an ergodic system. Thus, each regeneration cycle comprises an initial idle (_i.e._ empty) period followed by a partial BP. The distribution of the idle period is simply the inter-arrival distribution, assumed here to be exponential. Therefore, the interesting quantity in the regeneration cycle problem is the partial BP. In carrying out a state-state simulation of a queueing system, it is a straightforward matter to collect data on the lengths of regeneration cycles and BPs. Comparison of these empirical results with theoretical expectations serves as a useful diagnostic tool for judging the soundness of the simulation. It was the paucity of theoretical results for this comparison that led to the present investigation. The remainder of the paper is organized as follows. Our starting point in Section 3 will be the recursive system of equations considered by Artalejo and Lopez-Herrero [4], which can be solved to give the moment generating function (MGF) of the partial BP. We begin by recasting this as a matrix inversion problem for a tridiagonal matrix. The matrix, which has the form of a generator for a birth-death process, is inverted by means of its spectral decomposition. This leads to an explicit representation of the distribution in terms of the eigenstates that is amenable to efficient numerical implementation. This is followed in Section 4 by a algebraic approach where the recurrence relations of Artalejo and Lopez-Herrero [4] are solved directly for the MGF. The method leads to a finite continued fraction representation similar to previous findings in the literature, but which does not appear to be suitable for further analysis. An alternative method yields an explicit form for the MGF in terms of a family of polynomials, whose degree increases with the number of servers, and which depend parametrically on the traffic intensity. A simple two-dimensional system of recurrence relations allows the coefficients of these polynomials to be computed. In Section 5, asymptotic limits as the number of servers \(N\) approaches infinity are derived under two different scaling regimes. These are subsequently used for comparison in validating exact results for large but finite \(N\), and determining how large \(N\) must be for one to judge that asymptotic behaviour has effectively set in. In Section 6, the polynomial-based representation of the MGF is combined with a complex contour integral that implements the inverse Laplace transform, whose structure is determined by poles that arise from zeros of the polynomials. 
The contour integration evaluates to explicit closed-form expressions for the partial BP distribution as a finite sum of Marcum Q-functions. In Section 7, various summary statistics are computed from the posited theoretical distributions and compared, across a large range of model parameters, with known exact results and with the output of Monte Carlo simulation. Excellent agreement is demonstrated over the entire ergodic range of traffic intensity, and for server numbers up to well within the asymptotic regime. Concluding remarks are presented in Section 8. ## 3 Spectral Method We consider the M/M/\(c\) queue with \(c=N\) servers, Poisson arrival rate \(\lambda\) and mean treatment (or service) time \(1/\mu\). Following [4], the quantities \(\phi_{n}(s)\), \(n=1,2,\ldots\), where \(n\) labels the number of busy servers, are defined to be MGFs of the first passage times from the state \(n=1,2,\ldots,N\) to the empty state \(0\). Also, let \(T_{\rm bp}\) be the random variable (RV) describing the partial BP, in which case its MGF is given by \(\phi_{1}(s)\equiv\langle e^{-sT_{\rm bp}}\rangle_{T_{\rm bp}}\). Then, if we set \[\begin{array}{rcl}\psi_{\pm}(s)&\equiv&\frac{1}{2\lambda} \left[\lambda+N\mu+s\pm\sqrt{(\lambda+N\mu+s)^{2}-4N\mu\lambda}\right]\\ &\equiv&\frac{1}{2r}\left[r+1+s/(N\mu)\pm\sqrt{(r+1+s/(N\mu))^{2}- 4r}\right]\;,\end{array} \tag{1}\] where \(r\equiv\lambda/(N\mu)\) is the total traffic intensity, the MGF for the single-server problem (\(N=1\)) reads \(\phi_{1}(s)=\psi_{-}(s)\). We confine our attention to the ergodic region \(0\leq r<1\), but we shall also study the boundary value \(r=1\) as a special case. We also introduce \(\mu_{n}\equiv n\mu\), for \(n=1,2,\ldots,N\), and without loss of generality, we may set \(\mu=1/N\), so that \(r=\lambda\) and \[\psi_{\pm}(s)=\frac{1}{2r}\left[r+1+s\pm\sqrt{(r+1+s)^{2}-4r} \right]\,. \tag{2}\] Finally, we write \(\mu_{n}\equiv n\mu_{1}\) with \(\mu_{1}=1/N\), so that \(0\leq\mu_{n}\leq 1\) for all \(n\). The linear recurrence relations of Artalejo and Lopez-Herrero [4] may be cast as follows: For \(n=1,2,\ldots,N-1\), \[\mu_{n}\phi_{n-1}-(r+\mu_{n}+s)\phi_{n}+r\phi_{n+1}=0\,, \tag{3}\] and \(\phi_{N}=\psi_{-}(s)\phi_{N-1}\), with \(\phi_{0}\equiv 1\). As the original paper does not make the derivation of this result explicit, we supply the details in the Appendix. These equations have an \((N-1)\times(N-1)\) tridiagonal matrix structure that may be expressed as \[\begin{bmatrix}r+\mu_{1}+s&-r&0&\cdots&\cdots\\ -\mu_{2}&r+\mu_{2}+s&-r&\cdots&\cdots\\ 0&-\mu_{3}&r+\mu_{3}+s&-r&\cdots\\ \vdots&&\ddots&\ddots&\ddots\\ 0&\cdots&\cdots&-\mu_{N-2}&r+\mu_{N-2}+s&-r\\ 0&\cdots&\cdots&0&-\mu_{N-1}&r+\mu_{N-1}+s\end{bmatrix}\begin{bmatrix}\phi_{1} \\ \phi_{2}\\ \vdots\\ \phi_{N-2}\\ \phi_{N-1}\end{bmatrix}=\begin{bmatrix}\mu_{1}\\ 0\\ \vdots\\ 0\\ r\phi_{N}\end{bmatrix}\,. \tag{4}\] In Dirac notation [31], this matrix equation reads \[(s\mathbb{I}-M)|\phi\rangle=|w\rangle\,, \tag{5}\] where the matrix \(M\) is the birth-death process generator \[M=\begin{bmatrix}-(r+\mu_{1})&r&0&\cdots&\cdots&0\\ \mu_{2}&-(r+\mu_{2})&r&\cdots&\cdots&0\\ 0&\mu_{3}&-(r+\mu_{3})&r&\cdots&0\\ \vdots&&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&\cdots&\mu_{N-2}&-(r+\mu_{N-2})&r\\ 0&\cdots&\cdots&0&\mu_{N-1}&-(r+\mu_{N-1})\end{bmatrix}\,. \tag{6}\] It may be compared with the matrices introduced in (1.3) and (4.1) of Karlin and McGregor [16]. 
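Before introducing the spectral machinery, the linear system (4) can be checked directly. The sketch below (Python with NumPy; an illustration under the \(\mu=1/N\) normalization above, not the paper's own code) folds the boundary relation \(\phi_{N}=\psi_{-}(s)\phi_{N-1}\) into the last diagonal entry and solves for \(\phi_{1}(s)\); the MGF should evaluate to unity at \(s=0\).

```python
import numpy as np

def phi1_direct(s, r, N):
    """Solve the tridiagonal system of Eq. (4) for phi_1(s), with the boundary
    relation phi_N = psi_-(s) phi_{N-1} folded into the last diagonal entry."""
    mu = np.arange(1, N) / N                            # mu_n = n/N for n = 1, ..., N-1
    b = r + 1 + s
    psi_minus = (b - np.sqrt(b**2 - 4 * r)) / (2 * r)   # Eq. (2)
    A = np.zeros((N - 1, N - 1))
    for n in range(N - 1):
        A[n, n] = r + mu[n] + s                         # diagonal: r + mu_{n+1} + s
        if n + 1 < N - 1:
            A[n, n + 1] = -r                            # super-diagonal
        if n - 1 >= 0:
            A[n, n - 1] = -mu[n]                        # sub-diagonal: -mu_{n+1}
    A[-1, -1] -= r * psi_minus                          # moves r*phi_N = r*psi_- * phi_{N-1} to the LHS
    rhs = np.zeros(N - 1)
    rhs[0] = mu[0]                                      # mu_1
    return np.linalg.solve(A, rhs)[0]

for r in (0.2, 0.5, 0.9):
    print(r, phi1_direct(0.0, r, N=30))                 # each value should be close to 1
```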
Its trace and determinant, useful for checking the numerical solution of the eigenvalue problem, are \[\operatorname{tr}(M)=-(N-1)\left(r+\tfrac{1}{2}\right)\,,\quad\det(M)=(-r)^{N- 1}\sum_{k=0}^{N-1}\frac{k!}{(Nr)^{k}}\,. \tag{7}\] If we introduce the standard orthonormal Cartesian basis on \(\mathbb{R}^{N-1}\) as \(|e_{i}\rangle\), \(i=1,2,\ldots,N-1\), then we have that \(\phi_{n}\equiv\langle e_{n}|\phi\rangle\). Also, \[|w\rangle\equiv\mu_{1}|e_{1}\rangle+r\phi_{N}|e_{N-1}\rangle\,. \tag{8}\] Since \(M\) is non-symmetric, it has distinct right and left eigenvectors so that its eigenvalue problem is written as \[M|v_{\mathrm{R}}^{(k)}\rangle=\xi_{k}|v_{\mathrm{R}}^{(k)}\rangle\,,\quad \langle v_{\mathrm{L}}^{(k)}|M=\xi_{k}\langle v_{\mathrm{L}}^{(k)}|\,. \tag{9}\] With the eigenvectors required to satisfy the orthonormality conditions \(\langle v_{\mathrm{L}}^{(k)}|v_{\mathrm{R}}^{(\ell)}\rangle=\langle v_{ \mathrm{R}}^{(k)}|v_{\mathrm{L}}^{(\ell)}\rangle=\delta_{k\ell}\), the spectral decomposition of \(M\) reads \[M=\sum_{k=1}^{N-1}|v_{\mathrm{R}}^{(k)}\rangle\xi_{k}\langle v_{\mathrm{L}}^{ (k)}|\,. \tag{10}\] To determine \(\phi_{1}(s)\), we invert (5) to obtain \[|\phi\rangle=\left(s\mathbb{I}-M\right)^{-1}|w\rangle\,, \tag{11}\] and observe that it suffices to consider just the first and last elements of this equation, namely \[\begin{array}{rcl}\langle e_{1}|\phi\rangle&=&\mu_{1}\sum_{k=1}^{N-1}\frac{1}{s- \xi_{k}}\langle e_{1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{(k)}|e_{1} \rangle+r\phi_{N}\sum_{k=1}^{N-1}\frac{1}{s-\xi_{k}}\langle e_{1}|v_{\rm R}^{( k)}\rangle\langle v_{\rm L}^{(k)}|e_{N-1}\rangle\,,\\ \langle e_{N-1}|\phi\rangle&=&\mu_{1}\sum_{k=1}^{N-1}\frac{1}{s- \xi_{k}}\langle e_{N-1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{(k)}|e_{1} \rangle+r\phi_{N}\sum_{k=1}^{N-1}\frac{1}{s-\xi_{k}}\langle e_{N-1}|v_{\rm R}^ {(k)}\rangle\langle v_{\rm L}^{(k)}|e_{N-1}\rangle\,.\end{array} \tag{12}\] The matrix \(M\) may be symmetrized by means of a diagonal similarity transformation \(D={\rm diag}[d_{1},d_{2},\ldots,d_{N-1}]\) where \(d_{1}=1\) and \(d_{k}=\sqrt{\mu_{k}/r}\cdot d_{k-1}\), for \(k=2,\ldots,N-1\), according to \(S=D^{-1}MD\). Its spectral decomposition reads \[S=\sum_{k=1}^{N-1}|v^{(k)}\rangle\xi_{k}\langle v^{(k)}|\,, \tag{13}\] and the eigenvectors (assumed orthonormal) are related by \[|v_{\rm R}^{(k)}\rangle=D|v^{(k)}\rangle\,,\quad|v_{\rm L}^{(k)}\rangle=D^{-1 }|v^{(k)}\rangle\,. \tag{14}\] Let us define \[\begin{array}{rcl}v_{11}^{(k)}&\equiv&\langle e_{1}|v^{(k)}\rangle\langle v ^{(k)}|e_{1}\rangle\,,\\ v_{22}^{(k)}&\equiv&\langle e_{N-1}|v^{(k)}\rangle\langle v^{(k)}|e_{N-1} \rangle\,,\\ v_{12}^{(k)}&\equiv&\langle e_{1}|v^{(k)}\rangle\langle v^{(k)}|e_{N-1} \rangle\,,\\ &=&\langle e_{N-1}|v^{(k)}\rangle\langle v^{(k)}|e_{1}\rangle\,.\end{array} \tag{15}\] It follows that \[\begin{array}{rcl}\langle e_{1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{( k)}|e_{1}\rangle&=&v_{11}^{(k)}\,,\\ \langle e_{N-1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{(k)}|e_{N-1}\rangle&=& v_{22}^{(k)}\,,\\ \langle e_{1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{(k)}|e_{N-1}\rangle&=& \Lambda^{1/2}v_{12}^{(k)}\,,\\ \langle e_{N-1}|v_{\rm R}^{(k)}\rangle\langle v_{\rm L}^{(k)}|e_{1}\rangle&=& \Lambda^{-1/2}v_{12}^{(k)}\,,\end{array} \tag{16}\] where \[\Lambda\equiv(d_{1}/d_{N-1})^{2}=\prod_{k=2}^{N-1}(r/\mu_{k})\,. 
\tag{17}\] It is useful to note that the following consequence of the completeness relation for the eigenvector basis \[\sum_{k=1}^{N-1}|v^{(k)}\rangle\langle v^{(k)}|=\mathbb{I}\quad\Rightarrow \quad\sum_{k=1}^{N-1}v_{mn}^{(k)}=\delta_{mn}\,, \tag{18}\] for \(m,n=1,2\), where \(\delta_{mn}\) is the Kronecker delta. We can now express (12) as \[\begin{array}{rcl}\phi_{1}&=&\mu_{1}G_{11}+r\phi_{N}\Lambda^{1/2}G_{12}\,,\\ \phi_{N-1}&=&\mu_{1}\Lambda^{-1/2}G_{12}+r\phi_{N}G_{22}\,,\end{array} \tag{19}\] where we have introduced the resolvent functions \[G_{11}(s)\equiv\sum_{k=1}^{N-1}\frac{v_{11}^{(k)}}{s-\xi_{k}}\,,\quad G_{12}( s)\equiv\sum_{k=1}^{N-1}\frac{v_{12}^{(k)}}{s-\xi_{k}}\,,\quad G_{22}(s)\equiv \sum_{k=1}^{N-1}\frac{v_{22}^{(k)}}{s-\xi_{k}}\,. \tag{20}\] We can eliminate \(\phi_{N}\) by applying the relation \(\phi_{N}=\psi_{-}(s)\phi_{N-1}\) to the second equation in (19), which yields \[\phi_{N}=\frac{\mu_{1}}{r\Lambda^{1/2}}\cdot\frac{G_{12}}{\psi_{+}-G_{22}}\,, \tag{21}\] upon noting that \(\psi_{+}(s)\psi_{-}(s)=1/r\). This expression is then inserted into the first equation in (19) to provide the explicit result for the MGF \[\phi_{1}(s)/\mu_{1}=G_{11}(s)+\frac{G_{12}^{2}(s)}{\psi_{+}(s)-G_{22}(s)}\,. \tag{22}\] It should be noted that, due to cancellations, there are actually no singularities at the eigenvalues \(s=\xi_{k}\). The singularity structure in the complex \(s\)-plane of \(\phi_{1}(s)\) comprises a cut and potentially a finite number of poles. The cut is due to the presence of \(\psi_{+}(s)\), and lies along the negative real axis corresponding to the finite interval for which the discriminant \(\Delta(s)\equiv b^{2}(s)-4r\) is negative, and where \(b(s)\equiv r+1+s\). Thus, the cut spans \(x_{-}\leq-s\leq x_{+}\), where the cut limits are given by \(x_{\pm}=(1\pm\sqrt{r})^{2}\). The poles arise from the vanishing of the denominator in (22), and only lie on the negative real axis between the origin and the near cut boundary, as discussed in [16]. This is illustrated in Figure 1 for the case \(r=0.5\), \(N=30\), the details of which will be explained later on. The BP distribution is recovered from the MGF \(\phi_{1}(s)\) by an inverse Laplace transform implemented as a Bromwich contour integral in the complex \(s\)-plane. The contour may be deformed onto the negative real axis, giving rise to discrete residue contributions from any relevant poles, plus a real-valued integral on the cut. An explicit representation of this integral has been given for the two-server case in equation (6.25) of [16]. It is also of the same type as the integral arising in the waiting-time distribution for priority queues, as can be seen in [10]. One should observe that, in the present formalism, the two-server problem is trivial as the matrix \(M\) is one dimensional, so that there is no eigenvalue problem to solve. To evaluate the contribution to the BP distribution from the cut, we introduce the product function \[\Pi(s)\equiv\prod_{k=1}^{N-1}\left(s-\xi_{k}\right), \tag{23}\] whose square will multiply both the numerator and denominator in (22) in order to eliminate spurious singularities at the eigenvalues. 
This results in the appearance of the non-singular functions \[\begin{array}{ll}H_{12}(s)\equiv G_{12}(s)\Pi(s)&=\sum_{k=1}^{N-1}v_{12}^{(k)}\prod_{\ell=1\atop\ell\neq k}^{N-1}(s-\xi_{\ell})\,,\\ H_{22}(s)\equiv G_{22}(s)\Pi(s)&=\sum_{k=1}^{N-1}v_{22}^{(k)}\prod_{\ell=1\atop\ell\neq k}^{N-1}(s-\xi_{\ell})\,.\end{array} \tag{24}\] Next, letting \(D^{\pm}(s)\equiv\psi_{\pm}(s)-G_{22}(s)\) so that \(D^{+}(s)\) is the denominator in (22), we see that \[D^{+}(s)D^{-}(s)=\left[1-b(s)G_{22}(s)+rG_{22}^{2}(s)\right]/r\,. \tag{25}\] By appealing to the identity \(1/D^{+}(s)=D^{-}(s)/[D^{+}(s)D^{-}(s)]\), we are able to construct the cut contribution as \[\phi_{1}(s)/\mu_{1}\mathop{\longrightarrow}_{\rm cut}(\pm i)R_{\rm cut}(s)\sqrt{|\Delta(s)|}\,, \tag{26}\] where the cut function \(R_{\rm cut}(s)\) is given by \[R_{\rm cut}(s)\equiv\frac{H_{12}^{2}(s)}{\Pi^{2}(s)-b(s)\Pi(s)H_{22}(s)+rH_{22}^{2}(s)}\,. \tag{27}\] This definition takes account of the fact that the cut is traversed in two opposite directions. For \(N=1\), the cut function is just the constant \(R_{\rm cut}(s)=1/r\) while, for \(N=2\), we obtain \[R_{\rm cut}(s)=\frac{2}{r-1/2-s}\,. \tag{28}\] Now, since \(\Pi(\xi_{k})=0\) for all \(k=1,2,\ldots,N-1\), it follows that \[R_{\rm cut}(\xi_{k})=\frac{1}{r}\cdot\left[\frac{H_{12}(\xi_{k})}{H_{22}(\xi_{k})}\right]^{2}\,. \tag{29}\] But we also have that \[H_{12}(\xi_{k})=v_{12}^{(k)}\prod_{\ell=1\atop\ell\neq k}^{N-1}(\xi_{k}-\xi_{\ell})\,,\quad H_{22}(\xi_{k})=v_{22}^{(k)}\prod_{\ell=1\atop\ell\neq k}^{N-1}(\xi_{k}-\xi_{\ell})\,. \tag{30}\] Thus, \[R_{\rm cut}(\xi_{k})=(v_{12}^{(k)}/v_{22}^{(k)})^{2}/r\,, \tag{31}\] which provides a useful numerical check. By considering the point \(s=0\), we obtain the identities \[\begin{array}{rcl}1&=&\mu_{1}G_{11}(0)+rG_{12}(0)\sqrt{\Lambda}\,,\\ 1&=&\mu_{1}G_{11}(0)+\mu_{1}G_{12}^{2}(0)/[1/r-G_{22}(0)]\,,\end{array} \tag{32}\] which imply that \[\mu_{1}G_{12}(0)/\sqrt{\Lambda}+rG_{22}(0)=1\,. \tag{33}\] The eigenvalue problem for the symmetric matrix \(S\) is solved numerically using the LAPACK routine dstevd, which implements a divide-and-conquer method [2]. Identical results are produced when using Matlab's eig function, which is probably an indication that eig invokes the same algorithm when applied to this problem. On making the change of integration variable \(x=-s\) to obtain positive values for the coordinates of the cut, we can express the cut contribution to the BP probability density function (PDF) as \[P_{\rm cut}(t)=\frac{\mu_{1}}{2\pi}\int_{(1-\sqrt{r})^{2}}^{(1+\sqrt{r})^{2}}dx\,e^{-xt}R_{\rm cut}(-x)\sqrt{|\Delta(-x)|}\,. \tag{34}\] For the survival function (SF), also known as the complementary cumulative distribution function, \[\bar{F}_{\rm cut}(t)=\frac{\mu_{1}}{2\pi}\int_{(1-\sqrt{r})^{2}}^{(1+\sqrt{r})^{2}}\frac{dx}{x}\,e^{-xt}R_{\rm cut}(-x)\sqrt{|\Delta(-x)|}\,. \tag{35}\] Let \(\Delta x\equiv x_{+}-x_{-}=4\sqrt{r}\) denote the length of the cut. Then a further change of variable, such that \(x=x_{-}+\Delta x\cdot u\), puts the integral over the unit interval \[\bar{F}_{\rm cut}(t)=\frac{2\mu_{1}\sqrt{r}}{\pi}e^{-4a\sqrt{r}t}\int_{0}^{1}\frac{du}{u+a}\,e^{-4\sqrt{r}ut}R_{\rm cut}(-4\sqrt{r}(u+a))\sqrt{u(1-u)}\,, \tag{36}\] where \[a\equiv x_{-}/\Delta x=\tfrac{1}{4}\left(\sqrt{r}+1/\sqrt{r}\right)-\tfrac{1}{2}\,. 
\tag{37}\] A further change of variable \(v=2u-1\) casts the integral into a form that is amenable to efficient Gauss-Chebyshev quadrature: \[\bar{F}_{\rm cut}(t)=\frac{\mu_{1}\sqrt{r}}{\pi}e^{-(r+1)t}\int_{-1}^{+1}dv\,\sqrt{1-v^{2}}\,e^{-2\sqrt{r}tv}\cdot\frac{R_{\rm cut}(-2\sqrt{r}(v+\alpha))}{v+\alpha}\,, \tag{38}\] with \(\alpha\equiv 2a+1=(\sqrt{r}+1/\sqrt{r})/2\). It is known that Gauss-Chebyshev quadrature of the second kind is equivalent to the trapezoidal rule on the unit circle [6]. Thus, if we set \(v=\cos(\pi\tau)\), so that \(0\leq\tau\leq 1\), and subsequently uniformly discretize the unit interval into \(L\) sub-intervals according to \(\tau_{n}=n/L\), then we obtain the \(L\)-point quadrature rule for the SF given by \[\bar{F}_{\rm cut}(t)\simeq\sum_{n=1}^{L}w_{n}e^{-x_{n}t}\,, \tag{39}\] with nodes and weights, respectively, \[x_{n}\equiv 1+r+2\sqrt{r}\cos(\pi\tau_{n})\,,\quad w_{n}\equiv\frac{\mu_{1}\sqrt{r}}{L}\cdot\frac{\sin^{2}(\pi\tau_{n})}{\alpha+\cos(\pi\tau_{n})}\cdot R_{\rm cut}(-x_{n})\,. \tag{40}\] Quadrature for the PDF is given by \[P_{\rm cut}(t)\simeq\sum_{n=1}^{L}w_{n}x_{n}e^{-x_{n}t}\,. \tag{41}\] Equation (38) may also be expressed in a form suitable for Gauss-Chebyshev quadrature of the first kind, namely, \[\bar{F}_{\rm cut}(t)=\frac{\mu_{1}\sqrt{r}}{\pi}e^{-(r+1)t}\int_{-1}^{+1}\frac{dv}{\sqrt{1-v^{2}}}\,e^{-2\sqrt{r}tv}\cdot\frac{(1-v)R_{\rm cut}(-2\sqrt{r}(v+\alpha))}{1+(\alpha-1)/(1+v)}\,. \tag{42}\] It is known that Gauss-Chebyshev quadrature of the first kind is equivalent to the mid-point rule on the unit circle, which gives rise to the same quadrature scheme as described for the trapezoidal rule, but where the nodes are generated by \(\tau_{n}=(n-1/2)/L\) for \(n=1,2,\ldots,L\), so that \(0<\tau_{n}<1\), which avoids any potential singularities. We find that the most numerically robust scheme over the entire range of traffic intensities \(r\) corresponds to the mid-point rule with the quadrature weights expressed in the form \[w_{n}=\frac{2\mu_{1}\sqrt{r}}{L}\cdot R_{\rm cut}(-x_{n})\cdot\frac{\sin^{2}(\pi\tau_{n}/2)}{1+(\alpha-1)/[2\cos^{2}(\pi\tau_{n}/2)]}\,. \tag{43}\] The trapezoidal rule is generally a poorly performing quadrature scheme. However, it has been shown to be exponentially convergent for some specific classes of integrand, such as those comprising smooth periodic functions in which the integration extends over a period [26; 29]. It should be noted that the integrand above is unit-periodic in \(\tau\). Because of this, and the fact that it is equivalent to Gauss-Chebyshev quadrature, the trapezoidal rule is an efficient way to compute the BP distribution. This is also true for the mid-point rule. Alternatively, in order to attain a guaranteed level of accuracy, we may apply an iterative trapezoidal or mid-point rule to the integral \[\bar{F}_{\rm cut}(t)=\int_{0}^{1}d\tau\,w(\tau)e^{-x(\tau)t}\,, \tag{44}\] with \[w(\tau)\equiv\mu_{1}\sqrt{r}\frac{\sin^{2}(\pi\tau)}{\alpha+\cos(\pi\tau)}\cdot R_{\rm cut}(-x(\tau))\,, \tag{45}\] and \[x(\tau)=1+r+2\sqrt{r}\cos(\pi\tau)\,. \tag{46}\] Gaussian quadrature schemes do not generally allow nodes and weights from a previous iteration to be re-used when sequentially moving to a finer grid in order to improve accuracy on the way to satisfying some predetermined convergence criterion. However, the trapezoidal rule can be iterated by doubling the number of grid points at each step. 
Only half the grid points need to be computed, as the grid points from the previous iteration supply the other half [24]. Similar considerations apply to the iterative mid-point rule, but the grid size needs to be tripled at each step, with a third of the total number of points coming from the previous iteration. The large-\(t\) asymptotics are derived from (36) by \[\begin{array}{c}\bar{F}_{\rm cut}(t)\mathop{\sim}\limits_{t\to\infty}\frac{2\mu_{1}\sqrt{r}}{\pi a}R_{\rm cut}(-4a\sqrt{r})e^{-4a\sqrt{r}t}\int_{0}^{\infty}du\,\sqrt{u}e^{-4\sqrt{r}ut}\\ =\frac{\mu_{1}}{8\sqrt{\pi}ar^{1/4}}R_{\rm cut}(-4a\sqrt{r})t^{-3/2}e^{-4a\sqrt{r}t}\,.\end{array} \tag{47}\] We shall now look at the pole contribution to the BP distribution. Poles occur in (22) when \(\psi_{+}(s)-G_{22}(s)=0\). Let us suppose that these are found at \(s=\zeta_{\ell}\), \(\ell=1,2,\ldots\). Since the first term \(G_{11}(s)\) will not contribute to the residue, the pole components of the MGF are given by the residues \[\phi_{1}(s)/\mu_{1}\mathop{\longrightarrow}\limits_{\rm pole}\frac{G_{12}^{2}(\zeta_{\ell})}{\psi_{+}^{\prime}(\zeta_{\ell})-G_{22}^{\prime}(\zeta_{\ell})}\,, \tag{48}\] where the prime denotes differentiation. Since, for general \(s\), \(\psi^{\prime}_{+}(s)=\psi_{+}(s)/\sqrt{\Delta(s)}\), we have at a pole \(\psi^{\prime}_{+}(\zeta_{\ell})=G_{22}(\zeta_{\ell})/\sqrt{\Delta(\zeta_{\ell})}\). Then, on setting \[J_{22}(s)\equiv\Pi^{2}(s)G^{\prime}_{22}(s)=\sum_{k=1}^{N-1}v_{22}^{(k)}\prod_{\begin{subarray}{c}\ell=1\\ \ell\neq k\end{subarray}}^{N-1}(s-\xi_{\ell})^{2}\:, \tag{49}\] we obtain the residue at each pole explicitly in terms of \(H_{12}\), \(H_{22}\) and \(J_{22}\). Together, the pole and cut contributions furnish a representation of the BP distribution as a finite mixture of exponentials, specified by node/weight pairs, where each node may be given either as a decay rate \(x_{\ell}\) or as the corresponding timescale \(u_{\ell}=1/x_{\ell}\), with weights \(w_{\ell}\), \(\ell=1,2,\ldots,L\): \(P(x)=\sum_{\ell=1}^{L}(w_{\ell}/u_{\ell})\,e^{-x/u_{\ell}}\) for the PDF and \(\bar{F}_{X}(x)=\sum_{\ell=1}^{L}w_{\ell}\,e^{-x/u_{\ell}}\) for the SF. Then, we have for the mean \[\langle X\rangle_{X}\equiv\int_{0}^{\infty}dx\,xP(x)=\sum_{\ell=1}^{L}u_{\ell}w_{\ell}\,. \tag{58}\] For the log-mean, \[\langle\ln X\rangle_{X}\equiv\int_{0}^{\infty}dx\,P(x)\ln x=\sum_{\ell=1}^{L}w_{\ell}\ln u_{\ell}-\gamma_{\rm e}\,, \tag{59}\] where \(\gamma_{\rm e}\equiv-\psi(1)\simeq 0.5772\) denotes Euler's constant. For the differential entropy, \[\begin{split}\langle-\ln P(X)\rangle_{X}\equiv&-\int_{0}^{\infty}dx\,P(x)\ln P(x)\\ =&-\int_{0}^{\infty}dx\,e^{-x}\sum_{\ell=1}^{L}w_{\ell}\ln\left[\sum_{k=1}^{L}\left(w_{k}/u_{k}\right)e^{-x(u_{\ell}/u_{k})}\right]\,,\end{split} \tag{60}\] which is easily computed via Gauss-Laguerre quadrature. In the present application to the BP, the nodes should account for the discrete poles as well as the cut. 
Thus, \(L=L_{\rm cut}+L_{\rm pol}\). The discrete poles contribute nodes \(x_{\ell}=-\zeta_{\ell}\) with weights \(w_{\ell}=-\mu_{1}J(\zeta_{\ell})/\zeta_{\ell}\). From the requirement that \(\bar{F}_{X}(0)=1\), we have that \(\sum_{\ell=1}^{L}w_{\ell}=1\). Thus, the node/weight pairs \((u_{\ell},w_{\ell})\) may be thought of as describing the empirical SF for some RV \(0\leq U<\infty\) according to \[\bar{F}_{U}(u)=\sum_{\ell=1}^{L}w_{\ell}I(u_{\ell}>u)\,, \tag{61}\] which we may refer to as the texture distribution by analogy with compound Gaussian clutter distributions in radar detection theory [27]. This compound representation implies that the BP RV has the product form \(T_{\rm bp}=UV\), where here \(V\) is a unit-mean exponentially distributed RV, which provides a very convenient scheme for generating random numbers drawn from the BP distribution. Figures 2 and 3 display the logarithm of the BP SF for the case of \(N=30\) servers and a variety of traffic intensities. In Figure 3, the horizontal axis records the (dimensionless) time as directly given by \(\bar{F}(t)=\bar{F}_{\rm pol}(t)+\bar{F}_{\rm cut}(t)\), in which case one is observing the short-time behaviour of the distribution. Note that the MGF for the problem formulated in terms of physical units can be recovered from the dimensionless problem via \(\phi_{\rm phys}(s)=\phi_{1}(s/s_{\rm scl})\) with frequency scale \(s_{\rm scl}=N\mu\). For the SF, this implies \(\bar{F}(t)=\bar{F}_{\rm phys}(t_{\rm scl}\cdot t)\) with timescale \(t_{\rm scl}=1/s_{\rm scl}=1/(N\mu)\). In Figure 2, the horizontal axis measures time in units of the mean BP. Thus, the function \(\bar{F}(m_{\rm bp}t)\) is being plotted, and this illustrates the quite distinct long-time behaviour of the distribution. Since the regeneration cycle time is the sum of a BP and an idle period, and these are mutually independent, it follows that the regeneration cycle distribution is the convolution of the BP distribution and the inter-arrival distribution. Therefore, the SF for the regeneration cycle time can formally be expressed as \[\bar{F}_{\rm reg}(t)\simeq\sum_{\ell=1}^{L}w_{\ell}\cdot\frac{re^{-x_{\ell}t}-x_{\ell}e^{-rt}}{r-x_{\ell}}\,. \tag{62}\] This expression should be numerically unproblematic for \(r<1/4\). Above this value, the inter-arrival pole at \(s=-r\) lies within the cut \(\{s:(1-\sqrt{r})^{2}\leq-s\leq(1+\sqrt{r})^{2}\}\). It should be noted, however, that for appreciable values of the traffic intensity, the idle period gives a negligible contribution to the regeneration cycle. Figure 2. _Note._ BP survival function for various traffic intensities (\(r\)) and \(N_{\rm srv}=30\) servers, showing long-time behaviour. The negative base-10 logarithm of the survival function is plotted. Figure 3. _Note._ BP survival function for various traffic intensities (\(r\)) and \(N_{\rm srv}=30\) servers, showing short-time behaviour. The negative base-10 logarithm of the survival function is plotted. ### Boundary Case: \(r\to 0\) Numerical instabilities arise for small values of the total traffic intensity \(r\). However, in the \(r\to 0^{+}\) limit, the \(v_{12}\) become negligible, in which case we have \[\phi_{1}(s)/\mu_{1}\simeq\sum_{k=1}^{N-1}\frac{|v_{1}^{(k)}|^{2}}{s-\xi_{k}}=\sum_{k=1}^{N-1}\frac{v_{11}^{(k)}}{s-\xi_{k}}\,. 
\tag{63}\] It follows, for the BP PDF and SD, respectively, that \[P(t)\simeq\mu_{1}\sum_{k=1}^{N-1}v_{11}^{(k)}e^{\xi_{k}t}\,,\quad\bar{F}(t) \simeq\mu_{1}\sum_{k=1}^{N-1}\frac{v_{11}^{(k)}}{|\xi_{k}|}e^{\xi_{k}t}\,, \tag{64}\] recalling that \(\xi_{k}<0\). A criterion can be developed to signal when to use this approximation. We have found that the test \(\max\bigl{\{}|v_{12}^{(k)}|:k=1,2,\ldots,N-1\bigr{\}}<10^{-4}\) works adequately. ### Boundary Case: \(r\to 1^{-}\) Poles \(s=\zeta_{\ell}\) occur when \(\psi_{+}(s)-G_{22}(s)=0\). When the smallest pole is very close to zero, numerical solution of this equation can present difficulties. In this case, we can linearize about \(s=0\) to obtain \[\begin{split}\psi_{+}(s)&=\psi_{+}(0)+s\psi_{+}^{ \prime}(0)+O(s^{2})\\ &=\frac{1}{r}+\frac{1}{r(1-r)}s+O(s^{2})\,,\\ G_{22}(s)&=G_{22}(0)+sG_{22}^{\prime}(0)+O(s^{2})\,, \end{split} \tag{65}\] which leads to the approximate solution \[s\simeq-\frac{1-rG_{22}(0)}{1/(1-r)-rG_{22}^{\prime}(0)}=-\frac{\mu_{1}}{ \sqrt{\Lambda}}\cdot\frac{G_{12}(0)}{1/(1-r)-rG_{22}^{\prime}(0)}\,. \tag{66}\] In order to catch bad numerical behaviour due to arithmetic underflow, when the condition \(|1/r-G_{22}(0)|<10^{-9}\) is detected, the sign check \[\operatorname{sgn}(G_{12}(0))=\operatorname{sgn}(1/r-G_{22}(0)) \tag{67}\] is performed. If this check succeeds, then \(\zeta_{1}\) is replaced by the value obtained from the latter of (66). On the other hand, failure of the test indicates that the lowest pole has missed altogether, in which case \(\zeta_{1}\) obtained from (66) is appended to the collection of poles \(\zeta_{k}\). A variety of diagnostic tests is available to track the validity of the numerical computations. For example, it can be checked (i) whether the expected number of \(\zeta\)-poles is being generated, (ii) whether equation (56) for the mean holds to some desired accuracy, (iii) whether the relationship (33) holds to some desired accuracy and, among others, (iv) whether the MGF as given by (22) evaluates to unity at the origin. ### Boundary Case: \(r=1\) At the boundary of the ergodic region, where \(r=1\), there is no pole contribution. Hence, the BP SF comes entirely from the cut, and reads \[\begin{split}\bar{F}(t)&=\frac{2\mu_{1}}{\pi}\int_{ 0}^{1}\frac{du}{u}\,e^{-4tu}R_{\text{cut}}(-4u)\cdot\sqrt{u(1-u)}\\ &\underset{t\to\infty}{\sim}\frac{\mu_{1}}{\pi\sqrt{t}}R_{\text{ cut}}(0)\int_{0}^{4t}\frac{du}{\sqrt{u}}\,e^{-u}\\ &\underset{t\to\infty}{\sim}\frac{\mu_{1}}{\pi\sqrt{t}}R_{\text{ cut}}(0)\int_{0}^{\infty}\frac{du}{\sqrt{u}}\,e^{-u}\\ &=\mu_{1}R_{\text{cut}}(0)\cdot\frac{1}{\sqrt{\pi t}}\,,\end{split} \tag{68}\] where we may recall that, for \(r=1\), \(\mu_{1}R_{\rm cut}(0)=1/\prod_{n=1}^{N-1}\mu_{n}\). We see that the BP distribution becomes power-law tailed in the non-ergodic region. We can write \[\bar{F}(t)\mathop{\sim}_{t\to\infty}\frac{1}{1+\sqrt{\pi t}/[\mu_{1}R_{\rm cut }(0)]} \tag{69}\] to obtain a form that is unity at the origin, rather than diverging. One can also adopt the Lomax form \[\bar{F}(t)\mathop{\sim}_{t\to\infty}\frac{1}{\left(1+\frac{\pi}{[\mu_{1}R_{\rm cut }(0)]^{2}}t\right)^{1/2}}\,. \tag{70}\] For \(N=1,2\), the Lomax form gives a close fit across the whole domain. ## 4 Algebraic Approach In this section, we exploit the tridiagonal structure of the problem to derive a continued-fraction representation of the BP MGF that can be used to yield explicit expressions on a case-by-case basis for small numbers of servers. 
We then proceed to analyse the underlying tridiagonal matrix in a different way that allows explicit representation of the cut function for any number of servers in terms of a family of polynomials (to which we refer as the cut polynomials). Finally, we derive an easily solvable two-dimensional system of recurrence relations that enables determination of the coefficients of the cut polynomials which, in turn, allows for the calculation of their roots. As demonstrated in a subsequent section, these roots provide complete information for the construction of closed-form expressions for the BP distribution functions. ### Continued Fraction Let \(M_{0}\equiv\mathop{\rm diag}[s,s,\cdots,s-r\psi_{-}(s)]-M\), where \(M\) is the full tridiagonal birth-death process generator matrix for the BP problem as given in (6), and let \(M_{1}\) be the matrix obtained from \(M_{0}\) by omitting the first row and column. We then define the general matrix \(M_{k}\) recursively as the one obtained by omitting the first row and column from \(M_{k-1}\). This can be continued until one reaches the scalar \[M_{N-2}=s+r+\mu_{N-1}-r\psi_{-}(s)\,. \tag{71}\] Cramer's rule yields \[\left(M_{0}^{-1}\right)_{11}=\det M_{1}/\det M_{0}\,, \tag{72}\] so that \[\phi_{1}(s)=\left(M_{0}^{-1}\right)_{11}\mu_{1}=\mu_{1}\det M_{1}/\det M_{0}\,. \tag{73}\] By applying the cofactor expansion in the first row, we are led to the recurrence relation \[\det M_{k}=\sigma_{k+1}\det M_{k+1}-r\mu_{k+2}\det M_{k+2}\,, \tag{74}\] subject to \(M_{N-2}\equiv\sigma_{N-1}-r\psi_{-}\), \(M_{N-1}\equiv 1\), where we have introduced the notation \(\sigma_{k}\equiv s+r+\mu_{k}\). By writing \[\phi_{1}(s)=\frac{\mu_{1}}{\sigma_{1}-r\mu_{2}\det M_{2}/\det M_{1}}\,, \tag{75}\] on applying the recurrence once, and then successively applying the recurrence, we arrive at the continued fraction representation \[\phi_{1}(s)=\frac{1}{r}\left[\frac{r\mu_{1}}{\sigma_{1}-},\frac{r\mu_{2}}{\sigma_{2}-},\cdots,\frac{r\mu_{N-1}}{\sigma_{N-1}-r\psi_{-}}\right]\,. \tag{76}\] One may note that if we set \(\eta_{N}(s)\equiv r\psi_{-}(s)\), and \[\eta_{k}(s)=\frac{r\mu_{k}}{\sigma_{k}-\eta_{k+1}(s)}\,, \tag{77}\] for \(k=N-1,N-2,\ldots,1\), then we have \(\phi_{1}(s)=\eta_{1}(s)/r\). Setting \(b(s)\equiv s+r+1\), we can write \[\psi_{\pm}(s)=\frac{1}{2r}\left[b(s)\pm\sqrt{b^{2}(s)-4r}\right]\,. \tag{78}\] Since \[r\psi_{\pm}^{2}(s)-b(s)\psi_{\pm}(s)+1=0\,, \tag{79}\] it follows that \[r\psi_{+}(s)+r\psi_{-}(s)=b(s)\,,\quad r\psi_{+}(s)\cdot r\psi_{-}(s)=r\,, \tag{80}\] from which we also obtain the identity \[(\sigma_{k}-r\psi_{-})(\sigma_{k}-r\psi_{+})=(\mu_{k}-1)\sigma_{k}+r\,. \tag{81}\] With the aid of these relationships, (76) leads directly to the following explicit forms linear in \(\psi_{-}(s)\): For \(N=2\), with \(\mu_{k}=k/2\), \[\phi_{1}(s)=\mu_{1}\frac{\mu_{1}-1+r\psi_{-}(s)}{(\mu_{1}-1)\sigma_{1}+r}\,. \tag{82}\] For \(N=3\), with \(\mu_{k}=k/3\), \[\phi_{1}(s)=\frac{\mu_{1}\sigma_{1}^{2}-(r-1+2\mu_{1}^{2})\sigma_{1}+2\mu_{1}(\mu_{1}r-\mu_{1}+1)-2\mu_{1}r\psi_{-}(s)}{\sigma_{1}^{3}+(\mu_{1}-r)\sigma_{1}^{2}-4r^{2}\mu_{1}}\,. \tag{83}\] For \(N=1\), we trivially have \(\phi_{1}(s)=\psi_{-}(s)\). While the continued-fraction representation can be used to derive explicit expressions for the MGF when the number of servers is small, there does not seem to be any obvious way to extract an explicit closed-form result for the general case. 
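The finite continued fraction (76) is nonetheless straightforward to evaluate numerically. The sketch below (Python; illustrative only) implements the backward recursion (77), checks that \(\phi_{1}(0)=1\), and compares the mean busy period \(-\phi_{1}^{\prime}(0)\), obtained by numerical differentiation, with the renewal-theory expression \((1-p_{0})/(\lambda p_{0})\) built from the standard M/M/\(c\) empty-system probability; this cross-check is an added assumption of the sketch rather than a relation taken from the text.

```python
import math

def phi1_cf(s, r, N):
    """Continued-fraction evaluation of the busy-period MGF, Eqs. (76)-(77), mu_k = k/N."""
    b = r + 1 + s
    psi_minus = (b - math.sqrt(b * b - 4 * r)) / (2 * r)
    eta = r * psi_minus                        # eta_N
    for k in range(N - 1, 0, -1):              # k = N-1, ..., 1
        mu_k = k / N
        eta = r * mu_k / (s + r + mu_k - eta)
    return eta / r

def mean_bp_renewal(r, N):
    """(1 - p0)/(lambda p0) with the usual M/M/c empty-system probability (assumed check).
    Here mu = 1/N, so lambda = r and the offered load is a = lambda/mu = r*N."""
    lam, a = r, r * N
    p0_inv = sum(a**k / math.factorial(k) for k in range(N)) \
             + a**N / math.factorial(N) / (1 - r)
    return (p0_inv - 1) / lam

r, N, h = 0.5, 10, 1e-6
print("phi1(0)           =", phi1_cf(0.0, r, N))                          # expect 1
print("mean BP (MGF)     =", -(phi1_cf(h, r, N) - phi1_cf(-h, r, N)) / (2 * h))
print("mean BP (renewal) =", mean_bp_renewal(r, N))
```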
A finite continued fraction representation was previously considered by Daley and Servi [9], who make the same observation that 'there is no simple non-recursive closed-form solution'. Prior to this, Conolly [7] had also derived a continued fraction representation. ### Cut Polynomials Further progress for the general case can be made by observing the general structure \[\phi_{1}(s)=\mu_{1}\frac{N_{0}(\sigma_{N-1}-r\psi_{-})+N_{1}}{D_{0}(\sigma_{N -1}-r\psi_{-})+D_{1}}\,, \tag{84}\] where \(N_{0,1}\) and \(D_{0,1}\) are polynomials in \(s\). This representation follows from applying the cofactor expansion for \(\det M_{0}\) and \(\det M_{1}\) to the last rather than the first row. At a pole, which occurs when \(\sigma_{N-1}-r\psi_{-}=-D_{1}/D_{0}\), the numerator becomes rational: \[N_{0}(\sigma_{N-1}-r\psi_{-})+N_{1}=\frac{N_{1}D_{0}-N_{0}D_{1}}{D_{0}}\,. \tag{85}\] It is convenient to introduce the functions \[\begin{array}{c}N_{\pm}\equiv N_{0}(\sigma_{N-1}-r\psi_{\pm})+N_{1}\,,\\ D_{\pm}\equiv D_{0}(\sigma_{N-1}-r\psi_{\pm})+D_{1}\,,\end{array} \tag{86}\] so that \(\phi_{1}(s)=\mu_{1}N_{-}(s)/D_{-}(s)\). One can show that \[D_{+}D_{-}=D_{1}^{2}+D_{0}\left[((\mu_{N-1}-1)\sigma_{N-1}+r)D_{0}+(\sigma_{N -1}+\mu_{N-1}-1)D_{1}\right]\,, \tag{87}\] which is a polynomial in \(s\). We also have \[N_{-}D_{+}=A+(N_{1}D_{0}-N_{0}D_{1})\cdot r\psi_{-}\,, \tag{88}\] for some polynomial \(A(s)\), and one can show that \[N_{1}D_{0}-N_{0}D_{1}=\frac{1}{r\mu_{1}}\prod_{k=1}^{N-1}(r\mu_{k})\,. \tag{89}\] Thus, we see that the representation \[\phi_{1}(s)=\mu_{1}\frac{N_{-}(s)D_{+}(s)}{D_{+}(s)D_{-}(s)} \tag{90}\] has polynomial denominator and is linear in \(\psi_{-}(s)\). On the cut in the \(s\)-plane, that is due to square-root term in \(\psi_{-}(s)=[b(s)-\sqrt{|\Delta(s)|}]/(2r)\), the function \(\phi_{1}(s)\) contributes only the component \[\phi_{\rm cut}(s)=(\pm i)\frac{\prod_{k=1}^{N-1}(r\mu_{k})}{D_{+}(s)D_{-}(S)} \cdot\frac{\sqrt{|\Delta(s)|}}{r}\,, \tag{91}\] where \(|\Delta(s)|=4r-b^{2}(s)\) on the cut. This allows us to identify the cut function introduced previously as \[R_{\rm cut}(s)=\frac{1}{r\mu_{1}}\cdot\frac{\prod_{k=1}^{N-1}(r\mu_{k})}{D_{ +}(s)D_{-}(s)}=-\frac{1}{r\mu_{1}^{2}}\cdot\frac{\prod_{k=1}^{N-1}(r\mu_{k})}{ C_{N}\left(\sigma(s)\right)}\,, \tag{92}\] where we find it convenient to introduce the cut polynomials \(C(\sigma)\), defined as functions of the variable \(\sigma\equiv\sigma_{1}=s+r+\mu_{1}\) via \(C_{N}\left(\sigma(s)\right)\equiv-\mu_{1}D_{+}(s)D_{-}(s)\). The first few of these are given as follows: For \(N=2\), \(C_{2}(\sigma)=\sigma-r/\mu_{1}\). For \(N=3\), \[C_{3}(\sigma)=\sigma^{3}+(\mu_{1}-r)\sigma^{2}-4r^{2}\mu_{1}\,. \tag{93}\] For \(N=4\), \[\begin{array}{l}C_{4}(\sigma)=\sigma^{5}+(1-r)\,\sigma^{4}+(5\mu_{1}-6r)\, \mu_{1}\sigma^{3}\\ +\left(2\mu_{1}^{2}-13\mu_{1}r+r^{2}\right)\mu_{1}\sigma^{2}-2\left(1-7r\right) \mu_{1}^{2}r\sigma+2\mu_{1}^{2}r^{2}-r^{3}\,.\end{array} \tag{94}\] In general, \(C_{N}(\sigma)=\sigma^{2N-3}+\cdots\), provided \(N>1\). For consistency with (92), one may set \(C_{1}(\sigma)\equiv-1/\mu_{1}^{2}\) for the single server case. 
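The single-server case provides a compact end-to-end check of the cut-function representation. The sketch below (Python with NumPy/SciPy; illustrative, not the paper's code) takes \(R_{\rm cut}\) from the second form of (92) with \(C_{1}(\sigma)=-1/\mu_{1}^{2}\), applies the mid-point quadrature of Eqs. (39), (40) and (43), and checks the result against the classical Bessel-function form of the M/M/1 busy-period density, which is quoted here only as an external textbook cross-check.

```python
import numpy as np
from scipy.special import i1
from scipy.integrate import quad

def sf_busy_period_single_server(t, r, L=400):
    """Cut-only SF for N = 1 via the mid-point rule, Eqs. (39)-(40) and (43),
    with R_cut obtained from Eq. (92) using C_1(sigma) = -1/mu_1^2 (here mu_1 = 1)."""
    mu1 = 1.0
    alpha = 0.5 * (np.sqrt(r) + 1 / np.sqrt(r))
    tau = (np.arange(1, L + 1) - 0.5) / L                 # mid-point nodes in (0, 1)
    x = 1 + r + 2 * np.sqrt(r) * np.cos(np.pi * tau)      # Eq. (40) nodes
    R_cut = -(1 / (r * mu1**2)) / (-1 / mu1**2)           # = 1/r, constant for N = 1
    w = (2 * mu1 * np.sqrt(r) / L) * R_cut * np.sin(np.pi * tau / 2)**2 \
        / (1 + (alpha - 1) / (2 * np.cos(np.pi * tau / 2)**2))   # Eq. (43) weights
    return np.sum(w * np.exp(-np.outer(np.atleast_1d(t), x)), axis=-1)

r = 0.6
sf = sf_busy_period_single_server(np.array([0.0, 1.0]), r)
print("F_bar(0) =", sf[0])                                # should be close to 1 (no poles for N = 1)

# Assumed external check: classical M/M/1 busy-period PDF
# f(t) = exp(-(1+r) t) I_1(2 sqrt(r) t) / (t sqrt(r)); compare its tail mass beyond t = 1.
pdf = lambda u: np.exp(-(1 + r) * u) * i1(2 * np.sqrt(r) * u) / (u * np.sqrt(r))
tail, _ = quad(pdf, 1.0, np.inf)
print("F_bar(1): quadrature =", sf[1], " Bessel form =", tail)
```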
The residue of \(\phi_{1}(s)\) at a given pole \(s\), defined by \[\phi_{\rm res}(s)\equiv\lim_{s^{\prime}\to s}(s^{\prime}-s)\phi_{1}(s^{ \prime})\,, \tag{95}\] is \[\phi_{\rm res}(s)=\mu_{1}\frac{N_{1}D_{0}-N_{0}D_{1}}{D_{0}}\cdot\frac{D_{+}}{ (D_{+}D_{-})^{\prime}}\,, \tag{96}\] where the prime in \((D_{+}D_{-})^{\prime}\) denotes that the pole factor has been divided out of the polynomial, _i.e._ \[(D_{+}D_{-})^{\prime}(s)\equiv\lim_{s^{\prime}\to s}N_{-}(s^{\prime})D_{+}(s^ {\prime})/(s^{\prime}-s)\,. \tag{97}\] We can derive the general relationship \[D_{+}+D_{-}=D_{0}(\sigma_{N-1}+\mu_{N-1}-1)+2D_{1}\,, \tag{98}\] and note that \(D_{-}=0\) at a pole, to obtain that, at a pole, \[D_{+}=D_{0}(\sigma_{N-1}+\mu_{N-1}-1)+2D_{1}\,. \tag{99}\] Consequently, at a pole \(s\), the residue term is \[\phi_{\rm res}(s)=\frac{1}{r}\prod_{k=1}^{N-1}(r\mu_{k})\cdot\frac{\sigma_{N-1 }+\mu_{N-1}-1+2D_{1}/D_{0}}{(D_{+}D_{-})^{\prime}}\,, \tag{100}\] It may be observed that the polynomials \(N_{0},N_{1}\) have completely dropped out of the calculations. The full BP distribution is obtained from the sum of the pole and cut contributions to the MGF. For the PDF, \[P(t)=\sum_{\ell\in{\rm poles}}\phi_{\rm res}(\zeta_{\ell})e^{\zeta_{\ell}t}+ \int_{\rm cut}\frac{ds}{2\pi}\,\phi_{\rm cut}(s)e^{st}\,. \tag{101}\] ### Recurrence Relations Let \(M_{N}^{(0)}\) denote the the tridiagonal matrix with main diagonal \((\sigma_{1},\sigma_{2},\cdots,\sigma_{N-1})\), lower sub-diagonal \((-\mu_{2},-\mu_{3},\cdots,-\mu_{N-1})\) and constant upper sub-diagonal with common value \(-r\). Then, \[D_{0}=\det M_{N-2}^{(0)}\,,\quad D_{1}=-r\mu_{N-1}\det M_{N-3}^{(0)}\,. \tag{102}\] Let \(M_{N}^{(1)}\) denote the the tridiagonal matrix with main diagonal \((\sigma_{2},\sigma_{3},\cdots,\sigma_{N-1})\), lower sub-diagonal \((-\mu_{3},-\mu_{4},\cdots,-\mu_{N-1})\) and constant upper sub-diagonal with common value \(-r\). Then, \[N_{0}=\det M_{N-2}^{(1)}\,,\quad N_{1}=-r\mu_{N-1}\det M_{N-3}^{(1)}\,. \tag{103}\] In terms of the matrix \(M\) in (6), \(M_{N}^{(0)}=s\mathbb{I}-M\) and \(M_{N}^{(1)}\) is \(M_{N}^{(0)}\) with the first row and column removed. We have the recurrences \[\det M_{N}^{(j)}=\sigma_{N-1}\det M_{N-1}^{(j)}-r\mu_{N-1}\det M_{N-2}^{(j)}\,, \tag{104}\] for each \(j=0,1\). Only the \(X_{N}\equiv\det M_{N}^{(0)}\) are significant for general computation of the MGF. Setting \(\sigma_{n}=\sigma_{1}+\hat{\mu}_{n}\) with \(\hat{\mu}_{n}\equiv\mu_{n}-\mu_{1}\), we see that \(X_{N}\) is obtained from the recurrence \[X_{n}=(\sigma_{1}+\hat{\mu}_{n-1})X_{n-1}-r\mu_{n-1}X_{n-2}\,, \tag{105}\] for \(n=2,3,\ldots,N\), subject to \(X_{0}=0\), \(X_{1}=1\). To obtain an explicit expansion in powers of \(\sigma_{1}\), we write \[X_{n}=\sum_{k=0}^{n-1}x_{n}^{(k)}\sigma_{1}^{k}\,,\quad x_{n}^{(n-1)}\equiv 1- \delta_{n0}\,,\quad x_{n}^{(-1)}\equiv 0 \tag{106}\] for \(n=0,1,2,\ldots\). This yields the simple two-dimensional recursion \[x_{n}^{(\ell)}=\hat{\mu}_{n-1}x_{n-1}^{(\ell)}+x_{n-1}^{(\ell-1)}-r\mu_{n-1}x_ {n-2}^{(\ell)}\,, \tag{107}\] for \(0\leq\ell\leq n-3\). It is straightforward to show that \[x_{n}^{(n-2)}=\sum_{k=0}^{n-1}\hat{\mu}_{k}\,, \tag{108}\] for \(n=1,2,\ldots\), which is required to seed the recursion. Alternatively, one may achieve this by introducing \(x_{n}^{(n)}\equiv 0\), \(n\geq 0\). This scheme enables us, in principle, to calculate all the coefficients of the general cut polynomial by invoking the identities \(D_{0}=X_{N-2}\), \(D_{1}=-r\mu_{N-1}X_{N-3}\). 
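A minimal computational sketch of this polynomial machinery is given below (Python with NumPy; an illustration under the \(\mu_{n}=n/N\) normalization, not the paper's implementation). It reconstructs \(D_{0}\) and \(D_{1}\) from the cofactor expansion of \(\det M_{0}\) along the last row, i.e. as leading principal minors of \(s\mathbb{I}-M\) generated by the three-term determinant recurrence, assembles \(D_{+}D_{-}\) through Eq. (87), verifies that identity at a test point, and locates any poles \(\zeta_{\ell}\) as the real roots in \((-(1-\sqrt{r})^{2},0)\) at which \(D_{-}\) itself vanishes (the list may be empty for some parameter values).

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def pole_locations(r, N):
    mu = [n / N for n in range(N + 1)]               # mu[n] = n/N
    s = P([0.0, 1.0])                                # the variable s
    # Leading principal minors of s*I - M:  P_k = sigma_k P_{k-1} - r mu_k P_{k-2}
    minors = [P([0.0]), P([1.0])]                    # [P_{-1}, P_0]
    for k in range(1, N - 1):
        minors.append((s + r + mu[k]) * minors[-1] - r * mu[k] * minors[-2])
    D0 = minors[-1]                                  # (N-2) x (N-2) leading minor
    D1 = -r * mu[N - 1] * minors[-2]                 # -r mu_{N-1} times the (N-3) x (N-3) minor
    sigma_last = s + r + mu[N - 1]
    DpDm = D1**2 + D0 * (((mu[N - 1] - 1) * sigma_last + r) * D0
                         + (sigma_last + mu[N - 1] - 1) * D1)      # Eq. (87)

    def D_pm(z, sign):                               # Eq. (86), evaluated numerically
        b = r + 1 + z
        psi = (b + sign * np.sqrt(b * b - 4 * r)) / (2 * r)
        return D0(z) * (sigma_last(z) - r * psi) + D1(z)

    # Consistency check of Eq. (87) at an arbitrary test point
    z0 = 0.25
    assert np.isclose(DpDm(z0), D_pm(z0, +1) * D_pm(z0, -1))

    x_minus = (1 - np.sqrt(r))**2                    # near cut boundary
    poles = [z.real for z in DpDm.roots()
             if abs(z.imag) < 1e-7 and -x_minus < z.real < 0
             and abs(D_pm(z.real, -1)) < 1e-8 * max(1.0, abs(D_pm(z.real, +1)))]
    return sorted(poles)

for r in (0.3, 0.6):
    print(r, pole_locations(r, N=5))
```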
Table 1 provides expressions for \(N_{0,1}\), \(D_{0,1}\) for server numbers \(N=2,3,4\). The critical value of the total traffic intensity is the value \(r=r_{\rm c}\) at which the pole contribution disappears. This occurs when \(D_{0}(\sigma_{N-1}-\sqrt{r})+D_{1}=0\) at the cut extremity \(s=-(1-\sqrt{r})\)2. For \(N=2\), it is \(r_{\rm c}=1/2\). For \(N=3\), \(y\equiv\sqrt{r_{\rm c}}\) solves the quadratic Footnote 2: The normalization chosen in (2) is not appropriate here. We take \(\mu=1\), rather than \(\mu=1/N\), so that \(\lambda=\rho\) and adjust the matrix \(M\) accordingly. \[2y^{2}-2y+\mu_{1}=0\,, \tag{109}\] and for \(N=4\), it solves the cubic \[2y^{3}-45\mu_{1}^{2}y^{2}+y-6\mu_{1}^{3}=0\,. \tag{110}\] In Table 2, we present numerical values for \(r_{\rm c}\) corresponding to the first few values of \(N\). ## 5 Asymptotic Limits There are two natural ways to scale the problem in the limit of a large number of servers \(N\to\infty\). The first is to keep the total traffic intensity \(r\) constant as \(N\to\infty\). The second is to keep the mean arrival and service rates (namely \(\lambda\), \(\mu\), respectively) constant. As either one of the two time-dimensional parameters simply serves to calibrate the clock in the model, we can choose to set \(\mu=1\). This is equivalent to measuring model time in units of the mean service time, and physical time can be recovered by appropriately rescaling the time axis of the resulting distributions. Thus, the model essentially depend on only two parameters, which can be taken to be the number of servers, \(N\), and either \(r\) or the partial traffic intensity \(\rho\equiv\lambda/\mu\)[12]. An advantage of the former parametrization is that the ergodic region of interest here is characterized by a common bounded interval (_viz._\(0\leq r<1\)) independent of \(N\). The limit of an infinite number of servers with given constant \(\rho=r/N\) is referred to as the M/M/\(\infty\) queue, and corresponds to our second scaling option. It has been previously extensively studied by Guillemin and Simonian [13], and also earlier by Morrison et al. [20]. However, while this model is useful in some applications, it is also rather trivial as a queueing model given that it does not actually contain a queue. We shall proceed to study the BP distributions of both scaling options. Having explicit results for the \(N\to\infty\) limits, and comparing these with full calculations for finite given values of \(N\), enables one to decide, in creating and optimizing numerical algorithms, how far one needs to go before simply being able to invoke asymptotic results. ### Constant-\(\rho\) Model To study this limit, it is convenient to consider the transpose problem. Let \(A\equiv s\mathbb{I}-M\), where \(M\) is given by (6)2. 
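For reference, the critical intensities listed in Table 2 below are easy to reproduce from (109)–(110). The sketch assumes \(\mu_{1}=1/N\) and that the physically relevant branch is the largest real root \(y\in(0,1)\); that branch choice is an assumption made here because it reproduces the tabulated values.

```python
# Numerical check of the critical intensities r_c from (109)-(110), assuming mu_1 = 1/N
# and that the relevant branch is the largest real root y in (0, 1).
import numpy as np

def r_critical(N):
    mu1 = 1.0 / N
    if N == 2:
        return 0.5
    if N == 3:
        coeffs = [2.0, -2.0, mu1]                        # 2y^2 - 2y + mu_1 = 0
    elif N == 4:
        coeffs = [2.0, -45.0*mu1**2, 1.0, -6.0*mu1**3]   # 2y^3 - 45 mu_1^2 y^2 + y - 6 mu_1^3 = 0
    else:
        raise ValueError("only N = 2, 3, 4 are covered by (109)-(110)")
    roots = np.roots(coeffs)
    y = max(z.real for z in roots if abs(z.imag) < 1e-12 and 0 < z.real < 1)
    return y**2

for N in (2, 3, 4):
    print(N, round(r_critical(N), 4))    # expect 0.5, 0.622, 0.84, as in Table 2
```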
In the large-\(N\) limit, \(M\) becomes countably infinite \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline \(N\) & \(N_{0}\) & \(N_{1}\) & \(D_{0}\) & \(D_{1}\) \\ \hline \hline 2 & 0 & 1 & 1 & 0 \\ \hline 3 & 1 & 0 & \(\sigma_{1}\) & \(-r\mu_{2}\) \\ \hline 4 & \(\sigma_{2}\) & \(-r\mu_{3}\) & \(\sigma_{1}\sigma_{2}-r\mu_{2}\) & \(-r\mu_{3}\sigma_{1}\) \\ \hline \end{tabular} \end{table} Table 1: MGF polynomials \begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline \(N\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \(r_{\rm c}\) & 0.0000 & 0.5000 & 0.6220 & 0.8400 & 0.9352 & 0.9740 \\ \hline \end{tabular} \end{table} Table 2: Critical traffic intensities dimensional and the boundary condition for (3), _viz._\(\phi_{N}=\psi_{-}(s)\phi_{N-1}\), disappears as it gets pushed to infinity. In Section 3, we considered a linear system which, in the large-\(N\) limit, now reads \(A|\phi\rangle=|e_{1}\rangle\) since we can set \(|w\rangle=|e_{1}\rangle\), noting that \(\mu_{1}=1\) here. This can be inverted to yield the BP MGF \(\phi_{1}(s)=\langle e_{1}|A^{-1}|e_{1}\rangle\). However, we have \[\langle e_{1}|A^{-1}|e_{1}\rangle=\langle e_{1}|(A^{\mathsf{T}})^{-1}|e_{1} \rangle\,. \tag{111}\] Thus, we can also consider the transpose problem and compute \(\varphi_{1}(s)\equiv\langle e_{1}|(A^{\mathsf{T}})^{-1}|e_{1}\rangle=\phi_{1}(s)\). Clearly \(A^{\mathsf{T}}=s\mathbb{I}-M^{\mathsf{T}}\) and, in the \(N\to\infty\) limit, the infinite tridiagonal matrix \(M^{\mathsf{T}}\) is given by \[M^{\mathsf{T}}=\begin{bmatrix}-(\rho+\mu_{1})&\mu_{2}&0&\cdots&\cdots\\ \rho&-(\rho+\mu_{2})&\mu_{3}&\cdots&\cdots\\ 0&\rho&-(\rho+\mu_{3})&\mu_{4}&\cdots\\ \vdots&&\ddots&\ddots&\ddots\\ \vdots&&&&\ddots\end{bmatrix}\,. \tag{112}\] with \(\mu_{n}\equiv n\) being adopted here. On setting \(\varphi_{0}\equiv 1/\rho\), the implied recurrence relation for \(\varphi_{n}\) reads \[-\rho\varphi_{n-1}+(s+\rho+n)\varphi_{n}-(n+1)\varphi_{n+1}=0\,, \tag{113}\] for \(n=1,2,\ldots\). Following [13], we next introduce the generating function \[g(z)\equiv\sum_{n=1}^{\infty}z^{n-1}\varphi_{n}\,, \tag{114}\] so that \(g(0)=\varphi_{1}=\phi_{1}\). Summation over the recurrence (113) followed by some standard manipulations gives rise to the differential equation \[g^{\prime}(z)+\left[\frac{s}{z-1}+\frac{1}{z}-\rho\right]g(z)=\frac{z-g(0)}{z (z-1)}\,. \tag{115}\] Solution via the integrating factor method yields \[g(z)=\frac{e^{\rho z}}{(1-z)^{s}}\int_{0}^{1}d\xi\,e^{-\rho z\xi}(1-z\xi)^{s-1 }\left(g(0)-z\xi\right)\,. \tag{116}\] The vanishing of the residue at \(z=1\) implies that \[g(0)\cdot\int_{0}^{1}d\xi\,e^{-\rho\xi}(1-\xi)^{s-1}=\int_{0}^{1}d\xi\,e^{- \rho\xi}\xi(1-\xi)^{s-1}\,. \tag{117}\] Guillemin and Simonian [13] have defined the integrals \[\mathcal{I}_{\alpha}(s,\rho)\equiv\int_{0}^{1}d\xi\,e^{-\rho\xi}\xi^{\alpha}(1 -\xi)^{s}\,. \tag{118}\] In terms of these, we can write the result for the BP MGF as \[\phi_{1}(s)=\mathcal{I}_{1}(s-1,\rho)/\mathcal{I}_{0}(s-1,\rho)\,. \tag{119}\] This result coincides with the MGF of [13] for their congestion duration (denoted \(\theta\)) in the case \(C=0\). These authors, however, did not proceed to recover the distribution from the MGF. We shall also introduce the functions \[\phi_{1}^{(\alpha)}(s)\equiv\mathcal{I}_{\alpha+1}(s-1,\rho)/\mathcal{I}_{ \alpha}(s-1,\rho)\,, \tag{120}\] so that \(\phi_{1}(s)=\phi_{1}^{(0)}(s)\). We note from the defining matrix equation \(A|\phi\rangle=|e_{1}\rangle\), that \(\phi_{1}(s)\) has poles at the eigenvalues of the matrix \(M^{\mathsf{T}}\). 
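The result (119) can be cross-checked numerically against the defining linear system: truncating the infinite matrix \(M^{\mathsf{T}}\) in (112) at a finite dimension \(K\) (with \(\mu_{n}=n\) as adopted above) and solving \((s\mathbb{I}-M^{\mathsf{T}})\phi=e_{1}\) gives \(\phi_{1}(s)\) directly, which can then be compared with a quadrature evaluation of \(\mathcal{I}_{1}/\mathcal{I}_{0}\). The truncation dimension and the test values of \(s\), \(\rho\) below are arbitrary choices.

```python
# Cross-check of the constant-rho MGF: truncated matrix solve of (112)-(113) versus
# the integral representation (118)-(119).
import numpy as np
from scipy.integrate import quad

def phi1_matrix(s, rho, K=200):
    A = np.zeros((K, K))
    for n in range(1, K + 1):            # occupancy levels 1..K
        i = n - 1
        A[i, i] = s + rho + n            # diagonal of s*I - M^T (mu_n = n)
        if i > 0:
            A[i, i - 1] = -rho           # sub-diagonal
        if i < K - 1:
            A[i, i + 1] = -(n + 1)       # super-diagonal, -mu_{n+1}
    e1 = np.zeros(K)
    e1[0] = 1.0
    return np.linalg.solve(A, e1)[0]

def I_alpha(alpha, s, rho):              # eq. (118)
    return quad(lambda x: np.exp(-rho*x) * x**alpha * (1.0 - x)**s, 0.0, 1.0)[0]

s, rho = 0.7, 1.9
print(phi1_matrix(s, rho), I_alpha(1, s - 1.0, rho) / I_alpha(0, s - 1.0, rho))  # should agree
```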
Similarly, \(\phi_{1}^{(1)}(s)\) has poles at the eigenvalues of the truncated problem obtained by removing the first row and column from the matrix \(M^{\mathsf{T}}\), and these correspond to the zeros of \(\mathcal{I}_{1}(s-1,\rho)\). At this point, it is useful to recall that the Kummer hypergeometric function [23], also known as the hypergeometric function of the first kind, \(M(a,b,z)={}_{1}F_{1}\left(a;b;z\right)\) has integral representation \[M(a,b,z)=\frac{1}{B(a,b-a)}\int_{0}^{1}du\,e^{zu}u^{a-1}(1-u)^{b-a-1}\,. \tag{121}\] In terms of this function, we can write \[\phi_{1}(s)=1-\frac{s}{s+1}\cdot\frac{M(s+1,s+2,\rho)}{M(s,s+1,\rho)}=1-s\cdot \frac{M(1,2,\rho)}{M(0,1,\rho)}+O(s^{2})\,. \tag{122}\] Thus, the requirement that \(\phi_{1}(0)=1\) is recovered and, noting that \[M(1,2,\rho)=(e^{\rho}-1)/\rho\,,\quad M(0,b,\rho)=1\,, \tag{123}\] we see that the mean BP for \(N\to\infty\) is \((e^{\rho}-1)/\rho\) as expected from the explicit exact result (56). Furthermore, a function that is entire in all arguments is obtained via \[\mathbb{M}(a,b,z)\equiv M(a,b,z)/\Gamma(b)\,. \tag{124}\] With this definition and the relationship \[\mathbb{M}(a,b,z)=e^{z}\mathbb{M}(b-a,b,-z)\,, \tag{125}\] we arrive at the result \[\phi_{1}^{(\alpha)}(s)=(\alpha+1)\frac{\mathbb{M}(s,s+\alpha+2,\rho)}{ \mathbb{M}(s,s+\alpha+1,\rho)}\,. \tag{126}\] In this expression, both numerator and denominator are entire functions, and we may consequently observe that the zeros of \(\mathbb{M}(s,s+1,\rho)\) are identified with the eigenvalues of the full problem, while the zeros of \(\mathbb{M}(s,s+2,\rho)\) are identified with the eigenvalues of the truncated problem. In fact, it follows directly from Cramer's rule that \[\phi_{1}(s)=\langle e_{1}|A^{-1}(s)|e_{1}\rangle=\det A_{11}(s)/\det A(s) \tag{127}\] where \(A_{11}\) denotes \(A\) with the first row and column removed. In view of the foregoing discussion, we may write the BP MGF as \[\phi_{1}(s)\simeq\frac{1}{1-s/\chi_{1}}\prod_{\ell=2}^{L}\frac{1-s/\chi_{\ell -1}^{(1)}}{1-s/\chi_{\ell}}\,, \tag{128}\] where an exact expression is produced when \(L=\infty\). This product form of linear factors is directly implied by (126), but we also know that the \(\chi_{\ell}\), \(\chi_{\ell}^{(1)}\) correspond to the eigenvalues of the matrices \(A\), \(A_{11}\), respectively. The two sequences of eigenvalues (ordered according to increasing modulus) are interleaved, and both tend to negative integral values as \(\ell\) increases. Moreover, they can be paired up in such a way that \(\chi_{\ell+1}-\chi_{\ell}^{(1)}\to 0\) as \(\ell\to\infty\). The following inequalities hold: \[0<-\chi_{\ell}^{(1)}<-\chi_{\ell+1}<\ell\,, \tag{129}\] for \(\ell=1,2,\ldots\). Therefore, the individual ratios in the product above eventually become indistinguishable from unity, implying that only a finite number, \(L\), needs to be retained. The residue theorem may be applied to recover the SF as an exponential mixture \[\bar{F}(t)\simeq\sum_{k=1}^{L}W_{k}e^{\chi_{k}t}\,, \tag{130}\] with the weights being given by \[W_{1}=-\chi_{1}\prod_{\ell=2}^{L}\frac{1-\chi_{1}/\chi_{\ell-1}^{(1)}}{1-\chi_ {1}/\chi_{\ell}}\,,\quad W_{k}=-\chi_{k}\frac{1-\chi_{k}/\chi_{k-1}^{(1)}}{1- \chi_{1}/\chi_{k}}\prod_{\ell=2\atop\ell\neq k}^{L}\frac{1-\chi_{k}/\chi_{ \ell-1}^{(1)}}{1-\chi_{k}/\chi_{\ell}}\,, \tag{131}\] for \(k=2,3,\ldots\). The weights \(W_{k}\) are positive, sum to unity and tend to zero for large \(k\). 
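These expressions are convenient for numerical work. A brief sketch using SciPy's \({}_{1}F_{1}\) illustrates (126) for \(\alpha=0\), together with two checks: \(\phi_{1}(0)=1\), and the slope at the origin reproducing the mean \((e^{\rho}-1)/\rho\) quoted after (123). The finite-difference step \(h\) is an arbitrary choice.

```python
# The entire-function form (124)-(126) of the MGF, evaluated with scipy's hyp1f1.
import numpy as np
from scipy.special import hyp1f1, gamma

def M_entire(a, b, z):
    return hyp1f1(a, b, z) / gamma(b)          # eq. (124); adequate while b > 0

def phi1(s, rho, alpha=0):
    return (alpha + 1) * M_entire(s, s + alpha + 2, rho) / M_entire(s, s + alpha + 1, rho)  # (126)

rho, h = 1.9, 1e-5
print(phi1(0.0, rho))                                        # -> 1.0
print((1.0 - phi1(h, rho)) / h, (np.exp(rho) - 1.0) / rho)   # slope at 0 vs (e^rho - 1)/rho
```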
It follows that, if we set \(U_{k}\equiv-1/\chi_{k}\), then the node/weight pairs \((U_{k},W_{k})\) define a texture distribution of the sort discussed in Section 3. The \(\chi_{\ell}\) are bracketed according to \(\ell-1<-\chi_{\ell}<\ell\) for all \(\ell=1,2,\ldots\) while, for the \(\chi_{\ell}^{(1)}\), we may write \[\begin{array}{rcl}\ell-1<-\chi_{\ell}^{(1)}\leq\ell&\mbox{for}&\ell=1, \ldots,\mbox{floor}(\rho)\,,\\ \ell\leq-\chi_{\ell}^{(1)}<\ell+1&\mbox{for}&\ell=\mbox{ceil}(\rho),\ldots\,, \\ -\chi_{\rho}^{(1)}=\rho&\mbox{for}&\mbox{integral $\rho\geq 1$}\,.\end{array} \tag{132}\] This behaviour can be observed in Figure 4 for the case \(\rho=1.9\). The function values on the vertical axis have been logarithmically compressed such the values indicate the exponent of the order of magnitude. The bijective compression function that was employed is given by \[f(x)\equiv\mbox{sgn}(x)\log_{10}(1+|x|)=x\cdot\frac{\log_{10}(1+|x|)}{|x|}\,, \tag{133}\] where \(\mbox{sgn}(x)\) denotes the sign function. Thus, \(y=f(x)\) can be inverted according to \(x=\mbox{sgn}(y)(10^{|y|}-1)\). A bracketed root-finding method, such as bisection or a secant method, can be used to compute the desired number of zeros of the relevant Kummer functions. Alternatively, an eigenvalue problem can be solved for the matrices \(A\), \(A_{11}\) truncated to a sufficiently large finite dimension (which will be greater than \(L\)). An initial dimension can be estimated and subsequently iterated until convergence is achieved. The eigenvalue of least modulus will dominate the large-t behaviour of the distribution and provide a lower bound to the hazard function \(H(t)\equiv P(t)/\bar{F}(t)>|\chi_{1}|\). This asymptotic level is indicated in Figure 5 as the dotted line. The hazard function for the M/M/\(\infty\) model corresponds to the dashed black curve. It is clear that this curve is approached rapidly as the number of servers increases. ### Constant-r Model We shall first consider the boundary case \(r=1\) when \(N\gg 1\). Let \(m\equiv(\Lambda/\mu_{1})^{2}\) with \(\Lambda\) given by (17), noting that \(m\rightarrow\infty\) as \(N\rightarrow\infty\) since, by Stirling's formula, \(m\sim_{N\gg 1}e^{2N}/(2\pi N)\). We already know that \(\bar{F}(mt)\sim_{t\rightarrow\infty}1/\sqrt{\pi t}\) for any \(N\), and we can write \[\bar{F}_{\rm cut}(mt)=\frac{2\mu_{1}}{\pi}\int_{0}^{4m}\frac{du}{4m}\,e^{-ut} R_{\rm cut}(-u/m)\sqrt{\frac{4m-u}{u}}\,. \tag{134}\] Thus, \[\bar{F}(mt)\sim_{N\gg 1}\frac{\mu_{1}}{\pi}\int_{0}^{\infty}\frac{du}{\sqrt{u}}\, e^{-ut}\frac{1}{\sqrt{m}}R_{\rm cut}(-u/m)\,. \tag{135}\] Figure 4: _Note._ Zeros of the Kummer M-functions in the numerator and denominator of the M/M/\(\infty\) BP MGF for partial traffic intensity \(\rho=1.9\). Denominator zeros are indicated by crosses. Numerator zeros are indicated by dots. Figure 5: _Note._ BP hazard function for various numbers of servers (\(N_{\rm{srv}}\)) and partial traffic intensity \(\rho=0.75\). The dashed black curve is the hazard function for the M/M/\(\infty\) model. The dotted black line is large-\(t\) limit and lower bound of the asymptotic hazard function. 
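In practice the zeros are easy to locate. The sketch below implements the entire function \(\mathbb{M}(a,b,z)=M(a,b,z)/\Gamma(b)\) from its power series (using the reciprocal gamma function, so non-positive integer values of \(b+k\) are harmless) and finds the first few zeros of \(\mathbb{M}(s,s+1,\rho)\) and \(\mathbb{M}(s,s+2,\rho)\) by a sign-change scan refined with Brent's method. The grid resolution, series length and \(\rho=1.9\) (mirroring Figure 4) are arbitrary choices.

```python
# Locating the zeros chi_l of M(s,s+1,rho)/Gamma(s+1) and chi_l^(1) of M(s,s+2,rho)/Gamma(s+2).
import numpy as np
from scipy.special import rgamma
from scipy.optimize import brentq

def M_ent(a, b, z, kmax=80):
    """Entire Kummer function M(a,b,z)/Gamma(b) via its power series."""
    total, poch, zk_over_kfact = 0.0, 1.0, 1.0     # poch holds (a)_k, zk_over_kfact holds z^k/k!
    for k in range(kmax):
        total += poch * zk_over_kfact * rgamma(b + k)
        poch *= (a + k)
        zk_over_kfact *= z / (k + 1)
    return total

def zeros(offset, rho, n_zeros=6):
    f = lambda s: M_ent(s, s + offset, rho)
    grid = np.linspace(-n_zeros - 1.0, -1e-9, 4000)
    vals = [f(s) for s in grid]
    out = []
    for x0, x1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if v0 * v1 < 0:
            out.append(brentq(f, x0, x1))
    return sorted(out, key=abs)[:n_zeros]

rho = 1.9
print("chi_l     :", np.round(zeros(1.0, rho), 4))   # denominator zeros, one per unit interval
print("chi_l^(1) :", np.round(zeros(2.0, rho), 4))   # numerator zeros; note the interleaving
```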
But also \[\frac{\mu_{1}}{\sqrt{m}}R_{\rm cut}(-u/m)\underset{u/m\ll 1}{\sim}\frac{1}{1+u}\,, \tag{136}\] which leads to the result \[\bar{F}(mt)\underset{N\gg 1}{\sim}\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}\frac{ dx}{\sqrt{x}}\,e^{-x}\frac{1}{\sqrt{1+t/x}}={\rm erfcx}(\sqrt{t})\,, \tag{137}\] where \({\rm erfcx}(t)\!\equiv\!e^{t^{2}}\!\cdot\!{\rm erfc}(t)\) is the scaled complementary error function, and \({\rm erfc}(t)\) denotes the standard complementary error function. In completing this derivation, we have used the identity for the incomplete gamma function \(\Gamma(\frac{1}{2},t)\!=\!\sqrt{\pi}\,{\rm erfc}(\sqrt{t})\). The former integral expression is amenable to efficient numeral evaluation via Gauss-Laguerre quadrature. The asymptotic BP PDF is given by \[P(t)\underset{N\gg 1}{\sim}\frac{1}{\sqrt{\pi mt}}-{\rm erfcx}(\sqrt{t/m})\,. \tag{138}\] The log-mean is given by \[\langle\ln T_{\rm bp}\rangle_{T_{\rm bp}}\underset{N\gg 1}{\sim}\ln m- \gamma_{\rm e}\,. \tag{139}\] This result follows easily from direct integration of the representation \[m\!\cdot\!P(mt)\underset{N\gg 1}{\sim}\frac{2}{\pi}\int_{0}^{\infty}dv\,\frac{v ^{2}}{1+v^{2}}e^{-tv^{2}}\,. \tag{140}\] In the ergodic region \(0\!<\!r\!<\!1\), the BP distribution tends to a mixture of two exponentials as \(N\) become large: one that dominates in the tail for large times and one that dominates for very small times. For a fixed number of servers \(N\), there is always a critical total traffic intensity Figure 6: beyond which there is no contribution to the distribution from discrete poles. On the other hand, for any fixed traffic intensity \(r\), there exists at least one discrete pole when the number of servers is sufficiently large, and the position of smallest-magnitude pole \(\zeta_{1}\) tends to zero as the number of servers increases. The exponential term generated by smallest-magnitude pole also dominates the large-\(t\) behaviour of the distribution. Given that \(\zeta_{1}\) approaches zero for \(N\gg 1\), it follows that this exponential asymptotic behaviour will set in ever earlier in \(t\) as \(N\) continues to increase. Hence, we must have that \[\bar{F}(m_{\rm bp}\!\cdot\!t)\mathop{\sim}_{N\gg 1}\nu e^{-t} \tag{141}\] for all \(t\) bounded away from zero, where \(m_{\rm bp}\equiv\langle T_{\rm bp}\rangle_{T_{\rm bp}}\) denotes the mean BP. A corollary of this is that \(\zeta_{1}\sim_{N\gg 1}-1/m_{\rm bp}\), which is easily verified numerically by computing \(\zeta_{1}\) from (66) and comparing with the exact result for \(m_{\rm bp}\). The constant \(\nu\) is derived from the residue of the pole at \(s=\zeta_{1}\), namely \(\nu=-\mu_{1}J(\zeta_{1})/\zeta_{1}\), where the function \(J(s)\) has been given in (52). It also holds that \(\nu\to 1^{-}\) as \(N\to\infty\). On the timescale that corresponds to measuring \(t\) in units of the mean BP, we have a PDF of the mixture form \[m_{\rm bp}P(m_{\rm bp}\!\cdot\!t)\mathop{\sim}_{N\gg 1}(1-\nu)\delta(t)+\nu e^{ -t}\,, \tag{142}\] where \(\delta(t)\) denotes the Dirac delta function. The SF for the two-exponential mixture that leads to this PDF is given by \[\bar{F}(t)\mathop{\sim}_{N\gg 1}(1-\nu)e^{-t/m^{\prime\prime}}+\nu e^{-\nu t/m^ {\prime}}\,, \tag{143}\] with \(m^{\prime}=m_{\rm bp}-(1-\nu)m^{\prime\prime}\) chosen to reproduce the correct mean, and \(m^{\prime\prime}\) is a constant of order unity estimated as \(m^{\prime\prime}\simeq(1-\nu)/P(0)=(1-\nu)N\). It follows that \(\nu/m^{\prime}\sim_{N\gg 1}-\zeta_{1}\), which strengthens the relationship given above. 
It is easily confirmed numerically that this result remains accurate down to quite small values of \(N\). In Figure 7, we have plotted the SF exponents \(-\log_{10}\bar{F}(t)\) for an increasing sequence of server numbers \(N\), where values on the time axis correspond directly to the model definition in the introductory section with the choice \(\mu=1/N\) adopted there. Thus, time is being measured in units of \(1/N\) times the mean treatment time. If we were to consider a situation where both \(r\) and \(\mu\) were to remain constant with respect to \(N\), in which case \(\lambda\) would be linear in \(N\), then any given value on the time axis would be a different point in physical time for each curve. It can be seen from the figure that, in the short-time regime being plotted, agreement between the numerical exact curves and the analytical asymptotic curves improves rapidly with increasing \(N\). The long-time behaviour is presented in Figure 8, where values on the time axis are measured in units of the mean BP. The black dashed curve is the analytical asymptotic curve evaluated here for \(N=80\), and is seen to coincide with its numerical exact counterpart to within the line-width of the graph. The black dotted line is the limiting exponential that represents the strict \(N\to\infty\) limit \(\bar{F}(m_{\rm bp}\!\cdot\!t)\sim_{N\to\infty}e^{-t}\). It constitutes a lower bound to the tail of the SFs for all \(N\), and is approached from above as \(N\to\infty\). ## 6 Complex-Pole Method The explicit expression for the cut function developed in the previous section allows us to bypass the spectral decomposition and derive explicit closed-form results for the cut contribution to the BP PDF for an arbitrary number of servers. These comprise sums of Bessel and Marcum Q-functions where the individual terms are generated by poles in a complex plane that are induced by the zeros of the cut polynomial. In fact, it will become clear that knowledge of the cut polynomial is all that is required to completely determine the BP distribution for any given number of servers. While this method is less robust numerically than the spectral approach, it provides more insight into the analytic structure of the problem. _Note._ BP survival functions for various numbers of servers (\(N\)) and traffic intensity \(r=0.9\), compared with their respective large-\(N\) asymptotic forms. These curves elucidate the very short-time behaviour. The asymptotic forms are evaluated at the corresponding values of \(N\). Figure 8: _Note._ BP survival functions for various numbers of servers (\(N_{\rm{svv}}\)) and traffic intensity \(r=0.75\), showing long-time behaviour. The dashed black line is the asymptotic form evaluated for \(N=80\). The black dotted line is the strict \(N\to\infty\) limiting exponential. Figure 7: The general form of the cut contribution to the BP PDF can be expressed as \[P_{\rm cut}(t)\!=\!-\frac{r\mu_{1}}{2}e^{-(1+r)t}\oint_{\cal C}\frac{dz}{2\pi iz} \,\left(z-\frac{1}{z}\right)^{2}e^{\sqrt{r}t(z+1/z)}\!\cdot\!R_{\rm cut}(s)\,, \tag{144}\] where the contour \({\cal C}\) denotes the anti-clockwise traversed unit circle about the origin, and \(R_{\rm cut}(s)\) is the cut function, with \[s\!=\!\sigma-r-\mu_{1}\,,\quad\sigma=\!\sqrt{r}(z+1/z)-1+\mu_{1}\,. \tag{145}\] This follows from setting \(v\!=\!\cos\theta\) in (38), and then writing \(\cos\theta\!=\!-(z\!+\!1/z)/2\) where \(z\) lies on the unit circle in the complex \(z\)-plane3. 
For \(N\!=\!1\) (\(\mu_{1}\!=\!1\)), the cut function is the trivial constant \(R_{\rm cut}(s)\!=\!1/r\). For \(N\!=\!2\) (\(\mu_{1}\!=\!1/2\)), Footnote 3: The minus has been included in order to reproduce the standard sign in the exponential term of the Schaefli formula for the modified Bessel function (_e.f._ (149)). \[R_{\rm cut}(s)\!=\!\frac{1}{r-\mu_{1}\sigma}\,, \tag{146}\] while, for \(N\!=\!3\) (\(\mu_{1}\!=\!1/3\)), \[R_{\rm cut}(s)\!=\!-\frac{2r}{\sigma^{3}+(\mu_{1}-r)\sigma^{2}-4r^{2}\mu_{1}}\,. \tag{147}\] By writing the cut polynomial in terms of its roots, \(C_{N}(\sigma)\!=\!\prod_{\ell=1}^{2N-3}(\sigma-\sigma^{(\ell)})\), we have the general formula \[R_{\rm cut}(s)\!=\!-\frac{1}{r\mu_{1}^{2}}\!\cdot\prod_{k=1}^{N-1}(r\mu_{k}) \!\cdot\!\frac{1}{\prod_{\ell=1}^{2N-3}(\sigma-\sigma^{(\ell)})}\,. \tag{148}\] The Schlaefli formula for the modified Bessel function of the first kind is given by \[I_{n}(t)\!=\oint\frac{dz}{2\pi i}\,\frac{1}{z^{n+1}}\exp\left\{\frac{t}{2} \left(z+\frac{1}{z}\right)\right\}\,, \tag{149}\] where the closed contour is taken around the origin. This may be used to derive the generating function \[\sum_{n=-\infty}^{+\infty}z^{n}I_{n}(t)\!=\!\exp\left\{\frac{t}{2}\left(z+ \frac{1}{z}\right)\right\}\,, \tag{150}\] from which it follows that \[\exp\left\{\frac{t}{2}\left(z+\frac{1}{z}\right)\right\}\!=\!I_{0}(t)\!+\! \sum_{n=1}^{\infty}\left(z^{n}+z^{-n}\right)I_{n}(t)\,. \tag{151}\] Thus, suppose that \(g(z)\) is a meromorphic function such that \(g(z)\!=\!g(1/z)\) and that \({\cal C}\) denotes the anti-clockwise unit circle. Then we have the integral \[\begin{array}{l}{\cal I}\!\equiv\!\oint_{\cal C}\frac{dz}{2\pi iz}\,g(z)\exp \left\{\frac{t}{2}\left(z+\frac{1}{z}\right)\right\}\\ \phantom{\sum_{n=1}^{\infty}}=\oint_{\cal C}\frac{dz}{2\pi iz}\,g(z)\!\cdot\! I_{0}(t)+2\sum_{n=1}^{\infty}I_{n}(t)\oint_{\cal C}\frac{dz}{2\pi iz}\,g(z)z^{n}\,.\end{array} \tag{152}\] Suppose now that \(G(z)\equiv g(z)/z\) has only simple poles within the unit circle at \(z=\beta_{1},\beta_{2},\ldots,\beta_{M}\). Then we have \[{\cal I}=c_{0}\cdot I_{0}(t)+2\sum_{n=1}^{\infty}c_{n}I_{n}(t)\,, \tag{153}\] where the coefficients, for \(n=0,1,\ldots\), are the residue sums \[c_{n}\equiv\oint_{\cal C}\frac{dz}{2\pi i}\,z^{n}G(z)=\sum_{m=1}^{M}\beta_{m}^ {n}\,{\rm Res}[G(z),\beta_{m}]\,. \tag{154}\] For \(N=1\), (144) and the Schaefli formula (149) reproduce the well-known result [4] \[P_{\rm cut}(t)=e^{-(1+r)t}\left[I_{0}(2\sqrt{r}t)-I_{2}(2\sqrt{r}t)\right]= \frac{1}{\sqrt{r}t}e^{-(1+r)t}I_{1}(2\sqrt{r}t)\,. \tag{155}\] In the general case, for \(N>1\), the relationships \[C_{N}(\sigma)=\prod_{\ell=1}^{2N-3}\left(\sigma-\sigma^{(\ell)}\right)\,, \quad\sigma=\sqrt{r}(z+1/z)+\mu_{1}-1\,, \tag{156}\] lead to the form \[C_{N}(\sigma)=\left(\frac{\sqrt{r}}{z}\right)^{2N-3}\prod_{\ell=1}^{2N-3} \left(z^{2}+2\alpha_{\ell}z+1\right)\,, \tag{157}\] where \(\alpha_{\ell}\equiv(\mu_{1}-1-\sigma^{(\ell)})/(2\sqrt{r})\). Hence, (144) becomes \[P_{\rm cut}(t)=\frac{\sqrt{r}}{2}\left(\prod_{k=2}^{N-1}\mu_{k}\right)e^{-(1+ r)t}\oint\frac{dz}{2\pi i}\,e^{\sqrt{r}t(z+1/z)}G(z)\,, \tag{158}\] where (152) applies with \[G(z)=\frac{g(z)}{z}=z^{2(N-3)}\cdot\frac{\left(1-z^{2}\right)^{2}}{\prod_{ \ell=1}^{2N-3}\left(z^{2}+2\alpha_{\ell}z+1\right)}\,, \tag{159}\] which allows us to the identify the \(c_{n}\) in the relevant instance of (153). We note that for \(N\geq 3\) there is no pole at the origin. 
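The contour representation is easy to validate numerically in the single-server case, where the closed form (155) is available: discretizing the unit circle in \(\theta\) (the trapezoidal rule is spectrally accurate for periodic integrands) and inserting \(R_{\rm cut}=1/r\) into (144) should reproduce (155). The number of quadrature points and the test values of \(t\), \(r\) below are arbitrary.

```python
# Numerical check of the contour representation (144) for N = 1 against the closed form (155).
import numpy as np
from scipy.special import iv

def p_cut_contour(t, r, mu1=1.0, n_theta=512):
    theta = np.linspace(0.0, 2.0*np.pi, n_theta, endpoint=False)
    z = np.exp(1j*theta)
    R_cut = 1.0/r                                            # single-server cut function
    integrand = (z - 1.0/z)**2 * np.exp(np.sqrt(r)*t*(z + 1.0/z)) * R_cut
    contour = integrand.mean().real                          # (1/2pi) * integral over theta
    return -0.5*r*mu1*np.exp(-(1.0 + r)*t)*contour

def p_cut_closed(t, r):
    return np.exp(-(1.0 + r)*t)*iv(1, 2.0*np.sqrt(r)*t)/(np.sqrt(r)*t)

t, r = 2.0, 0.8
print(p_cut_contour(t, r), p_cut_closed(t, r))               # should agree to many digits
```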
Next, we write \[z^{2}+2\alpha_{\ell}z+1=(z-\beta_{\ell})(z-1/\beta_{\ell})\,, \tag{160}\] where \(\beta_{\ell}\) is chosen to be the unique root of the pair of roots \(\beta_{\ell}=-\alpha_{\ell}\pm\sqrt{\alpha_{\ell}^{2}-1}\) that lies within the unit circle \(|\beta_{\ell}|<1\). In each pair, one always lies inside and one outside the unit circle, and they are complex conjugates. With this in mind, we can express (154) as \[c_{n}=-\sum_{k=1}^{2N-3}\beta_{k}^{n}\gamma_{k}\,,\quad\gamma_{k}\equiv\beta_ {k}^{2N-5}(1-\beta_{k}^{2})/D_{k}\,, \tag{161}\] with \[D_{k}\equiv\prod_{\ell=1\atop\ell\neq k}^{2N-3}(\beta_{k}-\beta_{\ell})(\beta _{k}-1/\beta_{\ell})\,. \tag{162}\] In terms of these quantities, the cut PDF for \(N\geq 3\) reads \[P_{\rm cut}(t)=\frac{\sqrt{r}}{2}\left(\prod_{k=2}^{N-1}\mu_{k}\right)e^{-(1+ r)t}\left[-c_{0}\cdot I_{0}(\sqrt{4r}t)+2\sum_{k=1}^{2N-3}\gamma_{k}\cdot \sum_{n=1}^{\infty}\beta_{k}^{n}I_{n}(\sqrt{4r}t)\right]\,. \tag{163}\] It is useful to recall that \(c_{0}=-\sum_{k=1}^{2N-3}\gamma_{k}\). The generalized Marcum Q-function for order \(\nu\) is defined by the integral \[Q_{\nu}(a,b)\equiv\frac{1}{a^{\nu-1}}\int_{0}^{b}dx\,x^{\nu}\exp\left\{{ \frac{1}{2}}(x^{2}+a^{2})\right\}I_{\nu-1}(x)\,, \tag{164}\] where \(I_{\nu}(x)\) is the modified Bessel function of the first kind of order \(\nu\). It constitutes a cumulative distribution function (CDF) in the variable \(b\). The original Marcum Q-function \(Q(a,b)\) is the special case of the generalized Q-function \(Q_{\nu}(a,b)\) for \(\nu=1\). Its complementary function \(\bar{Q}(a,b)\equiv 1-Q(a,b)\) can be represented by the infinite Neumann expansion \[\bar{Q}(a,b)=e^{-(a^{2}+b^{2})/2}\sum_{\alpha=1}^{\infty}\left(\frac{b}{a} \right)^{\alpha}I_{\alpha}(ab)\,, \tag{165}\] which allows it to be extended to complex-valued arguments \(a,b\). The foregoing series is useful for numerical computation when \(|b/a|<1\). Otherwise, one may appeal to the alternative form \[\bar{Q}(a,b)=1-e^{-(a^{2}+b^{2})/2}\sum_{\alpha=0}^{\infty}\left(\frac{a}{b} \right)^{\alpha}I_{\alpha}(ab)\,. \tag{166}\] The Marcum Q-function was originally introduced in radar detection theory for non-fluctuating targets [19], and has subsequently found applications in communications and signal processing. Abate and Whitt [1] were first to use it in the context of queueing theory. Here we adopt it to write (163) in the more compact form \[P_{\rm cut}(t)=\frac{\sqrt{r}}{2}\left(\prod_{k=2}^{N-1}\mu_{k}\right)\left[- c_{0}\cdot e^{-(r+1)t}I_{0}(\sqrt{4r}t)+2e^{-(r+\mu_{1})t}\sum_{k=1}^{2N-3} \gamma_{k}\cdot e^{\sigma_{k}t}\bar{Q}\left(a_{k}(t),b_{k}(t)\right)\right] \tag{167}\] where \[a_{k}(t)=\left(\sqrt{4r}t/\beta_{k}\right)^{1/2}\,,\quad b_{k}(t)=\left(\sqrt{ 4r}\beta_{k}t\right)^{1/2}\,, \tag{168}\] and these are in general complex valued. A robust numerical algorithm for the computation of the Marcum Q-function4 that allows for complex arguments has been given in [11]. Footnote 4: Direct implementation of (165), (166) also yields goods results. For \(N=2\), there is an additional pole at the origin. In this case, we have \[c_{n}=\oint_{\cal C}\frac{dz}{2\pi i}\,z^{n-2}\frac{\left(1-z^{2}\right)^{2}} {z^{2}+2\alpha_{1}z+1}\,, \tag{169}\] from which it is evident that a pole at the origin contributes when \(n=0,1\). We find that \(c_{0}=2\beta_{1}\), \(c_{1}=\beta_{1}^{2}\) and, for \(n\geq 2\), \(c_{n}=-\beta_{1}^{n-1}(1-\beta_{1}^{2})\). 
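Before completing the two-server case below, note that the Neumann expansion (165) for \(\bar{Q}(a,b)\) is easy to validate for real arguments, since \(\bar{Q}(a,b)\) then coincides with the CDF of a non-central \(\chi^{2}\) variable with two degrees of freedom and non-centrality \(a^{2}\), evaluated at \(b^{2}\). The number of retained terms and the test arguments in the sketch are arbitrary; complex arguments can be handled by evaluating the same sum term by term.

```python
# The complementary Marcum Q-function via the Neumann expansion (165), checked against
# scipy's noncentral chi-square CDF:  Qbar(a, b) = ncx2.cdf(b**2, df=2, nc=a**2).
import numpy as np
from scipy.special import iv
from scipy.stats import ncx2

def qbar_neumann(a, b, n_terms=60):
    k = np.arange(1, n_terms + 1)
    return np.exp(-(a**2 + b**2)/2.0) * np.sum((b/a)**k * iv(k, a*b))

a, b = 2.0, 1.0
print(qbar_neumann(a, b), ncx2.cdf(b**2, 2, a**2))    # the two values should agree closely
```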
It follows that \[P_{\rm cut}(t)=\sqrt{r}e^{-(r+1)t}\left[\beta_{1}I_{0}(\sqrt{4r}t)+I_{1}( \sqrt{4r}t)-\left(\frac{1}{\beta_{1}}-\beta_{1}\right)\sum_{n=1}^{\infty}\beta _{1}^{n}I_{n}(\sqrt{4r}t)\right]\,, \tag{170}\] and it turns out that \(\beta_{1}=\min(\sqrt{4r},1/\sqrt{4r})\), so that \[\frac{1}{\beta_{1}}-\beta_{1}=\left|\sqrt{4r}-\frac{1}{\sqrt{4r}}\right|\,. \tag{171}\] This is similar to the expression for the BP distribution as an infinite series of Bessel functions that was previously derived by Arora [3] for the two-server case. In terms of the complementary Marcum Q-function, the two-server result becomes \[P_{\rm cut}(t)=2e^{-(r+1)t}\left[\min\left(r,\tfrac{1}{4}\right)\cdot I_{0}( \sqrt{4r}t)+I_{1}(\sqrt{4r}t)-\left|r-\tfrac{1}{4}\right|\cdot e^{2(r+1/4)t} \bar{Q}\left(a_{1}(t),b_{1}(t)\right)\right]\,, \tag{172}\] and we may note that \[a_{1}(t)=\left\{\begin{array}{ll}\sqrt{t}&\quad\mbox{for $r\leq 1/4$}\;,\\ \sqrt{4rt}&\quad\mbox{for $r>1/4$}\;,\end{array}\right.\quad b_{1}(t)=\left\{ \begin{array}{ll}\sqrt{4rt}&\quad\mbox{for $r\leq 1/4$}\;,\\ \sqrt{t}&\quad\mbox{for $r>1/4$}\;.\end{array}\right. \tag{173}\] In this instance, the arguments of the Marcum Q-function are real. ## 7 Summary Statistics ### Analytical In order to test how the numerical algorithms for computing the BP distribution, developed in this work, perform across a wide range of input parameters \((r,N)\) in the ergodic region, we plot various summary statistics against a grid of these parameters. We can then ascertain (i) whether the observed variations are sufficiently regular and continuous, and (ii) how well the values approach the analytically calculated asymptotic limits as the number of servers grows. The results presented here are also useful in determining the values of \(N\) beyond which the asymptotic limit can serve as a proxy for the exact distribution, given a desired level of accuracy. Since the mean and higher moment diverge rapidly as the traffic intensity \(r\) approaches unity, where heavy-tailed behaviour sets in, we seek summary statistics that remain finite at \(r=1\) Figure 9: BP distribution via the complex-pole method for \(r=0.9\), \(N=15\). The NW graph is the cut function \(R_{\rm cut}(\cdot)\) on a logarithmic scale as a function of \(\tau\). The NE graph is the unit circle in the complex \(z\)-plane displaying the locations of the \(\alpha\)-poles (blue) and \(\beta\)-poles (red) within the unit circle. The single real \(\alpha\)-pole lies just outside the unit circle. The SW graph compares the survival function from the spectral method with that from the complex-pole method. The SE graph is the same comparison for the negative base-10 logarithm of the survival function and emphasizes agreement in the tail. Figure 10: _Note._ BP entropy versus number of servers (\(N\)) for various values of the traffic intensity (r). Each computed coloured dashed curve overlays a black solid curve that represents the analytical asymptotic form evaluated with the corresponding parameters \((r,N)\). The dotted black lines are the strict \(N\to\infty\) limiting curves for each value of \(r\). Figure 11: _Note._ BP log-mean versus number of servers (\(N\)) for various values of the traffic intensity (r). Each computed coloured dashed curve overlays a black solid curve that represents the analytical asymptotic form evaluated with the corresponding parameters \((r,N)\). The dotted black lines are the strict \(N\to\infty\) limiting curves for each value of \(r\). and can be computed conveniently. 
For this purpose, we have chosen the differential entropy \(H\equiv\langle-\ln(P(T))\rangle_{T}\) and the log-mean \(L\equiv\langle\ln(T)\rangle_{T}\). The log-mean is widely used in radar detection theory to estimate the shape parameters of candidate heavy-tailed distributions for high-resolution radar clutter from collected experimental data [27]. Figure 10 presents the BP differential entropy plotted against number of servers \(N\) for various values of the traffic intensity \(r\). It includes a comparison with asymptotic limits as detailed in the caption. Figure 11 presents the BP log-mean plotted against number of servers \(N\) for various values of the traffic intensity \(r\). It also includes a comparison with asymptotic limits as detailed in the caption. In Figure 12, the BP mean is plotted against traffic intensity \(r\) as its base-10 logarithm, for various numbers of servers \(N\), and is compared with the known exact results. All these graphs demonstrate consistent behaviour for the numerical computations, agreement with exact results, and the expected approach to asymptotic limits. ### Simulation We have implemented a discrete event simulation (DES) for the M/M/\(c\) system. Adhering to the ergodic region (\(0<r<1\)), we run the simulation for a long time period as a steady-state simulation that is initialized to the empty state, and collect the times (epochs) at which an empty system becomes non-empty and at which a non-empty system becomes empty. These data are used to generate the empirical partial BP distribution from which summary statistics can be estimated. The goal is to compare the analytical results for various summary statistics with their empirical estimates across a wide range of the input-parameter pairs \((r,N)\) in order to verify the theoretical results derived in this work. A simulation is run for a given parameter pair whenever the expected number of BPs generated in the maximum tolerated simulation time \(T_{\rm stop}\) is at least 10. The maximum simulation time was taken to be \(10^{6}\) multiples of the mean treatment time. According to the theory of regenerative processes [8], the expected number of regeneration cycles (empty periods followed by a BP) in time \(T_{\rm stop}\) is simply given by \(n_{\rm reg}=T_{\rm stop}/[(m_{\rm bp}+1/r)\cdot T_{\rm scl}]\), where \(T_{\rm scl}\) is a timescale needed to ensure that time in both the simulation and the analysis are measured in the same units. In the simulation, we set the mean treatment time to unity so that the simulation clock runs in units of the mean treatment time. In this case we must set \(T_{\rm scl}=1/N\). The nearest-neighbour estimator [17; 30] is used for the empirical differential entropy. This choice avoids calculation of a kernel density estimator (KDE) for the empirical PDF that is required by other empirical entropy estimators [5]. The KDE is problematic for distributions with semi-infinite support because undesirable artefacts are inevitably produced at the boundary. For a sample of size \(M\), \(\{T_{i}:1\leq i\leq M\}\), the nearest-neighbour estimator is given by \[\hat{H}\equiv\frac{1}{M}\sum_{i=1}^{M}\ln[(M-1)\rho_{i}]+\ln 2+\gamma_{\rm e}\,, \tag{174}\] where \(\rho_{i}\equiv\min_{j\neq i}\|T_{i}-T_{j}\|\) is the nearest-neighbour Euclidean distance of \(T_{i}\) from all other members of the sample. 
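For concreteness, a minimal one-dimensional implementation of (174) is sketched below, tested on a unit-mean exponential sample whose differential entropy is exactly 1; the sample size and seed are arbitrary.

```python
# The nearest-neighbour entropy estimator of (174), tested on an Exp(1) sample.
import numpy as np

def nn_entropy(sample):
    x = np.sort(np.asarray(sample, dtype=float))
    d = np.diff(x)
    rho = np.empty_like(x)                       # nearest-neighbour distances in one dimension
    rho[0], rho[-1] = d[0], d[-1]
    rho[1:-1] = np.minimum(d[:-1], d[1:])
    M = len(x)
    return np.mean(np.log((M - 1) * rho)) + np.log(2.0) + np.euler_gamma

rng = np.random.default_rng(0)
print(nn_entropy(rng.exponential(size=20000)))   # should be close to 1.0
```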
Due to the small sample numbers in many cases (_viz._\(r\) close to unity, or large \(N\)) and lack of alternatives for the entropy, the bootstrap method [14] has been used to generate the confidence intervals that provide the error bars in the graphs. The significance level is taken to be \(\alpha=0.01\). Thus, the displayed error bars in Figures 13 and 14, for the differential entropy and log-mean, respectively, indicate the central 99% confidence intervals. In cases where only a dot appears, the length of the confidence interval is less than the dot size. The nearest-neighbour entropy estimator is somewhat problematic for the bootstrap method as it generates duplicates, for which the nearest-neighbour distance is zero, leading to logarithmic divergence. We have adopted two strategies to address this difficulty: (i) removal of duplicates after bootstrap resampling but prior to evaluating the estimator, and (ii) constructing an ensemble of nearest-neighbour distances from the entire sample and bootstrapping on the nearest-neighbour ensemble. While these mitigations have different draw-backs, they are observed to perform equally well when tested on a large variety of simple distributions and compared with accurate results for the confidence intervals obtained from repeated Monte Carlo simulation trials. Figure 13 presents the BP differential entropy versus traffic intensity \(r\) for various server numbers \(N\), with overlaid simulation data as indicated by the dots with error bars. The green dashed curve is the extreme large-\(N\) limit based on a single exponential given by \(\bar{F}(t)\sim_{N\gg 1}e^{-t/m_{\rm bp}}\) with the mean \(m_{\rm bp}\) computed for \(N=60\). The dashed black curve is the finer large-\(N\) asymptotic form based on the two-exponential mixture given in (143) and computed for \(N=60\). It aligns very closely with the exactly calculated result. Figure 14 presents the BP log-mean versus traffic intensity \(r\) for various server numbers \(N\), with overlaid simulation data as indicated by the dots with error bars. The green dashed curve is the extreme large-\(N\) limit based on a single exponential given by \(\bar{F}(t)\sim_{N\gg 1}e^{-t/m_{\rm bp}}\) with the mean \(m_{\rm bp}\) computed for \(N=50\). The dashed black curve is the finer large-\(N\) asymptotic form based on the two-exponential mixture given in (143) and computed for \(N=50\). It aligns very closely with the exactly calculated result. ## 8 Conclusions This paper has developed two distinct methods for generating explicit exact results for the distribution of the partial BP pertaining to the M/M/\(c\) queue and a variety of models that generalize it with priority levels and distinct arrival classes. The spectral method allows for a robust and efficient numerical implementation. The algebraic method furnishes closed-form results for the PDF and SF that elucidate the analytical structure of the problem. In particular, for any given number of servers, it has identified a unique polynomial that completely characterizes the distribution. The present discussion has also served to connect previous diverse approaches to the problem. Let the RV \({\cal N}_{\rm b}(t)=0,1,\ldots,N\) denote the number of servers that are busy at time \(t\). Let the RVs \(T_{n}\), \(n=1,2,\ldots,N\), denote the unit descent times \[T_{n}\equiv\min\left\{t:{\cal N}_{\rm b}(t)=k-1|{\cal N}_{\rm b}(0^{+})=k\right\}\,, \tag{175}\] with their MGFs denoted by \(\eta_{n}(s)\equiv\langle e^{-sT_{n}}\rangle_{T_{n}}\). 
Omahen and Marathe [22] have shown that, for \(n=1,2,\ldots,N-1\), \[\eta_{n}(s)=\frac{n\mu}{s+n\mu+\lambda-\lambda\eta_{n+1}(s)}\,,\quad\eta_{N}(s )=\frac{N\mu}{s+N\mu+\lambda-\lambda\eta_{N}(s)}\,. \tag{176}\] This may be rearranged as \[\mu_{n}-(s+\lambda+\mu_{n})\eta_{n}(s)+\lambda\eta_{n}(s)\eta_{n+1}(s)=0\,, \tag{177}\] where \(\mu_{n}\equiv n\mu\). Upon multiplying both sides by the product \(\eta_{1}(s)\cdots\eta_{n-1}(s)\), and introducing the RV convolution \[\phi_{n}(s)=\prod_{k=1}^{n}\eta_{k}(s)\,, \tag{178}\] we obtain \[\mu_{n}\phi_{n-1}(s)-(s+\lambda+\mu_{n})\phi_{n}(s)+\lambda\phi_{n+1}(s)=0\,, \tag{179}\] consistent with (3). Also, we have from (178) that \(\phi_{N}(s)=\phi_{N-1}(s)\eta_{N}(s)\), noting that \(\eta_{N}(s)\) is a solution of the quadratic equation \[\lambda\eta_{N}^{2}(s)-(s+\mu_{N}+\lambda)\eta_{N}(s)+\mu_{N}=0\,, \tag{180}\] and is given by \(\eta_{N}(s)=\psi_{-}(s)\), with the minus branch chosen because \(\psi_{-}(0)=1\), whereas \(\psi_{+}(0)=1/r>1\). This completes the connection with (3). By construction, \(\eta_{1}(s)\) is the MGF of the partial BP. From the identification \(\phi_{1}(s)=\eta_{1}(s)\), it follows that, independent of the interpretation of the quantities \(\phi_{k}(s)\), \(k=2,3,\ldots\), the quantity \(\phi_{1}(s)\), derived as the solution of the system (179), also represents the MGF of the partial BP. The authors gratefully acknowledge useful discussions with Dr. Stephen Bocquet.
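As a closing illustration of this connection, the backward sweep (176) gives a very compact way to evaluate \(\phi_{1}(s)\): start from \(\eta_{N}=\psi_{-}(s)\) obtained from (180) and iterate downwards. The parameters in the sketch below are arbitrary; for a large number of servers with \(\lambda/\mu=\rho\) the value should be close to the M/M/\(\infty\) result (122), since the boundary level is then almost never reached during a busy period.

```python
# The continued-fraction recurrence (176): eta_N = psi_-(s) from (180), then a backward
# sweep down to eta_1(s) = phi_1(s), the partial-BP MGF.
import numpy as np
from scipy.special import hyp1f1

def phi1_mmc(s, lam, mu, N):
    b = s + N*mu + lam
    eta = (b - np.sqrt(b*b - 4.0*lam*N*mu)) / (2.0*lam)     # psi_-(s), the root with eta(0) = 1
    for n in range(N - 1, 0, -1):                           # eq. (176), n = N-1, ..., 1
        eta = n*mu / (s + n*mu + lam - lam*eta)
    return eta

s, rho = 0.7, 1.9
print(phi1_mmc(s, lam=rho, mu=1.0, N=60))
print(phi1_mmc(0.0, lam=rho, mu=1.0, N=60))                               # MGF at 0 -> 1
print(1.0 - s/(s + 1.0)*hyp1f1(s + 1, s + 2, rho)/hyp1f1(s, s + 1, rho))  # M/M/inf value, (122)
```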
2305.01579
Why So Gullible? Enhancing the Robustness of Retrieval-Augmented Models against Counterfactual Noise
Most existing retrieval-augmented language models (LMs) assume a naive dichotomy within a retrieved document set: query-relevance and irrelevance. Our work investigates a more challenging scenario in which even the "relevant" documents may contain misleading or incorrect information, causing conflict among the retrieved documents and thereby negatively influencing model decisions as noise. We observe that existing LMs are highly brittle to the presence of conflicting information in both the fine-tuning and in-context few-shot learning scenarios. We propose approaches for handling knowledge conflicts among retrieved documents by explicitly fine-tuning a discriminator or prompting GPT-3.5 to elicit its discriminative capability. Our empirical results on open-domain QA show that these approaches significantly enhance model robustness. We also provide our findings on incorporating the fine-tuned discriminator's decision into the in-context learning process, proposing a way to exploit the benefits of two disparate learning schemes. Alongside our findings, we provide MacNoise, a machine-generated, conflict-induced dataset to further encourage research in this direction.
Giwon Hong, Jeonghwan Kim, Junmo Kang, Sung-Hyon Myaeng, Joyce Jiyoung Whang
2023-05-02T16:28:10Z
http://arxiv.org/abs/2305.01579v3
Discern and Answer: Mitigating the Impact of Misinformation in Retrieval-Augmented Models with Discriminators ###### Abstract Most existing retrieval-augmented language models (LMs) for question answering assume all retrieved information is factually correct. In this work, we study a more realistic scenario in which retrieved documents may contain misinformation, causing conflicts among them. We observe that the existing models are highly brittle to such information in both fine-tuning and in-context few-shot learning settings. We propose approaches to make retrieval-augmented LMs robust to misinformation by explicitly fine-tuning a discriminator or prompting to elicit discrimination capability in GPT-3. Our empirical results on open-domain question answering show that these approaches significantly improve LMs' robustness to knowledge conflicts. We also provide our findings on interleaving the fine-tuned model's decision with the in-context learning process, paving a new path to leverage the best of both worlds. ## 1 Introduction The general framework of retrieval-augmented language models (LMs) for question answering (QA) consists of retrieving documents relevant to a question using a sparse Robertson et al. (2009) or a dense Karpukhin et al. (2020) retriever, and processing the retrieved documents using encoder Devlin et al. (2019) or decoder Raffel et al. (2020) models to derive an answer. Despite being used in many practical applications, most retrieval-augmented LMs Guu et al. (2020); Lewis et al. (2020); Izacard and Grave (2021); Lewis et al. (2021) are predicated on a naive assumption: the documents in the retrieved set are factual and contain consistent information. Such an assumption undermines the reliability of the LMs and invalidates the trustfulness of the generated answers. Inconsistencies caused by conflicting information among retrieved documents may occur for different reasons such as updated/outdated or counterfactual information. With the ever-increasing _misinformation_ on the Web Vicario et al. (2016); Hossain et al. (2020); Zheng et al. (2022), this paper focuses on a subset of the problem: handling misinformation in a set of retrieved documents. We study how robust models are in the presence of misinformation and the ensuing knowledge conflict in open-domain question answering (ODQA). To emulate the conflict among the retrieved documents, we adopt the entity replacement framework from Longpre et al. (2021) to deliberately perturb the documents. This allows us to study a hypothetical yet conceivable scenario in which certain entities in the retrieved texts are altered, causing the documents to contain conflicting information and QA models to generate a wrong answer even when a gold document is present (Figure 1). Our empirical results reveal that existing models such as FiD Izacard and Grave (2021) and GPT-3 Brown et al. (2020) are highly susceptible to misinformation. To alleviate this problem, we propose inducing the discrimination capabilities and exploiting them in the fine-tuned (FiD; SS2) and in-context learned (GPT-3; SS3) models to let them focus on trustworthy information. We combine the Figure 1: In an ODQA setting, (a) a question is used to retrieve a set of (b) relevant documents which may contain conflict-causing documents that render (c) the retrieval-augmented LMs unreliable. Misinformation in a document is deemed a major cause of conflicts. 
strengths of fine-tuning and prompting based on our findings that the (i) fine-tuned LM demonstrates high precision in discerning authentic from counterfactual documents, and (ii) large language models (LLMs) leverage their rich parametric knowledge to perform tasks with limited training data. Our approach highlights the potential benefits of leveraging lightweight fine-tuned LMs to assist LLMs. ## 2 Improving Robustness of Fine-Tuned Models with Discriminator Training We hypothesize that infusing a retrieval-augmented LM with an inductive bias about which document is perturbed or not (i.e., counterfactual or authentic) enables the model to be robust to misinformation when generating an answer to a given question. We equip a QA system with a discriminator learned jointly with a QA task, to enhance the discriminative information in the encoder embeddings so that the decoder can capture such information when deriving an answer (Figure 2 (a)). Our model builds upon FiD (Izacard and Grave, 2021), a retrieval-augmented encoder-decoder LM that leverages DPR (Karpukhin et al., 2020) to retrieve a set of \(M\) documents from a text corpus \(\{d_{1},d_{2},...,d_{N}\}\in D\), where \(d_{i}\) is retrieved by a similarity search with a question embedding along a document index of size \(N\) encoded by a pre-trained BERT (Devlin et al., 2019). Each document \(d_{m}\) is prepended with a question \(q\) to be processed independently by a T5 (Raffel et al., 2020) encoder. Then, the resulting encoder representations are concatenated along the sequence dimension as follows: \(H=\big{\|}_{m=1}^{M}\,Encoder(q,d_{m})\), \(H\in\mathbb{R}^{M\times T}\), where \(T\) is the maximum sequence length per document. Since each document is either perturbed or original, we add a discriminator on top of the encoder to be jointly learned with the decoder's answer generation by optimizing the following objective: \[L=\text{-}log\,p_{dec}(y|H)-\sum_{m=1}^{M}BCE(log\,p_{disc}(t_{m}|h_{m}),t_{m})\] \(p_{dec}\) and \(p_{disc}\) denote the decoder and discriminator probability distribution, respectively. \(y\) is the ground-truth answer sequence, \(h_{m}\in H\) is an encoder representation for the \(m\)-th document, \(t_{m}\in\{0,1\}\) is the perturbation label, and \(BCE\) indicates the binary cross entropy loss. ## 3 In-Context Learning Scheme for Model Robustness against Misinformation In response to the recent surge of interest in prompt learning using LLMs like GPT-3 (Brown et al., 2020), we investigate the effectiveness of prompting an LLM to figure out the perturbed documents before answering an open-domain question. Our input prompt consists of (i) a set of retrieved documents partly perturbed by our perturbation scheme in SS4.1, followed by (ii) a task-specific instruction (Figure 2 (b)) that prompts the model to explicitly find the perturbed documents to ignore and generate a correct answer to (iii) the question that follows afterwards (details are in Figure 4, Appendix B). Figure 2: Illustration of our approaches to enhancing robustness to misinformation. (a) Along with the decoder, the discriminator is jointly trained with the downstream task (QA), making the encoder produce corrupt-aware embeddings. (b) GPT-3 is prompted to find the perturbed documents before generating an answer. A zero-shot example is shown for brevity. (c) Fine-tuned discriminator output is injected into the prompt for GPT-3. As an extension, we incorporate the discriminator (SS2) to the prompt-based approach. 
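Before detailing that extension, the joint objective of Section 2 can be summarized by the schematic PyTorch sketch below. It is illustrative only, not our actual implementation: the tensor shapes, the mean-pooling of encoder states into one vector per document, and the name `disc_head` are assumptions made for the example, and the BCE term is averaged over documents rather than summed.

```python
# Schematic sketch of the joint loss of Section 2: seq2seq NLL plus a per-document
# discriminator BCE on pooled encoder states.  Shapes and pooling are illustrative.
import torch
import torch.nn.functional as F

def joint_loss(enc_states, perturb_labels, lm_logits, target_ids, disc_head):
    # enc_states:     (M, T, d) encoder states for M question+document pairs
    # perturb_labels: (M,)      1 if the document was perturbed, else 0
    # lm_logits:      (L, V)    decoder logits for the answer sequence
    # target_ids:     (L,)      ground-truth answer token ids
    lm_loss = F.cross_entropy(lm_logits, target_ids)         # -log p_dec(y | H)
    pooled = enc_states.mean(dim=1)                          # (M, d), one vector per document
    disc_logits = disc_head(pooled).squeeze(-1)              # (M,)
    disc_loss = F.binary_cross_entropy_with_logits(disc_logits, perturb_labels.float())
    return lm_loss + disc_loss                               # BCE averaged over the M documents

# usage with toy tensors
M, T, d, L, V = 5, 16, 32, 8, 100
loss = joint_loss(torch.randn(M, T, d), torch.randint(0, 2, (M,)),
                  torch.randn(L, V), torch.randint(0, V, (L,)),
                  torch.nn.Linear(d, 1))
loss.backward()
```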
Instead of making GPT-3 find the perturbed documents, we insert FiD's discriminator output into the prompt. This way, we combine the GPT-3's rich parametric knowledge and the FiD's fine-tuned, task-specific discriminator of high precision (Figure 2 (c)). ## 4 Experiments We measure the performance of FiD and GPT-3 in the following settings. The **Parametric (w/o Retrieval)** setting uses only parametric knowledge to answer a question. The **Semi-Parametric** setting uses retrieved documents and parametric knowledge; we measure how the infused misinformation affects the models' performance. Our methods with discrimination (**Disc**) capabilities are denoted as **Semi-Parametric + Disc**: the fine-tuned discriminator is superscripted as \(\texttt{Disc}^{\texttt{FiD}}\) and the purely prompt-based discrimination as \(\texttt{Disc}^{\texttt{Inst}}\). To fit the maximum input length of GPT-3, we use the top 5 documents from the retrieved set for both GPT-3 and FiD for a fair comparison. To deal with the high cost of using OpenAI's GPT-3 davinci-003 as in Le et al. (2022), we fix the number of data instances to 256 for the dev set. The generated outputs from GPT-3 are ensembled over the \(k\) instances to mitigate the in-context sample sensitivity observed in Zhao et al. (2021). Details are available in Appendix B and C. ### Generating Adversarial Documents To emulate a scenario in which misinformation is present among retrieved documents, we generate counterfactual documents by adopting an entity-centric perturbation strategy (Longpre et al., 2021). This involves taking a document and substituting a gold answer with a randomly sampled named entity of the same type (e.g., Michael Jordan (PER) is replaced with Kobe Bryant (PER)). We measure the LMs' performance by controlling the probability of perturbing the retrieved documents (**0%, 25%, 50%, 75%**). Details are in Appendix A. ### Brittleness of Retrieval-Augmented Language Models to Misinformation We analyze how brittle the retrieval-augmented LMs are in the presence of perturbed documents for the NQ-Open task (Kwiatkowski et al., 2019). In Table 1, we show that the performances of **Semi-Parametric** for both FiD and GPT-3 degrade significantly as the perturbation percentage increases, even when the gold documents are provided. We also note that in highly perturbed settings (\(\geq\)50%), GPT-3's **Semi-Parametric** becomes worse than its **Parametric (w/o Retrieval)** counterpart. Our results demonstrate that these seemingly strong models are easily affected by misinformation. ### Improved Robustness via Discriminators For FiD, we see that in **0%**, the original **Semi-Parametric** outperforms **Semi-Parametric w/ \(\texttt{Disc}^{\texttt{FiD}}\)** (Table 1). Note that our discriminator is built under the assumption that counterfactual documents always exist among the retrieved set. Such train-test discrepancy in **0%** and the inherent limitations of the multi-task learning framework (Ruder, 2017) are attributable to the performance drop. In **25%**, marginal performance drop is shown for **Semi-Parametric w/ \(\texttt{Disc}^{\texttt{FiD}}\)**. 
Nevertheless, it exhibits robust retention of performance when transitioning from **0%** to **25%**, unlike the huge drop \begin{table} \begin{tabular}{l l c c c c c} \hline \hline & & \multicolumn{3}{c}{**Dev Set**} & \multicolumn{2}{c}{**Test Set**} \\ \cline{3-6} & \multicolumn{1}{c}{**Perturbation \%**} & **0\%** & **25\%** & **50\%** & **75\%** & **50\%** \\ \hline \multirow{3}{*}{FiD} & Parametric (w/o Retrieval) & 12.11 & 12.11 & 12.11 & 12.11 & 13.22 \\ & Semi-Parametric & **62.89** & **50.00** & 31.64 & 19.92 & 34.26 \\ & Semi-Parametric w/ \(\texttt{Disc}^{\texttt{FiD}}\) & 50.78 & 49.61 & **42.97** & **31.25** & **42.84** \\ \hline \hline \multirow{3}{*}{GPT-3} & Parametric (w/o Retrieval) & 32.03 & 32.03 & 32.03 & **32.03** & 36.83 \\ & Semi-Parametric & 50.39 & 41.41 & 31.25 & 22.66 & 37.76 \\ \cline{1-1} & Semi-Parametric w/ \(\texttt{Disc}^{\texttt{Inst}}\) & 48.83 & 39.45 & 28.91 & 21.48 & 38.41 \\ \cline{1-1} & Semi-parametric w/ \(\texttt{Disc}^{\texttt{ribD}}\) & **51.56** & **45.70** & **33.98** & 26.95 & **42.19** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance in Exact Match (EM) scores on our sampled NQ-Open **dev** set (256 instances) and **test** set (full), according to the perturbation % of _perturbable_1 retrieved documents. For the **test** set, we evaluate for the **50%** case only due to budget constraints. The results for GPT-3 are ensembled over \(k=5\) instances (§4). in performance for **Semi-Parametric**. Results in both the **50%** and **75%** settings also denote the improved robustness of our FiD under the **Semi-Parametric w/ DiscFiD** scheme. For GPT-3, we observe that our instruction approach (**Disc\({}^{\texttt{Inst}}\)**) to elicit discriminability does not incite improvement. In Table 2, we show **Disc\({}^{\texttt{Inst}}\)**'s classification performance, where the GPT-3's prompt-based few-shot discriminator approach substantially underperforms its fine-tuned counterpart **Disc\({}^{\texttt{FiD}}\)**. This motivated us to provide **Disc\({}^{\texttt{FiD}}\)**'s output to GPT-3, which enhances the LLMs robustness in the **25%** and **50%** settings. The performance degradation for GPT-3 (**Semi-Parametric w/ DiscFiD**) in the **75%** setting when compared to **Parametric (w/o Retrieval)** can be partly attributed to the increased portion of noisy, irrelevant documents that GPT-3 was not instructed to disregard and GPT-3's propensity to reflect the information given in its in-context samples. A crucial inquiry here is _Why do we need to combine GPT-3 and **Disc\({}^{\texttt{FiD}}\)** despite its worse performance than the FiD counterpart?_ Note that our discriminator is easily trainable with our scalable perturbation framework (SS4.1). In a low-resource setting, where downstream task instances are scarce, GPT-3's few-shot learning capability shines. The lightweight fine-tuned LMs trained on an easily accessible subtask (e.g., perturbation classification) can, therefore, maximize GPT-3's capability. ### Enhanced In-Context Learning Stability In Figure 3, we plot the best, average and worst EM scores of GPT-3 over 5 different in-context samples. In-context learning is known for its high instability (Zhao et al., 2021; Min et al., 2022), and we discover that injecting the fine-tuned discriminator into the in-context learning process (**GPT-3 (Semi-parametric w/ DiscFiD**)) greatly improves the stability. 
This new facet along with the result in SS4.3 highlights the potential of leveraging both strengths of fine-tuning and in-context learning paradigms. ## 5 Related Work Retrieval-Augmented LMsBy using explicit retrievers, retrieval-augmented LMs aimed to efficiently capture world knowledge in a more interpretable manner (Guu et al., 2020). Retrieve-and-generate models followed to address the hallucination and knowledge update issues (Lewis et al., 2020; Izacard and Grave, 2021). Some scaled the size of retrieved documents (Lakhotia et al., 2021), while others adopted retrieval to reduce their parameter sizes with external memory (Borgeaud et al., 2022). While promising, most prior works disregard that in the wild, fabricated misinformation may be prevalent. Knowledge Conflicts and Answer CalibrationChen et al. (2022) investigated model behaviors in a knowledge conflict setting and used calibration (Kamath et al., 2020; Zhang et al., 2021) to abstain from answering when a conflict occurs. Our work, on the contrary, deals with improving the model's ability to distinguish gold from counterfactual information when confronted with knowledge conflict caused by misinformation, providing a correct answer rather than remaining silent. ## 6 Conclusion This work investigates the robustness of retrieval-augmented LMs when the retrieved documents include conflicting misinformation. We show that (i) both the fine-tuned LMs and in-context learned LLMs are brittle to the presence of perturbed information, and (ii) our perturbation discriminating approach significantly enhances the LMs' ability to recognize the gold information. Furthermore, (iii) we find that combining the fine-tuned discriminator's output with in-context samples substantially \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**FiD**} & \multicolumn{2}{c}{**GPT-3**} \\ \cline{2-7} & **Prec.** & **Rec.** & **F1** & **Prec.** & **Rec.** & **F1** \\ \hline **25\%** & 88.18 & 65.10 & 74.90 & 15.41 & 45.37 & 23.01 \\ **50\%** & 94.31 & 63.78 & 76.10 & 30.29 & 48.59 & 37.32 \\ **75\%** & 96.37 & 64.46 & 77.25 & 42.03 & 49.14 & 45.31 \\ \hline \hline \end{tabular} \end{table} Table 2: Discriminator performance on our full NQ-Open dev set. Each row corresponds to perturbation %. Figure 3: Given the fine-tuned discriminator’s output, prompting on GPT-3 shows improved stability (green), shown by the decrease in performance variance. improves the LLMs' stability, creating a new avenue for future work to utilize the advantages of both learning paradigms. ## Limitations Using GPT-3 davinci-003 for in-context learning incurs substantial cost because of its price ($0.02 per 1,000 tokens), we base our study on the NQ-Open dataset only. We also limit the results of our full NQ-Open test set evaluation on the **50%** perturbation case for GPT-3, and limit the number of training instances (\(k\)) used for ensembling the results to \(k=5\) due to budget constraints. While there are baselines such as RAG Lewis et al. (2020) available in the field of generative retrieval-augmented LMs, most of them were either not open-sourced or took extremely long time to train. We also follow the settings in the original FiD paper Izacard and Grave (2021), dealing with the Wikipedia documents (NQ-Open) instead of the real-world Web-scale documents. We acknowledge that the adoption of the entity-perturbation framework Longpre et al. (2021) in this paper may appear artificial. 
Thus, it would be beneficial for future work to explore the application of our approach in a larger-scale Web environment, since the problems caused by misinformation in such a realistic setting require further investigation. ## Ethics Statement Our work aims to improve the robustness of retrieval-augmented LMs when misinformation is present among the retrieved documents. To emulate this scenario, we purposefully, and without any ill intention, perturbed the retrieved documents with the entity-perturbation framework adopted from prior work Longpre et al. (2021). We propose to address the issue of misinformation-infused documents in the ODQA setting. ## Acknowledgements This research was supported by IITP grants funded by the Korean government MSIT 2022-0-00369, 2020-0-00153 (Penetration Security Testing of ML Model Vulnerabilities and Defense) and NRF of Korea funded by the Korean Government MSIT 2018R1A5A1059921, 2022R1A2C4001594.
2308.10488
Enhancing Medical Image Segmentation: Optimizing Cross-Entropy Weights and Post-Processing with Autoencoders
The task of medical image segmentation presents unique challenges, necessitating both localized and holistic semantic understanding to accurately delineate areas of interest, such as critical tissues or aberrant features. This complexity is heightened in medical image segmentation due to the high degree of inter-class similarities, intra-class variations, and possible image obfuscation. The segmentation task further diversifies when considering the study of histopathology slides for autoimmune diseases like dermatomyositis. The analysis of cell inflammation and interaction in these cases has been less studied due to constraints in data acquisition pipelines. Despite the progressive strides in medical science, we lack a comprehensive collection of autoimmune diseases. As autoimmune diseases globally escalate in prevalence and exhibit associations with COVID-19, their study becomes increasingly essential. While there is existing research that integrates artificial intelligence in the analysis of various autoimmune diseases, the exploration of dermatomyositis remains relatively underrepresented. In this paper, we present a deep-learning approach tailored for medical image segmentation. Our proposed method outperforms the current state-of-the-art techniques by an average of 12.26% for U-Net and 12.04% for U-Net++ across the ResNet family of encoders on the dermatomyositis dataset. Furthermore, we probe the importance of optimizing loss function weights and benchmark our methodology on three challenging medical image segmentation tasks.
Pranav Singh, Luoyao Chen, Mei Chen, Jinqian Pan, Raviteja Chukkapalli, Shravan Chaudhari, Jacopo Cirrone
2023-08-21T06:09:00Z
http://arxiv.org/abs/2308.10488v1
Enhancing Medical Image Segmentation: Optimizing Cross-Entropy Weights and Post-Processing with Autoencoders ###### Abstract The task of medical image segmentation presents unique challenges, necessitating both localized and holistic semantic understanding to accurately delineate areas of interest, such as critical tissues or aberrant features. This complexity is heightened in medical image segmentation due to the high degree of inter-class similarities, intra-class variations, and possible image obfuscation. The segmentation task further diversifies when considering the study of histopathology slides for autoimmune diseases like dermatomyositis. The analysis of cell inflammation and interaction in these cases has been less studied due to constraints in data acquisition pipelines. Despite the progressive strides in medical science, we lack a comprehensive collection of autoimmune diseases. As autoimmune diseases globally escalate in prevalence and exhibit associations with COVID-19, their study becomes increasingly essential. While there is existing research that integrates artificial intelligence in the analysis of various autoimmune diseases, the exploration of dermatomyositis remains relatively underrepresented. In this paper, we present a deep-learning approach tailored for Medical image segmentation. Our proposed method outperforms the current state-of-the-art techniques by an average of 12.26% for U-Net and 12.04% for U-Net++ across the ResNet family of encoders on the dermatomyositis dataset. Furthermore, we probe the importance of optimizing loss function weights and benchmark our methodology on three challenging medical image segmentation tasks. ## 1 Introduction The development of potent CAD (Computer Aided Diagnosis) strategies has been aided by advances in computational power and image analysis algorithms over the past decade. Medical imaging is fundamental to these CAD methods. Obtaining accurate results from CAD techniques relies on acquiring high-quality medical imaging and corresponding annotation. These CAD approaches facilitate various tasks such as image classification, segmentation, spatial mapping, and tracking. Out of these, medical image segmentation is a particularly challenging task due to several complexities. For example, in skin lesion image segmentation, there exists significant intra-class variability and inter-class similarity. This issue is exacerbated by the presence of obscuration and low contrast, which makes the task of separating the affected area from the surrounding image more challenging. On the other hand, sometimes the data required to segment is very complex, with multiple fine-grained and hard-to-segment objects, for example, in the case of histopathology data of dermatomyositis (a kind of autoimmune disease). We provide a few examples of the large variability and low contrast in Figure 1, obscuration in Figure 2, and multiple hard-to-segment small objects in Figure 3. In addition to these modality-specific complexities, medical imaging datasets are considerably smaller than natural datasets. The main reason for this is the significant expenses and time involved in gathering, annotating medical datasets and privacy concerns. Medical imaging datasets can only be labeled by highly specialized clinicians instead of the possibility of crowdsourced labeling in the case of natural datasets. Privacy concerns pose significant challenges in the open sourcing of medical datasets, particularly for rare or emerging diseases. 
Medical datasets are typically restricted to institutional use, even when made available [16]. Despite significant progress in medical science, some diseases have not yet been fully comprehended [4]. Autoimmune diseases are a notable category in this context. The lack of a comprehensive catalog of autoimmune diseases, unlike other diseases, is attributed to the diverse nature of their onset and progression [24]. There are still important research questions for autoimmune diseases regarding environmental triggers, pathogenesis, cell inflammation, and interaction. Currently, there are over 80 classified autoimmune diseases. Immune-modulatory drugs are commonly employed for the treatment of autoimmune diseases. However, these drugs have a wide range of effects and lack specificity for autoimmune diseases. Unfortunately, their usage is often linked to other infections and malignant diseases as undesirable side effects. Patients often have limited or no response to these treatments due to the variability within these disorders. So, there is a pressing need for more advanced, fast, and accurate ways to find novel relationships and pathologies that can lead to more effective treatments for autoimmune diseases. To accomplish this, it is imperative to develop precise and adaptable techniques for analyzing autoimmune diseases related medical images. Implementing AI-based Computer-Aided Diagnosis (CAD) is a potential strategy for achieving this objective. In contrast to other diseases, however, lacking a definitive list and a limited understanding of autoimmune disorders presents a challenge. Consequently, there are few established data collection mechanisms for autoimmune diseases. These factors contribute to the paucity of research on the intersection of autoimmune diseases and CAD approaches. Most extant research in this field is either outdated or lacks open-source methodologies. The study of autoimmune diseases is paramount due to their increasing prevalence[5; 8; 11]. Autoimmune diseases impact a significant portion of the global population, ranging from 5% to 8%. These conditions cause considerable distress to patients and have been found to Figure 1: Samples from the demofit dataset for segmentation, we observe a large lesion color variability from left to right. Interestingly, the background color remains relatively consistent throughout the samples. We also observe a change in contrast from left to right. These confounding factors make the segmentation of skin lesions difficult. Figure 3: Semantic segmentation task as defined in Section 3 with input image on the left and the corresponding ground truth on the right. On top, we have a sample image (on the left) and the corresponding ground truth (on the right) from the dermatomyositis dataset. Similarly, a sample from the dermatofit dataset is in the middle, and a sample from the ISIC-2017 dataset is at the bottom. Unlike a single blob in the skin lesion dataset samples from dermofit and ISIC 2017, we observe that the histopathology whole slide image has many more fine-grained objects with hard-to-segment boundaries. For all the images on the right, the yellow area represents the region of interest (foreground), and the rest is the area other than the region of interest (background). Figure 2: This figure contains samples from the ISIC 2017 dataset. 
The ISIC-2017 dataset exhibits obscuration and significant intra-class variability, resembling the Dermofit dataset (from Figure 1) regarding inter-class similarity and low contrast (rightmost image). have connections with COVID-19, the primary cause of the recent worldwide pandemic [21, 23]. To bridge the divide, Van Buren [26] and Singh & Cirrone [22] have made attempts. The main focus of these studies is dermatomyositis. This rare autoimmune disease has received limited attention at the intersection of medical imaging and the application of AI (Artificial Intelligence) for medical image analysis. With this paper, * We improve upon the existing state-of-the-art approach [22] for dermatomyositis segmentation by an average of 12.26% for U-Net and 12.04% for Unet++ in Section 5.1. Additionally, we benchmark our approach on two other challenging skin-related datasets. * We study the impact of adding a post-processing autoencoder in addition to U-Net and U-Net++ on three medical imaging datasets in Section 5.1.1. * We investigate the significance of cross-entropy loss function weights on three challenging medical imaging datasets in Section 5.1.2. ## 2 Background Medical image segmentation separates the region of interest, usually a lesion, cells, or other anatomical region of interest, from the slide background. Traditional segmentation processes use pixel-level classification to group pixels into different categories; in the case of semantic segmentation, these would be background and foreground. But with the maturity of Convolutional Neural Networks(CNNs), Ronneberger [20] introduced U-Net - an autoencoder-based architecture for biomedical segmentation. The U-Net consists of an encoder and a decoder architecture, where the encoder acts as a feature extractor, and the decoder learns the mask by using the extracted features as input. In addition, the decoder also incorporates the feature maps from the encoder to improve scaling up the representation to the image mask; these connections are called "Skip-connections." Following U-Net, a wealth of architectures have spawned: U-Net++[27], DeepLab [1], DeepLabV3+ [2], and Feature Pyramid networks (FPN) [12]. All of these architectures build on the autoencoder architecture of U-Net with skip connections. To increase the receptive field of these architectures, various techniques have been introduced, such as dilated networks [18] and nesting architecture, as in the case of U-Net++[27]. Despite these advancements and complex architectures, U-Net remains the choice of architecture for medical image segmentation [17, 22, 26]. ### Application of Segmentation Techniques for Autoimmune diseases. Stafford [24] conducted a comprehensive survey to examine the application of AI in the context of autoimmune diseases. They observed the median size of autoimmune datasets is much smaller (99-540 samples per dataset) as compared to datasets pertaining to other medical modalities. The scarce available data poses a significant challenge in acquiring informative priors for artificial intelligence-based CAD approaches on these datasets resulting in sub-par performance. Furthermore, most methodologies for analyzing these datasets are antiquated and lack open-source availability[22]. To overcome these shortcomings Van Buren [26] proposed the use of U-Net for segmentation of whole slide images of dermatomyositis histopathology data and open-sourced their approach. 
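To make the encoder/decoder pattern with skip connections described above concrete, here is a minimal, illustrative PyTorch sketch of a one-level U-Net-style network. The channel widths, depth, and layer choices are assumptions for exposition and do not reproduce the exact architectures of [20], [22], or [26].

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, the usual U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One down / one up level, just enough to show the skip connection."""
    def __init__(self, in_ch=3, num_classes=2, base=64):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)          # encoder: feature extractor
        self.down = nn.MaxPool2d(2)
        self.enc2 = conv_block(base, base * 2)       # bottleneck features
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)       # decoder sees upsampled + skipped maps
        self.head = nn.Conv2d(base, num_classes, 1)  # per-pixel class logits

    def forward(self, x):
        s1 = self.enc1(x)                            # skip-connection source
        b = self.enc2(self.down(s1))
        u = self.up(b)
        d = self.dec1(torch.cat([u, s1], dim=1))     # concatenate skip with decoder path
        return self.head(d)                          # (N, num_classes, H, W) mask logits
```

Deeper variants simply stack more such down/up levels, and nested designs such as U-Net++ add further intermediate skip pathways.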
Given the considerable size of the whole slide images (WSI) at 1408 \(\times\) 1876, a tiling approach was employed to partition the WSI into smaller 256 \(\times\) 256 images, with padding. They also used a combination of Dice and Binary cross entropy loss to attenuate the problem of pixel distribution imbalance between the area surrounding the region of interest (background pixels) and the region of interest (foreground pixels). Due to this imbalance, the segmentation architecture tends to focus more on the area surrounding the region of interest for segmenta Figure 4: DEDL Architecture[22] with the APP (Autoencoder Post Processing) for dermatomyositis image segmentation as described in Section 2.1. We use this architecture as our baseline and propose changes to it to improve by around 12.26% for U-Net and 12.04% for U-Net++ - this is a considerable improvement over DEDL, as DEDL improved over the previous state-of-the-art approach Van Buren [26] by around 5% for segmentation. tion if unattended. Singh & Cirrone [22] further improved on this benchmark by using U-Net and introduced an "Autoencoder Post Processing" (APP) technique. The APP consists of stacked linear layers for the encoder and decoder. This makes the autoencoder much simpler than the convolution and skip connection based U-Net and U-Net++ architectures. After obtaining the mask from a U-Net or U-Net++, it is passed through the APP. Since, the autoencoder consists of only stacked linear layers it creates a noised version of the segmentation output from U-Net and U-Net++. A mean squared error is then calculated between the autoencoder's output and ground truth. During training, the model trained with the help of the MSE loss (calculated between the autoencoder output and the ground truth) and the cross entropy loss (calculated between the U-Net/U-Net++ output and the ground truth). This helps the model learn a more diverse set of features. The autoencoder is only used during the training process. Hence there is only a marginal increase in training time while the inference time remains constant. They stud \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Encoder} & \multirow{2}{*}{Technique} & \multicolumn{3}{c}{U-Net} \\ & & Baseline(w/o APP) & w/ Relu APP & w/ Gelu APP \\ \hline \multirow{2}{*}{ResNet-18} & DEDL & 0.4347 & 0.4608 & 0.4788 \\ & Ours & **0.5618** & **0.5479** & **0.5582** \\ \hline \multirow{2}{*}{ResNet-34} & DEDL & 0.4774 & 0.4467 & 0.4983 \\ & Ours & **0.5306** & **0.5571** & **0.5606** \\ \hline \multirow{2}{*}{ResNet-50} & DEDL & 0.3798 & 0.4187 & 0.3827 \\ & Ours & **0.5556** & **0.5495** & **0.5597** \\ \hline \multirow{2}{*}{ResNet-101} & DEDL & 0.3718 & 0.4074 & 0.4402 \\ & Ours & **0.5502** & **0.5678** & **0.5497** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance comparison of DEDL [22] and our approach on the dermatomyositis dataset for U-Net. We repeat all experiments five times with different seed values and report the IoU on the test in the 95% CI (confidence interval). 
\begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Encoder} & \multirow{2}{*}{Technique} & \multicolumn{3}{c}{U-Net++} \\ & & Baseline(w/o APP) & w/ Relu APP & w/ Gelu APP \\ \hline \multirow{2}{*}{ResNet-18} & DEDL & 0.5274 & 0.4177 & 0.4707 \\ & Ours & **0.5622** & **0.5679** & **0.5683** \\ \hline \multirow{2}{*}{ResNet-34} & DEDL & 0.3745 & 0.4535 & 0.4678 \\ & Ours & **0.5536** & **0.5685** & **0.5633** \\ \hline \multirow{2}{*}{ResNet-50} & DEDL & 0.4236 & 0.4685 & 0.4422 \\ & Ours & **0.5742** & **0.5698** & **0.5514** \\ \hline \multirow{2}{*}{ResNet-101} & DEDL & 0.4311 & 0.4265 & 0.4467 \\ & Ours & **0.57** & **0.5727** & **0.5692** \\ \hline \hline \end{tabular} \end{table} Table 2: Similar to Table 1, in this table, we compare the performance comparison of DEDL [22] and our approach on U-Net++. We report IoU scores averaged over five seed values in the 95% confidence interval (CI) over the test set of the Dermatomyositis dataset in this table. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**ResNet**} & \multicolumn{3}{c}{**U-Net**} \\ & & Baseline(w/o APP) & w/ Relu APP & w/ Gelu APP \\ \hline \multirow{4}{*}{**Dermofit**} & ResNet18 & 0.7388 & **0.7477** & 0.7467 \\ & ResNet34 & 0.7576 & **0.7633** & 0.7525 \\ & ResNet50 & 0.7364 & 0.7338 & **0.7401** \\ & ResNet101 & 0.7252 & 0.7213 & **0.7258** \\ \hline \multirow{4}{*}{**Dermatomyositis**} & ResNet18 & **0.5618** & 0.5479 & 0.5582 \\ & ResNet34 & 0.5306 & 0.5571 & **0.5606** \\ \cline{1-1} & ResNet50 & 0.5556 & 0.5495 & **0.5597** \\ \cline{1-1} & ResNet101 & 0.5502 & **0.5678** & 0.5497 \\ \hline \multirow{4}{*}{**ISIC2017**} & ResNet18 & **0.6458** & 0.6252 & 0.6357 \\ \cline{1-1} & ResNet34 & **0.6518** & 0.6227 & 0.6306 \\ \cline{1-1} & ResNet50 & 0.605 & 0.5984 & **0.6207** \\ \cline{1-1} \cline{2-4} & ResNet101 & 0.6267 & **0.6325** & 0.5884 \\ \hline \hline \end{tabular} \end{table} Table 3: In this table we present the IoU (Intersection over Union) on the test set averaged over five seed values (in 95% CI) for U-Net trained with our proposed technique as mentioned in Section 3. ied ReLU and GELU as activation functions for the linear layers of APP and found that ReLU activations work better than GELU. For their choice of architecture, they used U-Net and U-Net++ (nested U-net) with Squeeze and Excitation [10] in the decoder for channel-level attention. To navigate the problem of pixel distribution imbalance between the area surrounding the region of interest (background pixels) and the region of interest (foreground pixels), they used pixel-distribution ratio weights in the cross-entropy loss. Wherein the background pixels used the ratio of background pixels to total pixels as weights, and similarly, the foreground pixels used the ratio of foreground pixels to total pixels as weights. With these changes, they were able to improve on state of the art on the dermatocystis segmentation task [26] by around 5%. They also suggested a change in evaluation metric from pixel accuracy to IoU (Intersection over Union) as pixel accuracy does not correctly represent the quality of the learned mask as opposed to the ground truth mask. Our study builds upon the foundation established by Singh & Cirrone [22] as a baseline. Our proposed methodology demonstrates significant improvement in performance as compared to the state-of-the-art approach [22]. We achieve an average improvement of 12.26% for U-Net and 12.04% for U-Net++, as elaborated in Section 5.1. 
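As a concrete, hedged illustration of the training objective discussed above (a class-weighted cross-entropy on the segmentation output combined with an MSE term between the autoencoder post-processing output and the ground truth), the following PyTorch sketch wires the two losses together. The APP layer sizes, the one-hot target used for the MSE term, and the equal weighting of the two terms are our assumptions, not the exact DEDL implementation; `class_weights` is a length-2 tensor derived from the foreground/background pixel ratios, whose specific form is compared in Sections 3 and 5.1.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAPP(nn.Module):
    """Autoencoder Post-Processing: stacked linear layers over flattened mask logits.
    The hidden size and activation here are illustrative choices."""
    def __init__(self, n_pixels, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pixels, hidden), nn.ReLU(),
            nn.Linear(hidden, n_pixels),
        )

    def forward(self, mask_logits):
        n, c, h, w = mask_logits.shape
        flat = mask_logits.flatten(2)             # (N, C, H*W)
        return self.net(flat).view(n, c, h, w)    # "noised" reconstruction of the mask

def training_loss(seg_logits, app_out, target, class_weights, mse_weight=1.0):
    """Weighted cross-entropy on the segmentation logits plus an MSE term between
    the APP reconstruction and a one-hot ground truth; equal weighting is assumed."""
    ce = F.cross_entropy(seg_logits, target, weight=class_weights)
    one_hot = F.one_hot(target, num_classes=seg_logits.shape[1]).permute(0, 3, 1, 2).float()
    mse = F.mse_loss(app_out, one_hot)
    return ce + mse_weight * mse
```

At inference time only the segmentation logits would be used and the APP branch is dropped, consistent with the description that the autoencoder adds cost only during training.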
Next, we benchmark our methodology on two complex skin lesion datasets in Section 5.1.1. Furthermore, we investigate the impact of autoencoder for post-processing in Section 5.1.1 and the significance of loss function weights in Section 5.1.2. ## 3 Methodology We start with Singh & Cirrone's [22] approach on the dermatomyositis dataset. They use U-Net and U-Net++ as the choice of segmentation architecture with Squeeze and Excitation [10] in the decoder for channel-level attention. Similar to previous studies, our work focuses on semantic segmentation, where the goal is to categorize each pixel in an image into a class. For semantic segmentation, these classes would be - a region of interest (for example, cells in the case of the dermatomyositis dataset) and an area other than the region of interest or background (region other than the cells). For the encoder, we study the performance of the entire ResNet family of CNNs[9]. We use an encoder depth of three, increasing the convolution filter size from 128, 256, to 512. We initialize the encoder with ImageNet pretrained weights. In the decoder part of U-Net and U-Net++, we use a convolution channel scheme of 256, 128, and 64. For each decoder block, we also use batch normalization as well as squeeze-and-excitation channel excitation after the convolutional layer. As discussed in [22, 26] and Section 2.1, the Dermatomyositis whole slide images contain a lot \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Dataset**} & \multirow{2}{*}{**ResNet**} & \multicolumn{3}{c}{**UNet++**} \\ & & Baseline(w/o APP) & w/ Relu APP & w/ Gelu APP \\ \hline \multirow{4}{*}{**Dermofit**} & ResNet18 & **0.744** & 0.7408 & 0.7366 \\ & ResNet34 & 0.754 & 0.7553 & **0.7599** \\ & ResNet50 & 0.737 & **0.7408** & 0.7379 \\ & ResNet101 & 0.7232 & **0.7264** & 0.7229 \\ \hline \multirow{4}{*}{**Dermatomyositis**} & ResNet18 & 0.5622 & 0.5679 & **0.5683** \\ & ResNet34 & 0.5536 & **0.5685** & 0.5633 \\ & ResNet50 & **0.5742** & 0.5698 & 0.5514 \\ & ResNet101 & 0.57 & **0.5727** & 0.5692 \\ \hline \multirow{4}{*}{**ISIC2017**} & ResNet18 & 0.6096 & **0.6232** & 0.6005 \\ & ResNet34 & **0.6583** & 0.6423 & 0.6548 \\ \cline{1-1} & ResNet50 & 0.6103 & **0.6355** & 0.619 \\ \cline{1-1} & ResNet101 & 0.6018 & **0.6164** & 0.6041 \\ \hline \hline \end{tabular} \end{table} Table 4: Similar, to table 3 in this table we present IoU averaged over five seed values (in 95% CI) for U-Net++. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Supervision Level** & **Dataset** & **Baseline(w/o APP)** & **w/ Relu APP** & **w/ Gelu APP** \\ \hline \multirow{3}{*}{**U-Net**} & Dermofit & 0.7395 & **0.7415** & 0.7413 \\ & Dermatomyositis & 0.5496 & **0.5556** & 0.5551 \\ & ISIC2017 & **0.6323** & 0.6197 & 0.6189 \\ \hline \multirow{3}{*}{**U-Net++**} & Dermofit & 0.7396 & **0.7408** & 0.7393 \\ & Dermatomyositis & 0.565 & **0.5697** & 0.5630 \\ \cline{1-1} & ISIC2017 & 0.6200 & **0.62935** & 0.6196 \\ \hline \hline \end{tabular} \end{table} Table 5: Mean IoU in the 95% confidence interval when averaged over the entire ResNet family for U-Net and U-Net++ on the dermofit, dermatomyositis, and the ISIC 2017 dataset. We observe that adding ReLU autoencoder as a post-processing unit improves performance for U-Net++ and, in almost all cases, for U-Net. more pixels without cells (background) as opposed to with cells (foreground). To attenuate this imbalance in the distribution of pixels, we use **C**ross **D**istribution **W**eights (**CDW**) in the cross-entropy loss. 
Van Buren _et al_. [26] used random weights. In contrast, Singh & Cirrone [22] used a ratio of the pixel with cells to total pixels and a ratio of non-cell pixels to total pixels as weights for foreground and background, respectively, in the cross-entropy loss. We propose to swap the weights and instead use the ratio of the number of pixels not containing the cell to the total number of pixels as the weight for the foreground. Similarly, the weight of the background is the ratio of the number of pixels containing cells (foreground/object of interest) to the total number of pixels. This alternative weight assignment method aims to enhance the foreground representation. This intuition is very similar to focal loss [13], wherein the misclassification of the minority class is penalized more than that of the minority class. To ensure our results are statistically significant, we conduct all experiments over five different seed values and report the mean values in the 95% confidence interval (C.I) over the five runs. We present these results in Tables 3 and 4. Based on our proposal, we observe an average improvement of 12.26% for U-Net and 12.04% for U-Net++. We further discuss these in Section 5.1. Additionally, we benchmark our approach and study the impact of autoencoder post-processing on two additional challenging dermatology-related datasets - ISIC 2017 and the dermato dataset in Section 5.1.1. Both datasets are challenging due to large intra-class variations and inter-class similarities as depicted in Figure 1 and obscuration in 2. Finally, we study the impact of using mean-frequency weights and compare the results with distribution-swapped weights for U-Net and U-Net++ over the ResNet family of encoders and three datasets in Section 5.1.2. ## 4 Experimental Details ### Datasets We use a 70-10-20 split for the dermatomyositis and dermato datasets. For the ISIC-2017 dataset, we use the same splits as used in the 2017 ISIC competition. Dermatomyositis:We use the same dataset as used in previous works on dermatomyositis segmentation [22, 26]. To give an idea about the modality of the dataset, we show a random sample from the test set in Figure 3. The Dermatomyositis dataset is collected from 198 muscle biopsies collected from seven dermatomyositis patients. These files are then stored in TIFF format. Each TIFF image contains eight slides that indicate the presence or absence of phenotypic markers by setting binary thresholds for each channel (1-DAPI, 2-CXCR3, 3-CD19, 4-CXCR5, 5-PD1, 6-CD4, 7-CD27, 8-Autofluorescence). For segmentation, we used the DAPI-stained image. Each whole slide image was tiled into 480x480. We further expand on this in Section 4.1.2. This is a particularly challenging dataset due to the large number of fine-grained objects (cells) to be segmented per image, as discussed in Section 1. Dermofit [7]:As shown in Figure 3, the Dermofit dataset contains 1300 skin lesion RGB images. These data are taken with a high-quality SLR camera in controlled (ring flash) indoor illumination. The Dermofit dataset contains ten categories; each includes a different number of instances: Actinic Keratosis (AK): 45, Basal Cell Carcinoma (BCC): 239, Melanocytic Nevus / Mole (ML): 331, Squamous Cell Carcinoma (SCC) sample 88, Seborrhoeic Keratosis (SK): 257, Intraepithelial carcinoma (IEC): 78, Pyogenic Granuloma (PYO): 24, Haemangioma (VASC): 96, Dermatofibroma (DF): 65, Melanoma (MEL): 76. 
No two images in this dataset are of the same size, as a preprocessing step we interpolate all images to 480x480 and then resize to 224x224 to ensure uniformity. ISIC Challenge 2017 Dataset, Lesion Segmentation Task [3]:The International Skin Imaging Collaboration (ISIC) is a large publicly accessible dataset. We show a sample from the test set in Figure 3. In our case, we use the segmentation dataset from 2017 and use the original splits wherein 2,000 images were used as training, 150 images as validation, and 600 images as the test set. The ISIC 2017 and the Dermofit datasets described above are skin lesion datasets with high intra-class variability and inter-class similarities with obscuration areas of interest, as discussed in Section 1. #### 4.1.1 Common Implementation Details We implemented all models in Pytorch [19] using a single NVIDIA RTX-8000 GPU with 64 GB RAM and 3 CPU cores. All models are trained with an Adam optimizer with an initial learning rate (lr) of 3.6e-4 and a weight decay 1e-5. We use a cosine annealing scheduler with a maximum of 50 iterations and a minimum learning rate of 3.4e-4 to adjust the learning rate based on each epoch. We train all architectures for 50 epochs with batch size 16, followed by testing on a held-out set. We use IoU (Intersection over Union) as our evaluation metric on the test set. This aligns with previous work by Singh & Cirrone [22]. We repeat all experiments with five different seed values and report the mean value in the 95% confidence interval in all tables. #### 4.1.2 Data-Preprocessing Images of the dermatomyositis dataset have a uniform size of 1408 \(\times\) 1876; we tiled each image into 12 sub-images of size 480 \(\times\) 480 inline with previous work [22]. In contrast, the Dermofit and the ISIC2017 datasets contain images of different sizes, i.e., no two images in the dataset are the same size. Additionally, since the other two datasets (dermofit and ISIC-2017) contain skin lesions, they have significantly denser and larger mask labels than the dermatocystis dataset. Thus, a different image preprocessing step is applied to the latter two datasets: bilinear interpolation to 480 \(\times\) 480 followed by a resize to 224 \(\times\)224. For augmentation, we use the same set of augmentation as used in Singh & Cirrone's work [22], along with Red channel normalization or "Rnorm" [25] for all of our experiments. ## 5 Results and Discussion ### Improvement over the current state-of-the-art for Dermatomyositis WSI Segmentation [22] Following the methodology (Section 3) and experimentation setup (Section 4), we present the IoU averaged over five runs in the 95% confidence interval on the test set for the Dermatomyositis dataset in Table 1 for U-Net and in Table 2 for U-Net++. We observe that our approach improves over Singh & Cirrone's approach (DEDL) [22] consistently over the entire ResNet family for baseline as well as with APP (both ReLU and GELU based) for both U-Net and U-Net++. When averaged over the ResNet family of encoders and the three paradigms (baseline approach without using autoencoders, ReLU autoencoders, and GELU-based autoencoders), we observed that our approach improves over the previous state-of-art [22] for Dermatomyositis segmentation by 12.26% and 12.04% for U-Net and U-Net++ respectively. #### 5.1.1 Impact of Incorporating Autoencoder Post-Processing. As described in Section 2.1, Singh and Cirrone [22] introduced an "Autoencoder Post Processing" unit or APP after the main segmentation architecture. 
The purpose of this autoencoder was to provide a more noised version of the prediction from the U-Net or U-Net++. The mean square error loss between the noised output and the ground truth mask, along with the weighted-cross entropy loss between the output of the U-Net or U-Net++ and the ground truth, is optimized during training. This is depicted in Figure 4. They studied the impact of using APP with ReLU and GELU activations only on the Dermatomyositis dataset. In this section, with our improved approach as presented in Section 3, we study the impact of adding APP on two ad \begin{table} \begin{tabular}{l l|l|l|l|l|l|l} \hline \multirow{2}{*}{Encoder} & \multirow{2}{*}{Technique} & \multicolumn{4}{c|}{U-Net} & \multicolumn{4}{c}{U-Net++} \\ & & Baseline* & w/ ReLU APP & w/ GELU APP & Baseline* & w/ ReLU APP & w/ GELU APP \\ \hline \multirow{2}{*}{ResNet-18} & CDW & 0.5618 & 0.5479 & **0.5582** & 0.5622 & **0.5679** & 0.5683 \\ & Mean Frequency & **0.5645** & **0.5592** & 0.5405 & **0.5852** & 0.5603 & **0.5814** \\ \hline \multirow{2}{*}{ResNet-34} & CDW & 0.5306 & **0.5571** & 0.5606 & 0.5536 & 0.5685 & 0.5633 \\ & Mean Frequency & **0.5555** & 0.5551 & **0.5616** & **0.5729** & **0.5763** & **0.57** \\ \hline \multirow{2}{*}{ResNet-50} & CDW & **0.5556** & 0.5495 & **0.5597** & **0.5742** & 0.5698 & 0.5514 \\ & Mean Frequency & 0.5512 & **0.5652** & 0.5585 & 0.57 & **0.5723** & **0.5929** \\ \hline \multirow{2}{*}{ResNet-101} & CDW & 0.5502 & **0.5678** & 0.5497 & 0.57 & **0.5727** & 0.5692 \\ & Mean Frequency & **0.5506** & 0.5537 & **0.5596** & **0.5892** & 0.5678 & **0.5773** \\ \hline \end{tabular} \end{table} Table 6: This table displays the average Intersection over Union (IoU) values obtained from five separate runs on the dermatomyositis test set, with their corresponding 95% confidence intervals (CI). In this context, CDW refers to the utilization of cross-distribution weights in the calculation of cross-entropy loss. Here, Baseline* represents the use of the segmentation architecture with autoencoder for post-processing (APP). \begin{table} \begin{tabular}{l l|l|l|l|l|l|l} \hline \multirow{2}{*}{Encoder} & \multirow{2}{*}{Technique} & \multicolumn{4}{c|}{U-Net} & \multicolumn{4}{c}{U-Net++} \\ & & Baseline* & w/ ReLU APP & w/ GELU APP & Baseline* & w/ ReLU APP & w/ GELU APP \\ \hline \multirow{2}{*}{ResNet-18} & CDW & 0.7388 & 0.7477 & **0.7467** & 0.744 & 0.7408 & 0.7366 \\ & Mean Frequency & **0.75** & **0.7498** & 0.7377 & **0.7469** & **0.7413** & **0.7449** \\ \hline \multirow{2}{*}{ResNet-34} & CDW & **0.7576** & **0.7633** & 0.7525 & 0.754 & 0.7553 & **0.7599** \\ & Mean Frequency & 0.7533 & **0.7633** & **0.7535** & **0.7602** & **0.7635** & 0.7547 \\ \hline \multirow{2}{*}{ResNet-50} & CDW & 0.7364 & **0.7338** & **0.7401** & **0.737** & **0.7408** & 0.7379 \\ & Mean Frequency & **0.7379** & 0.731 & 0.7385 & 0.7362 & 0.7358 & **0.7411** \\ \hline \multirow{2}{*}{ResNet-101} & CDW & **0.7252** & 0.7213 & 0.7258 & **0.7232** & **0.7264** & 0.7229 \\ & Mean Frequency & 0.7212 & **0.7242** & **0.7236** & 0.7156 & 0.7247 & **0.7234** \\ \hline \end{tabular} \end{table} Table 7: Similar to Table 6, in this table, we present the IoU averaged over five runs in 95% confidence interval on the dermofit test set for U-Net and U-Net++. Like Table 6, CDW represents the scenario when cross-distribution weights are used for the cross-entropy loss and Baseline* represents the use of the segmentation architecture with autoencoder for post-processing (APP). 
ditional challenging dermatology datasets. We present the IoU over the test set in the 95% confidence interval over the ResNet family of encoders in Tables 3 and 4 for ISIC 2017 and the Dermofit dataset, respectively. Adding ReLu and GeLU-based APP improves performance over the baseline architecture (with no APP) for U-Net and U-Net++ in most cases for the Dermofit. To better understand the result, we average the IoU on the test set over the entire ResNet family for U-Net and U-Net++ and present the results in Table 5. From Table 5, we observe that the addition of APP, especially ReLU-based APP, does improve performance over the baseline (not using APP) in almost all cases for U-Net++ and U-Net. The addition of APP is did not improve performance only in the case of the ISIC-2017 dataset for U-Net. #### 5.1.2 Impact of Cross-entropy loss weights In section 3, we explained our rationale for switching from distribution-based weights to cross-distribution-based weights for the cross-entropy loss. In this section, we study the impact of changing the cross-entropy weights from cross-distribution-based weights to mean frequency weights[6]. The median frequency weight received by each class is derived from the reciprocal of the pixel ratio of a particular class, normalized by the median frequency of the class[14, 15]. The median frequency and the cross-distribution weights, calculated over our three datasets, are mentioned in Table 3. Mathematically, median frequency weights (\(w_{c}\)) are defined as follows: \(w_{c}=\frac{\text{med\_freq}}{n_{c}}\). Here, \(n_{c}\) is the number of pixels belonging to class \(c\) in the training dataset, and med_freq is the median of the frequency of pixels belonging to each class in the dataset.1 Where \(n_{c}=\frac{\alpha}{\beta}\), here, \(\alpha\) represents the number of pixels of a class, and \(\beta\) represents the total number of pixels in images where the given class is present. We compare the weights calculated by cross-distribution and median frequency in Table 3. We provide the full comparative result of using cross-distribution weights and mean frequency over the three datasets in the 95% confidence interval averaged over five seed values Tables 6, 7 and 8 for the Dermatomyositis, Dermofit, and the ISIC-2017 datasets, respectively. Additionally, to summarize these results, we present the average over the ResNet family and the three training paradigms (baseline without APP and APP with ReLU and GELU layers) in Tables 9 and 10 for U-Net and U-Net++, respectively. From these tables, we observe that median-frequency weights for cross-entropy loss improve performance over cross-distribution \begin{table} \begin{tabular}{c c c} \hline **Dataset** & **CDW** & **Median Frequency** \\ \hline Dermofit & 0.7396 & **0.7397** \\ DM* & 0.565 & **0.5793** \\ ISIC2017 & 0.62 & **0.6248** \\ \hline \end{tabular} \end{table} Table 10: Similar to Table 9, In this table, we provide the average Intersection over Union (IoU) values (in 95% CI) for the ResNet family for U-Net++ architecture on three different test sets: Dermatomyositis, Dermofit, and ISIC 2017. 
\begin{table} \begin{tabular}{c l|l|l|l|l|l|l} \hline \multirow{2}{*}{Encoder} & \multirow{2}{*}{Technique} & \multicolumn{4}{c|}{U-Net} & \multicolumn{4}{c}{U-Net++} \\ & & Baseline* & w/ ReLU APP & w/ GELU APP & Baseline* & w/ ReLU APP & w/ GELU APP \\ \hline \multirow{2}{*}{ResNet-18} & CDW & **0.6458** & 0.6252 & **0.6357** & 0.6096 & **0.6232** & 0.6005 \\ & Mean Frequency & 0.6257 & **0.6394** & 0.6307 & **0.6177** & 0.6198 & **0.6074** \\ \hline \multirow{2}{*}{ResNet-34} & CDW & **0.6518** & 0.6227 & **0.6306** & **0.6583** & 0.6423 & **0.6548** \\ & Mean Frequency & 0.6314 & **0.6409** & 0.6322 & 0.6412 & **0.6454** & 0.6513 \\ \hline \multirow{2}{*}{ResNet-50} & CDW & 0.605 & 0.5984 & 0.6207 & 0.6103 & **0.6355** & 0.619 \\ & Mean Frequency & **0.6396** & **0.6223** & **0.6337** & **0.6354** & 0.628 & **0.646** \\ \hline \multirow{2}{*}{ResNet-101} & CDW & **0.6267** & **0.6325** & 0.5884 & 0.6018 & **0.6164** & **0.6041** \\ & Mean Frequency & 0.6137 & 0.6283 & **0.6175** & **0.6049** & 0.6112 & 0.6016 \\ \hline \end{tabular} \end{table} Table 8: In this table, we showcase the IoU results for U-Net and U-Net++ on the ISIC 2017 test set. The values represent the average IoU over five runs in the 95% confidence interval. CDW, in this context, refers to the utilization of cross-distribution weights for cross-entropy loss, as demonstrated in Tables 6 and 7. Baseline* represents the use of the segmentation architecture with autoencoder for post-processing. \begin{table} \begin{tabular}{c c c} \hline **Dataset** & **CDW** & **Median Frequency** \\ \hline Dermofit & 0.7395 & **0.7406** \\ DM* & 0.5496 & **0.5564** \\ ISIC2017 & 0.6197 & **0.6276** \\ \hline \end{tabular} \end{table} Table 9: In this table, we present the average IoU (in the 95% confidence interval) over the ResNet family of encoders and the three paradigms (Baseline, with ReLU and GELU APP) for U-Net over the dermatomyositis, the dermofit, and ISIC 2017 test sets from Tables 6, 7 and 8 respectively. Here, DM* represents the Dermatomyositis dataset, and CDW represents Cross Distribution Weight. weights, although the improvement is marginal in almost all cases. ## 6 Conclusion We observed that our approach of using Cross Distribution Weights (CDW) improved segmentation performance over the previous state-of-the-art approach for dermatumyositis segmentation [22] by 12.26% for U-Net and by 12.04% for U-Net++ averaged over the ResNet family. Furthermore, adding APP (Autoencoder Post Processing) improves segmentation performance marginally in the case of dermatomyositis and dermofit datasets. In the case of the ISIC 2017 dataset, the addition of APP is only useful in the case of U-Net++. We have open-sourced our approach at [https://github.com/pranavsinghps1/Enhancing-Medical-Image-Segmentation](https://github.com/pranavsinghps1/Enhancing-Medical-Image-Segmentation). We hope that our study and open-sourced approach will catalyze further research at the intersection of autoimmune diseases like dermatomyositis and the application of AI as well as for other dermatology-related datasets. This would help us better understand the immunology of autoimmune diseases and answer some of the critical research questions to develop improved healthcare solutions.2 Footnote 2: Potential negative societal impact: Autoimmune diseases are extremely heterogeneous; the dermatomyositis dataset used in our experiments is geographically restricted. Hence, this is a study of a particular variant. 
This study might or might not be generalizable to other variants. Hence, application on a wider scale in real-life scenarios should only be trusted after clearance from the concerned health and safety governing bodies. ## Acknowledgements We would like to thank the NYU HPC team for assisting us with our computational needs.
2301.03699
The cosmic radio background from 150 MHz--8.4 GHz, and its division into AGN and star-forming galaxy flux
We present a revised measurement of the extra-galactic background light (EBL) at radio frequencies based on a near complete compendium of radio source counts. We present the radio-EBL at 150 MHz, 325 MHz, 610 MHz, 1.4 GHz, 3 GHz, 5 GHz, and 8.4 GHz. In all cases the contribution to the radio-EBL, per decade of flux, exhibits a two-humped distribution well matched to the AGN and star-forming galaxy (SFG) populations, and with each population contributing roughly equal energy. Only at 3 GHz are the source count contributions to the EBL fully convergent, and hence we report empirical lower limits to the radio-EBL in the remaining bands. Adopting predictions from the SHARK semi-analytic model for the form of the SFG population, we can fit the fainter source counts providing measurements of the total contribution to the radio-EBL for the SFG and the AGN populations separately. This constitutes an empirically constrained model-dependent measurement for the SFG contribution, but a fully empirical measurement of the AGN contribution. Using the {\sc ProSpect} spectral energy distribution code we can model the UV-optical-infrared-mm-radio SFG EBL at all frequencies from the cosmic star-formation history and the adoption of a Chabrier initial mass function. However, significant discrepancy remains ($5\times$) between our source-count estimates of the radio-EBL and the direct measurements reported from the ARCADE-2 experiment. We can rule out a significant missing discrete source radio population and suggest that the cause of the high ARCADE-2 radio-EBL values may need to be sought either in the foreground subtraction or as a yet unknown diffuse component in the radio sky.
Scott A. Tompkins, Simon P. Driver, Aaron S. G. Robotham, Rogier A. Windhorst, Claudia del P. Lagos, T. Vernstrom, Andrew M. Hopkins
2023-01-09T22:12:24Z
http://arxiv.org/abs/2301.03699v1
The cosmic radio background from 150 MHz-8.4 GHz, and its division into AGN and star-forming galaxy flux ###### Abstract We present a revised measurement of the extra-galactic background light (EBL) at radio frequencies based on a near complete compendium of radio source counts. We present the radio-EBL at 150 MHz, 325 MHz, 610 MHz, 1.4 GHz, 3 GHz, 5 GHz, and 8.4 GHz. In all cases the contribution to the radio-EBL, per decade of flux, exhibits a two-humped distribution well matched to the AGN and star-forming galaxy (SFG) populations, and with each population contributing roughly equal energy. Only at 3 GHz are the source count contributions to the EBL fully convergent, and hence we report empirical lower limits to the radio-EBL in the remaining bands. Adopting predictions from the SHARK semi-analytic model for the form of the SFG population, we can fit the fainter source counts providing measurements of the total contribution to the radio-EBL for the SFG and the AGN populations separately. This constitutes an empirically constrained model-dependent measurement for the SFG contribution, but a fully empirical measurement of the AGN contribution. Using the ProSpect spectral energy distribution code we can model the UV-optical-infrared-mm-radio SFG EBL at all frequencies from the cosmic star-formation history and the adoption of a Chabrier initial mass function. However, significant discrepancy remains (\(5\times\)) between our source-count estimates of the radio-EBL and the direct measurements reported from the ARCADE-2 experiment. We can rule out a significant missing discrete source radio population and suggest that the cause of the high ARCADE-2 radio-EBL values may need to be sought either in the foreground subtraction or as a yet unknown diffuse component in the radio sky. keywords: surveys, radio continuum: galaxies, galaxies:active, cosmology: cosmic background radiation, cosmological parameters, catalogues ## 1 Introduction In recent years there has been a resurgence of interest in measurements and studies of the Extra-galactic Background Light or EBL, (e.g., Gervasi et al., 2008; Vernstrom et al., 2011; Driver et al., 2016; Abdallah et al., 2018; Lauer et al., 2022; Saldana-Lopez et al., 2022, submitted). The EBL is the term used to describe all radiation incident on the Earth of extra-galactic origin from a steradian of sky, i.e., it should exclude any sky-glow, Zodiacal and Diffuse Galactic Light components, as well as any light from the Milky-Way group. In terms of the origin of the EBL it can be divided into radiation arising from two oras (epochs): the Cosmic Microwave Background which represents relic radiation from the hot early Universe at the time of recombination and redshifted to the present day; and the remainder which is all radiation produced from all eras since recombination. The latter predominantly arises from star-formation, accretion onto supermassive black holes (i.e., Active Galactic Nuclei; AGN), and dust reprocessing which typically transfers ultraviolet and optical radiation to the mid and far infrared as thermal emission. For convenience the EBL is also often broken into distinct wavelength regimes that cover the cosmic \(\gamma\)-ray (CGB), X-ray (CXB), ultraviolet (CUB), optical (COB), infrared (CIB), the CMB, and radio (CRB) backgrounds. Of these backgrounds the CMB is the strongest, in terms of its integrated energy density, and approximately \(5\times\) the sum of the other backgrounds combined (Hill et al., 2018). 
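As an aside for readers following the source-count analysis later in the paper, the generic relations that connect differential source counts to the integrated background they produce can be written as follows (these are standard textbook expressions, not equations reproduced from this work):
\[
I_{\nu} \;=\; \int_{S_{\min}}^{S_{\max}} S\,\frac{dN}{dS}\,\mathrm{d}S,
\qquad
\frac{dI_{\nu}}{d\log_{10} S} \;=\; \ln(10)\, S^{2}\,\frac{dN}{dS},
\qquad
T_{b} \;=\; \frac{c^{2}}{2 k_{B} \nu^{2}}\, I_{\nu},
\]
where \(dN/dS\) is the differential count per unit solid angle, the "contribution per decade of flux" corresponds to the \(S^{2}\,dN/dS\) weighting, and \(T_{b}\) is the equivalent Rayleigh-Jeans brightness temperature used when comparing with direct measurements such as ARCADE-2.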
Of these other backgrounds the COB and CIB are roughly equal (Driver et al., 2016), and together comprise most of the non-CMB energy contribution. The recent resurgence of interest in the EBL, is in part due to technological breakthroughs that have allowed the measurement of the various backgrounds to much lower flux limits within each waveband. This has allowed measurements to evolve from upper or lower limits to credible measurements. This is possible because in almost all bands we are now able to resolve the peak contribution to the EBL by construct
2304.11443
Oscillatory large-scale circulation in liquid-metal thermal convection and its structural unit
In Rayleigh-B\'enard convection (RBC), the size of a flow domain and its aspect ratio $\varGamma$ (a ratio between the spatial length and height of the domain) affect the shape of the large-scale circulation (LSC). For some aspect ratios, the flow dynamics include a three-dimensional oscillatory mode known as a jump-rope vortex (JRV), however, the effects of varying aspect ratios on this mode are not well investigated. In this paper, we study these aspect-ratio effects in liquid metals, for a low Prandtl number $Pr=0.03$. Direct numerical simulations and experiments are carried out for a Rayleigh number range $2.9 \times 10^4 \leq Ra \leq 1.6 \times 10^6$ and square cuboid domains with $\varGamma=2$, $2.5$, $3$ and $5$. Our study demonstrates that a repeating pattern of a JRV encountered at an aspect ratio $\varGamma \approx 2.5$ is the basic structural unit that builds up to a lattice of interlaced JRVs at the largest aspect ratio. The size of the domain determines how many structural units are self-organized within the domain; the number of the realized units is expected to scale as $\varGamma^2$ with sufficiently large and growing $\varGamma$. We find the oscillatory modes for all investigated $\varGamma$, however, they are more pronounced for $\varGamma=2.5$ and $\varGamma=5$. Future studies for large-aspect ratio domains of different shapes would enhance our understanding of how the JRVs adjust and reorganize at such scaled-up geometries, and answer the question of whether they are indeed the smallest superstructure units.
Andrei Teimurazov, Sanjay Singh, Sylvie Su, Sven Eckert, Olga Shishkina, Tobias Vogt
2023-04-22T16:54:18Z
http://arxiv.org/abs/2304.11443v1
# Oscillatory large-scale circulation in liquid-metal thermal convection and its structural unit ###### Abstract In Rayleigh-Benard convection (RBC), the size of a flow domain and its aspect ratio \(\Gamma\) (a ratio between the spatial length and height of the domain) affect the shape of the large-scale circulation (LSC). For some aspect ratios, the flow dynamics include a three-dimensional oscillatory mode known as a jump-rope vortex (JRV), however, the effects of varying aspect ratios on this mode are not well investigated. In this paper, we study these aspect-ratio effects in liquid metals, for a low Prandtl number \(Pr=0.03\). Direct numerical simulations and experiments are carried out for a Rayleigh number range \(2.9\times 10^{4}\leq Ra\leq 1.6\times 10^{6}\) and square cuboid domains with \(\Gamma=2\), \(2.5\), \(3\) and \(5\). Our study demonstrates that a repeating pattern of a JRV encountered at an aspect ratio \(\Gamma\approx 2.5\) is the basic structural unit that builds up to a lattice of interlaced JRVs at the largest aspect ratio. The size of the domain determines how many structural units are self-organized within the domain; the number of the realized units is expected to scale as \(\Gamma^{2}\) with sufficiently large and growing \(\Gamma\). We find the oscillatory modes for all investigated \(\Gamma\), however, they are more pronounced for \(\Gamma=2.5\) and \(\Gamma=5\). Future studies for large-aspect ratio domains of different shapes would enhance our understanding of how the JRVs adjust and reorganize at such scaled-up geometries, and answer the question of whether they are indeed the smallest superstructure units. ## 1 Introduction Thermal convection manifests not only in various geo- and astrophysical systems, but also in smaller-scale phenomena ranging from industrial processes to our daily lives such as household heating. Given its importance, natural thermal convection has been the subject of intensive research for over a century (Benard 1900; Lord Rayleigh 1916). Investigation of thermal convection in low-Prandtl-number fluids (Prandtl numbers \(Pr\ll 1\)) is of particular importance for a better understanding of convection on surfaces of stars, where \(Pr\) can be as low as \(10^{-8}\)(Spiegel 1962; Hanasoge _et al._ 2016), and, in case of liquid metals, for numerous technical applications, e.g. the advancement of cooling technology (see, e.g., Scheel _et al._ 2013; Frick _et al._, 2015; Schumacher _et al._, 2015; Scheel & Schumacher, 2016; Heinzel _et al._, 2017; Teimurazov _et al._, 2017; Zurner _et al._, 2019; Pandey _et al._, 2022; Zwirner _et al._, 2022). Natural thermal convection occurs in a fluid layer due to a temperature difference imposed at its surfaces. Here, the orientation of the fluid layer surfaces with respect to the gravity vector plays an important role (see, e.g., Shishkina & Horn, 2016; Teimurazov & Frick, 2017; Zurner _et al._, 2019; Zwirner _et al._, 2020\(a\); Teimurazov _et al._, 2021). One of the classical and probably the most intensively investigated configurations of natural thermal convection is Rayleigh-Benard convection (RBC) (Benard, 1900; Lord Rayleigh, 1916; Bodenschatz _et al._, 2000; Ahlers _et al._, 2009; Chilla & Schumacher, 2012). In RBC, the heated and cooled surfaces are placed orthogonal to the gravity vector, and the fluid layer is heated from below and cooled from above. Thermal expansion causes warm fluid to rise and cool fluid to sink. 
At sufficiently large Rayleigh number, \(Ra\equiv\alpha g\Delta H^{3}/(\kappa\nu)\), the resulting turbulent convective flow self-organises through an inverse energy cascade from small-scale thermal turbulence to large flow structures. Here, \(\alpha\) is the isobaric thermal expansion coefficient, \(\nu\) is the kinematic viscosity, \(\kappa\) is the thermal diffusivity, \(\Delta\) is the temperature difference between the heated and cooled surfaces, \(H\) is the distance between these surfaces (i.e., the height of the container) and \(g\) denotes the acceleration due to gravity. The energy of small scales is directly transferred to large scales via three-dimensional modes, and is different from the classical two-dimensional inverse energy cascade (Ecke & Shishkina, 2023; Boffetta & Ecke, 2012). At sufficiently large \(Ra\), the flow is self-organised into a large-scale circulation (LSC), or a turbulent thermal wind, the concept of which is an important ingredient in the heat and momentum transport theory (Grossmann & Lohse, 2000, 2001, 2011), and boundary-layer theory for natural thermal convection (Shishkina _et al._, 2015; Ching _et al._, 2019; Tai _et al._, 2021). The resulting flow structures strongly depend on the Rayleigh number \(Ra\), which is a measure of the thermal forcing that drives convection in the system, and on the Prandtl number \(Pr\equiv\nu/\kappa\), which describes the diffusive properties of the considered fluid (Ahlers _et al._, 2009). In addition, the geometric characteristics of the container, especially the shape of the container and, in particular, the aspect ratio \(\Gamma\) of its spatial length \(L\) and height \(H\), \(\Gamma\equiv\ L/H\), influence the global flow structure and the mean characteristics of the flow (Shishkina, 2021; Ahlers _et al._, 2022). Turbulent RBC in a cylindrical container with equal height and diameter (aspect ratio \(\Gamma=1\)) is the most extensively studied. For containers with \(\Gamma\approx 1\), the principle structure of the LSC can be delineated as follows. There exists a vertical plane (called the LSC-plane), in which the LSC is observed as a big domain-filling roll with two smaller secondary rolls in the corners (Sun _et al._, 2005\(a\); Ahlers _et al._, 2009; Chilla & Schumacher, 2012), while in the orthogonal vertical plane, the LSC for this geometry of the container is seen as a four-roll structure, with an inflow at mid-height (Shishkina _et al._, 2014). Not only the LSC is generally unsteady, but also the LSC path can exhibit dynamic behaviour. Thus in containers with \(\Gamma\approx 1\), the LSC can display various modes of periodic or chaotic oscillations which can take the form of sloshing, precession, and torsion (Cioni _et al._, 1997; Xi _et al._, 2004; Funfschilling & Ahlers, 2004; Sun _et al._, 2005\(b\); Xi _et al._, 2006; Brown & Ahlers, 2006, 2007; Xi & Xia, 2007, 2008; Funfschilling _et al._, 2008; Zhou _et al._, 2009; Brown & Ahlers, 2009; Sugiyama _et al._, 2010; Assaf _et al._, 2011; Stevens _et al._, 2011; Wagner _et al._, 2012; Sakievich _et al._, 2016, 2020; Cheng _et al._, 2022). The sloshing mode is associated with the motion of the LSC-plane in the radial direction, while the precession and torsion modes are related to the azimuthal motion of the LSC-plane (Cheng _et al._, 2022; Horn _et al._, 2022). 
In the precession mode, the entire LSC-plane drifts in the azimuthal direction, while in the torsion mode, the azimuthal motion of the LSC-plane in the upper half of the container is generally in the opposite direction to the motion of the LSC-plane in the lower half of the container. In slender containers with the aspect ratio \(\Gamma<1\), a single big-roll structure of the LSC is not as stable as in the case of \(\Gamma=1\)(Xi & Xia, 2008; Weiss & Ahlers, 2011, 2013; Zwirner _et al._, 2020\(b\); Schindler _et al._, 2022). For \(\Gamma<1\), the turbulent LSC can be formed of several dynamically changing convective rolls that are stacked on top of each other (van der Poel _et al._, 2011, 2012; Zwirner _et al._, 2020\(b\)). The mechanism which causes the twisting and breaking of a single-roll LSC into multiple rolls is the elliptical instability (Zwirner _et al._, 2020\(b\)). In the case of \(\Gamma<1\), the heat and momentum transports, which are represented by the Nusselt number \(Nu\) and Reynolds number \(Re\), are always stronger for a smaller number of the rolls that form the LSC. This was proven in experiments for \(\Gamma=1/2\)(Weiss & Ahlers, 2011, 2013; Xi & Xia, 2008), and simulations for \(\Gamma=1/5\)(Zwirner & Shishkina, 2018; Zwirner _et al._, 2020\(b\)). By contrast, for wide containers with \(\Gamma>1\), the more rolls of the LSC mean the more efficient heat transport (van der Poel _et al._, 2011, 2012; Wang _et al._, 2020), also in highly turbulent cases. For \(\Gamma>1\), the rolls or roll-like structures are attached to each other and Figure 1: Phase-averaged streamlines in Rayleigh–Bénard convection for \(Pr=0.03\), \(Ra=10^{6}\), as obtained in direct numerical simulations for different aspect ratios (\(a\)) \(\Gamma=5\), (\(b\)) \(\Gamma=3\), (\(c\)) \(\Gamma=2.5\) and (\(d\)) \(\Gamma=2\) for square cuboid domains (new simulations) and (\(e\), adapted from Vogt _et al._ (2018), available under a CC BY-NC-ND 4.0) a cylindrical domain with \(\Gamma=2\). These streamlines envelope the oscillating vortex, and the colour scale is according to the vertical velocity component \(u_{z}\). Blue (red) colour corresponds to a negative (positive) value of \(u_{z}\), indicating a downward (upward) direction of the flow. The structures in the lower aspect ratio cases (\(\Gamma=2\), \(2.5\) and \(3\)) are building units of the structure formed within the largest aspect ratio case (\(\Gamma=5\)). aligned in horizontal directions (Hartlep _et al._, 2003; von Hardenberg _et al._, 2008; Emran & Schumacher, 2015). In the two-dimensional case, the range of possible aspect ratios of particular convective rolls and, hence, the total number of the rolls in any confined domain, are restricted, and there exist quite accurate theoretical estimates for the lower and upper bounds of possible aspect ratios of the rolls (Wang _et al._, 2020; Shishkina, 2021). For three-dimensional domains, the typical length scales of the self-organised coherent turbulent flow structures are not yet well-studied and their accurate prediction remains an unsolved problem so far. These flow structures can be identified as turbulent superstructures (Stevens _et al._, 2018; Pandey _et al._, 2018; Green _et al._, 2020; Krug _et al._, 2020; Berghout _et al._, 2021), since their lifetime is much larger than the free-fall time, and their length scales are generally larger than the typical length scale in RBC, which is the height of the container \(H\). 
Several studies suggest that the characteristic length scale of these coherent turbulent large-scale flow structures increases with growing \(Ra\), see, e.g., Fitzjarrald (1976); Hartlep _et al._ (2003); Pandey _et al._ (2018); Akashi _et al._ (2019); Krug _et al._ (2020). Depending on the considered ranges of \(Ra\), \(Pr\) and \(\Gamma\), different studies report different preferred length scales of the turbulent superstructures, which are always larger than the container height \(H\). Thus, values of order \(10H\) (Busse, 1994), or between \(6H\) and \(7H\) (Hartlep _et al._, 2003; Pandey _et al._, 2018; Stevens _et al._, 2018), have been proposed. Although the typical horizontal wavelengths of the turbulent superstructures generally grow with \(Ra\), they tend to decrease with decreasing Prandtl number (Pandey _et al._, 2018). This is remarkable, since decreasing \(Pr\) is usually associated with even stronger turbulence, and one might therefore expect a certain similarity to the situation when \(Ra\) is increased. Recent laboratory and numerical experiments show that in an intermediate range of moderate aspect ratios, \(\Gamma\gtrsim 1.4\), the LSC displays low-frequency oscillatory dynamics (Vogt _et al._, 2018; Horn _et al._, 2022; Akashi _et al._, 2022; Cheng _et al._, 2022). The precession, torsion and sloshing dynamics of the LSC, which dominate at \(\Gamma=1\), are replaced by a mode which can be described as a jump rope vortex (JRV). In this flow pattern, a curved vortex is formed, which swirls around the cell centre in the direction opposite to the LSC direction, resembling the motion of a swirling jump rope, see figure 1e. This phenomenon was first demonstrated for liquid metal convection in a cylinder with aspect ratio \(\Gamma=2\) (Vogt _et al._, 2018). Numerical simulations showed that the JRV exists also for a cylindrical container of the aspect ratio \(\Gamma=\sqrt{2}\) and that the JRV structure is present not only in low-\(Pr\) liquid metal convection, but also in water at \(Pr=4.8\). This has been confirmed in several other experiments and simulations of comparable aspect ratios for both water and liquid metal (Horn _et al._, 2022; Cheng _et al._, 2022; Li _et al._, 2022). Flow measurements in containers of different shapes, such as a cuboid domain with \(\Gamma=5\) (Akashi _et al._, 2022), showed that the strongly oscillating velocity and temperature fields could also be attributed to the presence of JRV-like structures. However, instead of only one vortex, four JRVs interlaced in that case. The ends of the JRVs cross perpendicularly at a certain point in space (see figure 1_a_). Here, the detached (opposite) JRVs oscillate \(\pi\) out of phase, whereas adjacent JRVs do so with a lag of \(\pi/2\). Akashi _et al._ (2022) demonstrated that the JRVs can form a lattice structure of different vortices, which determines a fundamental flow mode that, for the considered combinations of \(Ra\) and \(Pr\), can dominate the dynamics at moderate, and possibly also at very large, aspect ratios. Although JRVs have also been detected in water with moderate \(Pr\approx 5\), liquid metals offer a number of advantages for such studies. The velocity field in liquid metal convection is strongly inertia-dominated owing to the low viscosity and high density of the fluid. As a result, the JRV-induced oscillations reach much stronger amplitudes than in water or air.
While the velocity field in low-\(Pr\) liquid metal at comparable temperature gradients is significantly more turbulent than that of water or air, the temperature field exhibits considerably more coherence than the velocity field due to the large thermal diffusivity. Thus, the JRV-induced oscillations can be detected very well both in the velocity field and in the temperature field. As such, liquid metals are well suited for investigating the JRV-like flow dynamics. The objective of the present work is to investigate in more detail the aspect-ratio and geometry dependence of the three-dimensional oscillatory JRV-like large-scale circulation in liquid-metal thermal convection. In particular, we examine how increasing aspect ratios lead to a lattice of oscillatory flow patterns via the formation of JRVs, starting from the smallest structural building block and progressing to the more interlaced JRVs at higher aspect ratios. To this end, we study the LSC dynamics in RBC of liquid metal with \(Pr=0.03\) in square cuboids with different aspect ratios, which vary from 2 to 5, using both experimental and numerical approaches.

## 2 Methods

### Direct numerical simulations

Thermal convection under the assumption of the Oberbeck-Boussinesq approximation is described by the following Navier-Stokes, energy, and continuity equations:
\[D_{t}\mathbf{u} = \nu\mathbf{\nabla}^{2}\mathbf{u}-\mathbf{\nabla}p+\alpha g(T-T_{0})\mathbf{e}_{z}, \tag{1}\]
\[D_{t}T = \kappa\mathbf{\nabla}^{2}T, \tag{2}\]
\[\mathbf{\nabla}\cdot\mathbf{u} = 0. \tag{3}\]
Here, \(D_{t}\) denotes the substantial derivative, \(\mathbf{u}=(u_{x},u_{y},u_{z})\) is the velocity vector field, \(p\) is the reduced kinematic pressure, \(T\) the temperature, \(T_{0}=(T_{+}+T_{-})/2\) is the arithmetic mean of the top (\(T_{-}\)) and bottom (\(T_{+}\)) temperatures, and \(\mathbf{e}_{z}\) is the unit vector that points upward. The considered domain is a square cuboid with the height \(H\) and equal width \(W\) and length \(L\), \(W=L\), so that the domain aspect ratio equals \(\Gamma\equiv L/H\).
The system (1)-(3) is closed by the following boundary conditions: no-slip for the velocity at all boundaries, \(\mathbf{u}=0\); constant temperatures at the end faces of the box, i.e., \(T=T_{+}\) at the bottom plate at \(z=0\) and \(T=T_{-}\) at the top plate at \(z=H\); and an adiabatic boundary condition at the side walls, \(\partial T/\partial\mathbf{n}=0\), where \(\mathbf{n}\) is the vector orthogonal to the surface. Equations (1)-(3) are non-dimensionalised by using the height \(H\), the free-fall velocity \(u_{ff}\), the free-fall time \(t_{ff}\), and the temperature difference between the heated plate and the cooled plate, \(\Delta\), \[u_{ff}\equiv(\alpha gH\Delta)^{1/2},\qquad t_{ff}\equiv H/u_{ff}\,,\qquad \Delta\equiv T_{+}-T_{-}, \tag{4}\] as units of length, velocity, time and temperature, respectively. The resulting dimensionless equations are solved numerically using the latest version (Reiter _et al._, 2022, 2021) of the direct numerical solver goldfish (Shishkina _et al._, 2015; Kooij _et al._, 2018), which applies a fourth-order finite-volume discretisation on staggered grids. Three-dimensional direct numerical simulations (DNS) were performed for square cuboid domains with the aspect ratios \(\Gamma=2\), \(2.5\), \(3\), and \(5\). The utilised staggered computational grids, which are clustered near all rigid walls, are sufficiently fine to resolve the Kolmogorov microscales (Shishkina _et al._, 2010), see Tables 1 and 2.

\begin{table} \begin{tabular}{c c c c c c c c c c c c} \hline \hline \(\Gamma\) & \(Pr\) & \(Ra\) & \(N_{x}\) & \(N_{y}\) & \(N_{z}\) & \(\mathcal{N}_{\theta}\) & \(\mathcal{N}_{\mathrm{v}}\) & \(\delta_{\theta}/H\) & \(\delta_{\mathrm{v}}/H\) & \(h_{\mathrm{K}}\) & \(h_{\mathrm{DNS}}/h_{\mathrm{K}}\) \\ \hline 2 & 0.03 & \(1.0\times 10^{6}\) & 600 & 600 & 300 & 35 & 10 & \(9.9\times 10^{-2}\) & \(2.5\times 10^{-2}\) & \(3.9\times 10^{-3}\) & 0.94 \\ \hline 2.5 & 0.03 & \(1.2\times 10^{5}\) & 750 & 750 & 300 & 58 & 16 & \(1.7\times 10^{-1}\) & \(4.4\times 10^{-2}\) & \(7.9\times 10^{-3}\) & 0.47 \\ & & \(1.0\times 10^{6}\) & 750 & 750 & 300 & 33 & 9 & \(9.4\times 10^{-2}\) & \(2.4\times 10^{-2}\) & \(3.8\times 10^{-3}\) & 0.97 \\ \hline 3 & 0.03 & \(1.0\times 10^{5}\) & 720 & 720 & 240 & 47 & 13 & \(1.7\times 10^{-1}\) & \(4.4\times 10^{-2}\) & \(8.3\times 10^{-3}\) & 0.56 \\ & & \(4.05\times 10^{5}\) & 780 & 780 & 260 & 46 & 15 & \(1.2\times 10^{-1}\) & \(3.2\times 10^{-2}\) & \(5.2\times 10^{-3}\) & 0.95 \\ & & \(1.0\times 10^{6}\) & 900 & 900 & 300 & 34 & 9 & \(9.6\times 10^{-2}\) & \(2.4\times 10^{-2}\) & \(3.8\times 10^{-3}\) & 0.96 \\ \hline 5 & 0.03 & \(1.2\times 10^{5}\) & 1500 & 1500 & 300 & 56 & 16 & \(1.7\times 10^{-1}\) & \(4.2\times 10^{-2}\) & \(7.8\times 10^{-3}\) & 0.47 \\ & & \(1.0\times 10^{6}\) & 1500 & 1500 & 300 & 33 & 9 & \(9.3\times 10^{-2}\) & \(2.4\times 10^{-2}\) & \(3.8\times 10^{-3}\) & 0.97 \\ \hline \hline \end{tabular} \end{table} Table 1: Details on the conducted DNS, including the number of nodes \(N_{x}\), \(N_{y}\), \(N_{z}\) in the directions \(x\), \(y\) and \(z\), respectively; the number of nodes within the thermal boundary layer, \(\mathcal{N}_{\theta}\), and within the viscous boundary layer, \(\mathcal{N}_{\mathrm{v}}\); the relative thickness of the thermal boundary layer, \(\delta_{\theta}/H\), and of the viscous boundary layer, \(\delta_{\mathrm{v}}/H\); the Kolmogorov microscale, \(h_{\mathrm{K}}\); and the relative mean grid stepping, \(h_{\mathrm{DNS}}/h_{\mathrm{K}}\).
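For reference, applying the non-dimensionalisation (4), i.e., measuring lengths in units of \(H\), velocities in units of \(u_{ff}\), time in units of \(t_{ff}\) and using the reduced temperature \(\theta=(T-T_{0})/\Delta\), brings equations (1)-(3) into the dimensionless form \[D_{t}\mathbf{u}=\sqrt{Pr/Ra}\,\mathbf{\nabla}^{2}\mathbf{u}-\mathbf{\nabla}p+\theta\,\mathbf{e}_{z},\qquad D_{t}\theta=(Ra\,Pr)^{-1/2}\,\mathbf{\nabla}^{2}\theta,\qquad\mathbf{\nabla}\cdot\mathbf{u}=0,\] so that, apart from the geometry (the aspect ratio \(\Gamma\)), the Rayleigh and Prandtl numbers are the only control parameters. This dimensionless form is a standard consequence of the free-fall scaling and is stated here for orientation; it is not quoted from the original text.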
### Experimental set-up

A schematic of the experimental set-up is presented in figure 2 along with the measuring positions of the ultrasound probes. The set-up consists of a cuboid vessel with a base area of \(L\times L=200\times 200\) mm\({}^{2}\) and a height \(H=66\) mm, resulting in an aspect ratio \(\Gamma\approx 3\). The top and bottom plates of this vessel are made of copper, whereas the side walls are made of polyvinyl chloride (PVC) of 30 mm thickness. The vessel is filled with the eutectic liquid metal alloy GaInSn (gallium-indium-tin), which serves as the working fluid in the experiment. Thermophysical properties of GaInSn are reported in Plevachuk _et al._ (2014). In particular, the melting point of GaInSn is \(10.5^{\circ}\)C and the Prandtl number equals \(Pr\approx 0.03\). The liquid layer enclosed within the vessel is heated from the bottom and cooled from the top by adjusting the temperature of water flowing through channels in the copper plates. The temperature of the water in these channels is held constant at the set values via two external thermostats. To minimise heat losses, the tubes transporting the hot and cold water and the entire vessel are wrapped in about 30 mm thick insulating foam and an additional envelope. Two platinum resistance thermometers (Pt-100, accuracy of \(\pm 0.005\)) are utilised to accurately monitor the temperatures of the water entering (\(T_{in}\)) and leaving (\(T_{out}\)) the hot and cold plates. These temperature readings are essential for measuring the non-dimensional convective heat transport, the Nusselt number \(Nu\), expressed as \(Nu=\dot{\Phi}/\dot{\Phi}_{cond}\). Here, \(\dot{\Phi}_{cond}=\lambda L^{2}\Delta/H\) is the conductive heat flux, with \(\lambda\) being the thermal conductivity of the liquid metal, and \(\dot{\Phi}=\rho c_{p}\dot{V}(T_{in}-T_{out})\) is the total heat flux exchanged in the set-up, where \(\rho\) and \(c_{p}\) are the density and the isobaric heat capacity of water, and \(\dot{V}\) is the flow rate of the circulating water, determined via an axial turbine flow sensor at the cooling outlet of the set-up. Prior to the measurements, calibrations are performed to account for the measurement uncertainty and the heat losses of the set-up: hose split valves are used to split the cold and hot water outlets, so that one cold-hot pair feeds the top plate and the other pair feeds the bottom plate, while the temperature of both plates is kept at a set temperature of \(20^{\circ}\)C using the external thermostats. Once the temperature in the plates reaches equilibrium, an hour-long time series of temperature readings from both sets of thermocouples is recorded. Using the least-squares method, offsets of each of these thermocouples are extracted, which are then used to correct the temperature measurements. This procedure gives a lower threshold for the temperature difference attainable in the set-up: measurements below \(\Delta=0.22^{\circ}\)C are not reliable. The measured temperature differences realised in this set-up cover the range \(0.27^{\circ}\)C \(\leq\Delta\leq 16^{\circ}\)C, with Rayleigh numbers in the range \(2.9\times 10^{4}\leq Ra\leq 1.6\times 10^{6}\). The experimental results presented here (see table 2) are recorded after the temperature difference between the hot and the cold plates has reached a constant value, i.e., when the system has attained thermal equilibrium.
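To make the definitions of this section concrete, the following minimal Python sketch (an illustration, not part of the original study) evaluates the control parameters \(Ra\) and \(Pr\) of the experimental cell and the measured Nusselt number \(Nu=\dot{\Phi}/\dot{\Phi}_{cond}\). The GaInSn and water property values are approximate, representative numbers; the authoritative thermophysical data are those of Plevachuk _et al._ (2014).

```python
# Minimal sketch (illustration only). Property values are approximate, room-temperature
# figures for GaInSn and water; see Plevachuk et al. (2014) for the reference data.

def control_parameters(delta_T, H=0.066, alpha=1.24e-4, nu=3.4e-7, kappa=1.05e-5, g=9.81):
    """Return (Ra, Pr) for a layer of height H [m] and temperature difference delta_T [K]."""
    Ra = alpha * g * delta_T * H**3 / (kappa * nu)
    Pr = nu / kappa
    return Ra, Pr

def nusselt_from_heat_flux(V_dot, T_in, T_out, delta_T, H=0.066, L=0.2,
                           lambda_metal=24.0, rho_water=998.0, cp_water=4182.0):
    """Nu = Phi_dot / Phi_dot_cond, with Phi_dot = rho*cp*V_dot*(T_in - T_out) the heat flux
    carried by the cooling water (V_dot in m^3/s) and Phi_dot_cond = lambda*L^2*delta_T/H
    the conductive flux through the liquid-metal layer (lambda_metal assumed, in W/(m K))."""
    phi_dot = rho_water * cp_water * V_dot * (T_in - T_out)
    phi_dot_cond = lambda_metal * L**2 * delta_T / H
    return phi_dot / phi_dot_cond

if __name__ == "__main__":
    for dT in (0.27, 16.0):
        Ra, Pr = control_parameters(dT)
        print(f"delta_T = {dT:5.2f} K  ->  Ra = {Ra:.1e},  Pr = {Pr:.3f}")
    # With these approximate properties, delta_T = 0.27...16 K spans roughly
    # Ra = 3e4...1.6e6, consistent with the range quoted above.
```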
Principles of Ultrasound Doppler Velocimetry (UDV), a technique widely used for opaque flow diagnostics, are implemented to determine the fluid velocity (Tsuji _et al._, 2005; Eckert _et al._, 2007). Nine UDV transducers (TR0805SS, Signal Processing SA) are installed in direct contact with the fluid. Each of these transducers sequentially acquires instantaneous velocity profiles along the measuring lines shown in figure 2, using multiplexing. The velocity measurements are performed with a resolution of about 0.5 mm/s and a sampling frequency of 1 Hz.

Figure 2: Schematics of the experimental set-up: (_a–c_) three projections showing (_a_) the top view and (_b,c_) two side views and (_d_) a three-dimensional sketch, illustrating the positions of all ultrasound transducers. Each ultrasound transducer is marked with a letter that indicates the distance to the bottom (“T” – close to the top, “M” – matching the middle plane, “B” – close to the bottom) followed by a number. All dimensional distances in (_a–c_) are given in mm. Blue and red colours indicate the cooled and heated plates respectively.

For the numerical results, statistical equilibrium or convergence is reached after several hundred free-fall time units. Throughout this paper, the length, velocity, and time are made non-dimensional using the cell height \(H\), the free-fall velocity \(u_{ff}\), and the free-fall time unit \(t_{ff}\equiv H/u_{ff}\), respectively, see equation (4).

### Phase averaging procedure

To analyse the 3D flow dynamics from the experimental data, a whole-field mapping of the velocity field would be required, which is currently not possible with the UDV technique. However, such flow-field information can be accessed via the numerical simulations. The flow pattern consists of oscillatory coherent structures over a range of scales. To visualise the coherent structures, it is advisable to remove the background turbulent fluctuations using statistical means. Pandey _et al._ (2018) implemented an averaging method, which was later adopted by Akashi _et al._ (2022) in the form of a phase averaging algorithm. In this algorithm, one complete oscillation period, \(\tau_{OS}=1/f_{OS}\), is divided equally into a certain number of intervals or phases (e.g. 16). Averaging of the temperature and velocity field data is carried out within each of these phases. This method reveals the underlying coherent structures in a strongly oscillating flow field, such as that encountered in the three-dimensional cellular regime by Akashi _et al._ (2022). Vogt _et al._ (2018) used conditional averaging to showcase the 3D structures of the JRVs in a cylinder. The method of conditional averaging is similar to that of the phase-averaging process, with the only difference being the choice of the conditioning intervals. In the conditional-averaging approach, one complete cycle is divided into seven intervals bounded by multiples of the standard deviation of the average fluid temperature. In the present study, the phase averaging method is applied to the simulation data, which cover 16 oscillation periods for the cases \(\Gamma=2.5\), \(\Gamma=3\), \(\Gamma=5\), and 8 oscillation periods for the case \(\Gamma=2\). Every oscillation period is divided into 16 phases and each phase is represented by 20 snapshots of all flow fields. Then the corresponding snapshots are averaged within each phase.
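The binning-and-averaging step described above can be summarised by the following minimal sketch (an illustration, not the actual post-processing code used for the DNS data): each snapshot is assigned to one of the 16 phase bins according to its position within the oscillation period \(\tau_{OS}=1/f_{0}\), and all snapshots falling into the same bin are averaged.

```python
import numpy as np

def phase_average(snapshots, times, f0, n_phases=16):
    """Phase-average a sequence of flow-field snapshots.

    snapshots : array of shape (n_snapshots, ...), e.g. temperature or velocity fields
    times     : array of shape (n_snapshots,), snapshot times (same time units as 1/f0)
    f0        : dominant oscillation frequency
    Returns an array of shape (n_phases, ...) with the averaged field of each phase bin.
    Assumes that every phase bin contains at least one snapshot.
    """
    snapshots = np.asarray(snapshots)
    phase = (np.asarray(times) * f0) % 1.0                     # position within the period, in [0, 1)
    bins = np.minimum((phase * n_phases).astype(int), n_phases - 1)
    averaged = np.zeros((n_phases,) + snapshots.shape[1:])
    for k in range(n_phases):
        averaged[k] = snapshots[bins == k].mean(axis=0)
    return averaged
```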
Finally, the conditional averaging is applied to the flow fields within each phase and for all oscillation periods, which gives a phase-averaged temporal evolution of all flow fields during the period.

## 3 Results

The results of all conducted DNS and experiments are summarised in Table 2.

\begin{table} \begin{tabular}{l c c c c c c} \hline \(\Gamma\) & & \(Pr\) & \(Ra\) & \(f_{0}H^{2}/\kappa\) & \(f_{0}(H+L^{\prime})^{2}/\kappa\) & \(Nu\) \\ \hline 2 & DNS & \(0.03\) & \(1.0\times 10^{6}\) & 9.69 & 87.22 & 5.07 \\ \hline 2.5 & DNS & \(0.03\) & \(1.2\times 10^{5}\) & 2.60 & 31.81 & 2.92 \\ & DNS & & \(1.0\times 10^{6}\) & 7.21 & 88.32 & 5.31 \\ \hline 3 & DNS & \(0.03\) & \(1.0\times 10^{5}\) & & & 2.92 \\ & DNS & & \(4.05\times 10^{5}\) & 3.32 & 53.10 & 4.04 \\ & DNS & & \(1.0\times 10^{6}\) & 5.22 & 83.52 & 5.18 \\ \hline 3.03 & Exp. & \(0.03\) & \(2.9\times 10^{4}\) & & & 2.14 \\ & Exp. & & \(3.5\times 10^{4}\) & & & 2.37 \\ & Exp. & & \(6.2\times 10^{4}\) & & & 2.67 \\ & Exp. & & \(6.4\times 10^{4}\) & & & 2.55 \\ & Exp. & & \(6.8\times 10^{4}\) & & & 2.73 \\ & Exp. & & \(6.9\times 10^{4}\) & & & 2.71 \\ & Exp. & & \(9.4\times 10^{4}\) & & & 2.96 \\ & Exp. & & \(1.0\times 10^{5}\) & & & 2.95 \\ & Exp. & & \(1.1\times 10^{5}\) & & & 3.02 \\ & Exp. & & \(1.2\times 10^{5}\) & & & 3.12 \\ & Exp. & & \(1.6\times 10^{5}\) & & & 3.37 \\ & Exp. & & \(2.7\times 10^{5}\) & & & 3.60 \\ & Exp. & & \(3.2\times 10^{5}\) & 3.62 & 58.81 & 3.70 \\ & Exp. & & \(4.1\times 10^{5}\) & 4.61 & 74.93 & 3.89 \\ & Exp. & & \(5.1\times 10^{5}\) & 5.09 & 82.61 & 4.19 \\ & Exp. & & \(6.3\times 10^{5}\) & 5.73 & 93.13 & 4.43 \\ & Exp. & & \(7.7\times 10^{5}\) & 6.21 & 100.81 & 4.61 \\ & Exp. & & \(8.6\times 10^{5}\) & 6.56 & 106.54 & 4.79 \\ & Exp. & & \(9.4\times 10^{5}\) & 6.72 & 109.18 & 4.85 \\ & Exp. & & \(1.0\times 10^{6}\) & 7.22 & 117.24 & 4.99 \\ & Exp. & & \(1.2\times 10^{6}\) & 7.57 & 123.00 & 5.18 \\ & Exp. & & \(1.3\times 10^{6}\) & 8.02 & 130.25 & 5.27 \\ & Exp. & & \(1.6\times 10^{6}\) & 8.53 & 138.60 & 5.51 \\ \hline 5 & DNS & \(0.03\) & \(1.2\times 10^{5}\) & 3.21 & 39.33 & 3.04 \\ & DNS & & \(1.0\times 10^{6}\) & 8.65 & 105.95 & 5.40 \\ \end{tabular} \end{table} Table 2: Details on the conducted DNS and experiments.

For all experimental and numerical data with sufficiently large \(Ra\), an oscillatory behaviour of the LSC was identified. As in the cases \(\Gamma=\sqrt{2}\) and \(\Gamma=2\) of a cylindrical container (Vogt _et al._, 2018), the JRV-like oscillatory structures leave imprints on almost all flow characteristics for the considered ranges of \(Ra\) and \(\Gamma\) of the cuboid domains. The oscillatory behaviour of the LSC is reflected in the temporal evolution of the temperature and of particular components of the velocity field, and is also seen in the temporal evolution of the vertical heat flux. Once the dominant frequency \(f_{0}\) is evaluated (we will discuss this in more detail later), one can analyse the mean flow dynamics within the time period that lasts \(\tau_{OS}=1/f_{0}\). For that, the temporal evolution of the flow fields obtained in the DNS is split into separate periods according to the dominant frequency \(f_{0}\), and then a phase-averaged temporal evolution of all flow fields during the period is calculated. Our DNS for \(Ra=10^{6}\) and \(Pr=0.03\) and two different aspect ratios, \(\Gamma=5\) and \(\Gamma=2.5\), show a very remarkable similarity of the global flow structure and its dynamics. In figure 3, phase-averaged instantaneous temperature distributions in horizontal cross-sections are presented, which are considered at distances \(z=0.5H\) (figure 3_a, b_ and _e, f_) and \(z=0.85H\) (figure 3_c, d_ and _g, h_) from the bottom plate, and at the times \(t=0\) and \(t=0.5\tau_{OS}\). This figure shows patches of upwelling (hot) and downwelling (cold) fluid, with the hot patches connected by a diagonal ridge of upwelling fluid. These patches rotate counterclockwise in the time interval [0, \(0.5\tau_{OS}\)] (see supplementary movies), suggesting the presence of oscillatory flow dynamics which periodically changes the flow topology. For fixed values of \(Ra\) and the cell height \(H\), the spatial length of the convection cell in the case \(\Gamma=5\) is twice as large as in the case \(\Gamma=2.5\). Therefore, for any fixed \(z\), one can expect a similarity of the flow pattern in the horizontal cross-section at the height \(z\) in the case \(\Gamma=2.5\) with the flow pattern in one quarter of the area of the horizontal cross-section at the same height \(z\) in the case \(\Gamma=5\). Indeed, figure 3 shows that the temperature distribution in the region marked with black dashed lines for \(\Gamma=5\) (figure 3 \(a\)-\(d\)) is very similar to the temperature distribution in the corresponding cross-sections for \(\Gamma=2.5\) (figure 3 \(e\)-\(h\)) if considered at the same phase. To gain more evidence for this similarity, we evaluate the horizontal components of the velocity, \(u_{y}\) and \(u_{x}\), along the lines marked T1 and T2 in figure 3 (\(c\), \(d\)) (\(\Gamma=5\)) and compare them with the corresponding horizontal components of the velocity along the lines marked T1 and T2 in figure 3 (\(g\), \(h\)) (\(\Gamma=2.5\)). The temporal evolutions of these velocity components for \(\Gamma=5\) and \(\Gamma=2.5\) are compared in figure 4 for \(Ra=10^{6}\) and \(z=0.85H\). One can see that the lower halves of the spatio-temporal velocity maps in figure 4 (\(a\), \(b\)), which correspond to the measurements along the lines T1 and T2 within the 1/4 area that is marked in figure 3 (\(c\), \(d\)) with the black dashed lines, mimic the spatio-temporal velocity maps in figure 4 (\(c\), \(d\)), which correspond to the measurements along the lines T1 and T2 in figure 3 (\(g\), \(h\)). Qualitatively, the signals for \(\Gamma=5\) and \(\Gamma=2.5\) are similar; however, the frequency of the oscillations in the latter case is slightly lower than in the former case, with six versus five oscillations during the same time interval. Also, at \(\Gamma=5\) the signal seems to be less stable than in the case \(\Gamma=2.5\).

Figure 3: Phase-averaged snapshots of the temperature at a distance \(z\) from the bottom: (\(a\), \(b\), \(e\), \(f\)) \(z=0.5H\) and (\(c\), \(d\), \(g\), \(h\)) \(z=0.85H\), for different container aspect ratios (\(a\)–\(d\)) \(\Gamma=5\) and (\(e\)–\(h\)) \(\Gamma=2.5\), as obtained in the simulations for \(Ra=10^{6}\) and \(Pr=0.03\) (see supplementary movies). The virtual probe lines T1 and T2 (see also figure 4) are indicated with dashed white lines. The black squares indicate the areas that correspond to the areas of the container with \(\Gamma=2.5\).

Figure 5 shows a comparison of the experimental and simulation results for the same \(\Gamma=3\) and \(Ra=10^{6}\). Here a comparison of the temporal evolution of the horizontal components of the velocity is made exactly at the same locations in the DNS and in the experiment. One can see a good qualitative agreement between the experimental and simulation data.
However, the dominant frequency obtained in the experiment is slightly higher than the frequency evaluated from the simulation data. More precisely, we obtained on average 11 oscillations in the experiment versus 9 oscillations in the DNS for the same time interval.

### Three-dimensional cellular flow dynamics

Figure 6 shows phase-averaged streamlines in Rayleigh–Bénard convection for \(Pr=0.03\), \(Ra=10^{6}\), as obtained in the direct numerical simulations for cuboid domains for all considered aspect ratios \(\Gamma\) at the beginning (\(t=0\)) and at the middle (\(t=0.5\tau_{OS}\)) of the oscillation period. There are four interlacing JRVs in the case \(\Gamma=5\) (figure 6_a, b_). The flow structure resembles a cellular structure which was previously observed by Akashi _et al._ (2022) for \(\Gamma=5\) and \(Ra\approx 1.2\times 10^{5}\). There are only two JRVs for the aspect ratios \(\Gamma=3\) (figure 6_c, d_) and \(\Gamma=2.5\) (figure 6_e, f_), which is in contrast to the lattice of four JRVs in the \(\Gamma=5\) case. What is more striking is that the JRVs in the \(\Gamma=3\) and 2.5 cells represent a quadrant of the JRV lattice of the \(\Gamma=5\) cell (see also figure 1). Only one vortex is observed in the convection cell with \(\Gamma=2\). This interplay between the aspect ratio and the organisation of the JRVs highlights the influence of the shape and size of the container. It also raises an important question as to whether there is a certain hierarchy in these systems when it comes to the reorganisation of the JRVs within a container. A closer look at the 3D flow structure for \(\Gamma=2.5\) shows how the two vortices connect to each other in the central part of the box (see supplementary movies). In this case, the JRVs are connected with two vortices in the upper part and two vortices in the lower part of the domain.

Figure 4: Spatio-temporal velocity maps for \(Ra=10^{6}\) at \(z=0.85H\), as obtained in the direct numerical simulations at the virtual probe lines T1 (_a, c_) and T2 (_b, d_), for the aspect ratios \(\Gamma=5\) (_a, b_) and \(\Gamma=2.5\) (_c, d_). The black dashed lines in (_a, b_) correspond to the measurements in the cuboid with \(\Gamma=2.5\) (_c, d_), respectively.

Figure 5: Spatio-temporal velocity maps for \(Ra=10^{6}\), as obtained in the direct numerical simulations (left column) and in the experimental measurements (right column), for the aspect ratio \(\Gamma=3\). The numerical data are probed exactly at the same locations where the UDV sensors are located in the experiment: T3 (\(a\), \(b\)), B3 (\(c\), \(d\)), T2 (\(e\), \(f\)), B2 (\(g\), \(h\)), T1 (\(i\), \(j\)), B4 (\(k\), \(l\)) and M3 (\(m\), \(n\)).

Figure 6: Phase-averaged streamlines in Rayleigh–Bénard convection for \(Pr=0.03\), \(Ra=10^{6}\), as obtained in the direct numerical simulations for parallelepiped domains with different aspect ratios: \(\Gamma=5\) (\(a\), \(b\)), \(\Gamma=3\) (\(c\), \(d\)), \(\Gamma=2.5\) (\(e\), \(f\)) and \(\Gamma=2\) (\(g\), \(h\)). Blue (red) colour corresponds to a negative (positive) value of the vertical velocity component \(u_{z}\).

Figure 7 shows the phase-averaged streamlines for the same case at \(t=0.25\tau_{OS}\), but now the connecting vortices in the upper central part of the domain are highlighted with red colour in figure 7(\(a\)), while the connecting vortices in the lower part of the domain are highlighted with blue colour in figure 7(\(b\)).
Thus, the colours here reflect the distance \(z\) from the bottom plate, i.e., the vertical coordinate of the structure. For clarity, all other streamlines are shown transparent. It is also worth noting that, for the same \(Ra\), the JRVs are more stable and better pronounced for \(\Gamma=2.5\) compared to \(\Gamma=3\). Figure 8 shows in detail the spatio-temporal velocity and temperature maps for \(\Gamma=2\). In this case (see figure 6\(g,h\)) there is only one vortex, which rotates in the direction opposite to the LSC direction. Note that for the considered \(Ra=10^{6}\), the LSC is oriented along a vertical wall of the container rather than diagonally. The flow pattern is similar to that obtained for the \(\Gamma=2\) cylinder (Vogt _et al._, 2018). To examine this similarity, we evaluate, at mid-height \(z=0.5H\) (figure 8\(a\)), the horizontal velocity component \(u_{x}\) and the temperature along the straight line marked "M" and along the circle marked "C" in the figure, which correspond, respectively, to the diameter and the midplane circumference of an inscribed cylinder (as considered in Vogt _et al._, 2018). The dominant frequency \(f_{0}\) is visible in the spatio-temporal maps of both velocity (figure 8\(b\)) and temperature (figure 8\(c\)). Figure 8\(b\) resembles figure 2\(c\) in Vogt _et al._ (2018). The spatio-temporal maps of the temperature along the circle "C" (figure 8\(d\)) also look similar to those in figure 4(\(a\)) in Vogt _et al._ (2018) and indicate the presence of a dominant frequency. It is worth noting that the oscillations of the LSC orientation, which is characterised by the azimuthal angle \(\xi_{LSC}\) and computed here using the single-sinusoidal fitting method of Cioni _et al._ (1997) (indicated with the green line in figure 8\(d\)), are strong, although in a cuboid domain the LSC orientation is expected to be more stable than in a cylindrical domain. This is clearly demonstrated by longer time series, which are presented in figure 9. In contrast to the relatively short time interval with only 9 oscillation periods, during which the LSC orientation was mostly stable (figure 8), the longer time series reveal quite strong temporal oscillations of the azimuthal angle \(\xi_{LSC}\) (figure 9\(c\)). Spatio-temporal maps for both velocity and temperature (figure 9\(a\), \(b\)) show that intervals with relatively stable regular oscillations alternate with intervals with less stable signals. This leads to difficulties for conditional averaging in this case. Therefore, as mentioned in section 2.3, we used averaging over only 8 oscillation cycles for \(\Gamma=2\), whereas for the other values of \(\Gamma\) the averaging was performed over 16 oscillation cycles.

### Oscillation frequency

Any periodic oscillation of the JRV has a certain dominant frequency \(f_{0}\). For each studied case, these frequencies were extracted both from the velocity and from the temperature time series. The frequencies were non-dimensionalised using a dissipative time scale. Two length scales were used for the dissipative time scale: the height of the domain \(H\), and the overall path length of the LSC which, following Cheng _et al._ (2022), is coarsely approximated as \(2l\), with \(l\equiv H+L^{\prime}\), where \(L^{\prime}\) equals \(2H\), \(2.5H\), \(3H\) and \(5/2\,H\) for \(\Gamma=2\), \(2.5\), \(3\) and \(5\), respectively.
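A minimal sketch of how such a dominant frequency can be extracted from a single probe time series and non-dimensionalised with the two time scales introduced above is given below (an illustration only; the paper does not specify the exact spectral estimator that was used).

```python
import numpy as np

def dominant_frequency(signal, dt):
    """Frequency of the largest spectral peak of a uniformly sampled signal (sampling step dt)."""
    signal = np.asarray(signal) - np.mean(signal)      # remove the mean
    power = np.abs(np.fft.rfft(signal))**2             # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return freqs[np.argmax(power[1:]) + 1]             # skip the zero-frequency bin

def normalised_frequencies(f0, H, Gamma, kappa):
    """Return (f0*H^2/kappa, f0*(H+L')^2/kappa), with L' as defined in the text."""
    L_prime = {2: 2 * H, 2.5: 2.5 * H, 3: 3 * H, 5: 2.5 * H}[Gamma]   # for Gamma = 5, L' = 5/2*H
    return f0 * H**2 / kappa, f0 * (H + L_prime)**2 / kappa
```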
Note that for \(\Gamma=5\) we take \(L^{\prime}=5/2\,H\), since for the \(\Gamma=5\) container the flow pattern consists of two JRV building blocks repeated in both horizontal directions, as shown above. The diffusion times are then \(\tau_{\kappa}=H^{2}/\kappa\) and \(\tau_{\kappa}^{I}=(H+L^{\prime})^{2}/\kappa\), with the corresponding diffusion frequencies \(f_{\kappa}=1/\tau_{\kappa}=\kappa/H^{2}\) and \(f_{\kappa}^{I}=1/\tau_{\kappa}^{I}=\kappa/(H+L^{\prime})^{2}\), respectively. Figure 10 shows the values of the dominant frequencies \(f_{0}\) versus \(Ra\) for all considered values of \(\Gamma\). The data from the present study are shown in red and blue colours; all other data are shown in grey. The values that the frequencies take in the different flow configurations are presented in table 2. The variation of \(f_{0}\), normalised with the dissipative time scale \((H+L^{\prime})^{2}/\kappa\), is shown in figure 10\(a\) as a function of \(Ra\). One can see that our new experimental data for \(\Gamma=3\) (red circles) and those from Akashi _et al._ (2022) for \(\Gamma=5\) (grey circles) collapse onto one master scaling line. Note that for \(\Gamma=3\) the oscillatory JRV mode occurs at higher \(Ra\) compared to the case of \(\Gamma=5\). The so-called roll regime at lower \(Ra\) does not have clear dominant frequencies; therefore, only data for the cases where an oscillatory mode exists are presented in the figure. A comparison between the experimental and simulation results for \(\Gamma=3\) shows that the oscillation frequencies obtained in the experiment (as already seen in figure 5) are slightly higher than those evaluated from the simulation data. The numerically obtained normalised frequencies for \(\Gamma=3\), \(2.5\), and \(2\) are very close to each other. For \(\Gamma=5\), which is the only case with two JRV building blocks, the normalised frequency is generally higher than the frequencies for the other \(\Gamma\) (cf. crosses and asterisk at \(Ra=10^{6}\)). The numerically obtained dimensionless frequency for \(\Gamma=5\) at \(Ra=10^{6}\) is in very good agreement with the scaling line for \(\Gamma=5\) reported in Akashi _et al._ (2022) and with the new experimental data for \(\Gamma=3\). For a lower Rayleigh number, \(Ra=1.2\times 10^{5}\), the numerically obtained dimensionless frequency for \(\Gamma=5\) is also in very good agreement with the numerical and experimental data from Akashi _et al._ (2022). Experimental data for a \(\Gamma=2\) cylinder from Vogt _et al._ (2018) and Cheng _et al._ (2022) give scaling relations with slightly lower exponent values and, for the considered \(Ra\) range, lie slightly below the fitting lines for \(\Gamma=3\) and \(5\). The frequency value from our \(\Gamma=2\) box simulations is close to the \(\Gamma=2\) cylinder experimental data of Vogt _et al._ (2018). The numerical data for all considered \(\Gamma\) lie between the fitting lines obtained in the experiments for the \(\Gamma=3\) and \(5\) boxes and the \(\Gamma=2\) cylinder. To sum up, with the frequency normalisation based on the path length \(l\), all the experimental and numerical data show a very similar dependence of \(f_{0}\) on \(Ra\) across all aspect ratios. In figure 10\(b\) we normalise the frequency \(f_{0}\) with the thermal diffusion time \(\tau_{\kappa}=H^{2}/\kappa\).
In that case, without taking into account the spatial length of the vortex path, the deviation between the data points for \(\Gamma=5\) and \(2.5\) remains the same (as the length scale is the same for these two cases), while the data points for \(\Gamma=3\) move down and the points for \(\Gamma=2\) move significantly up. We conclude that the spatial length of the domain is an important control parameter, which together with the height of the fluid layer determines the relevant length and the scaling relations for the oscillation frequency.

Figure 7: Phase-averaged streamlines in Rayleigh–Bénard convection for \(Pr=0.03\), \(Ra=10^{6}\), as obtained in the direct numerical simulations for \(\Gamma=2.5\), \(t=0.25\tau_{OS}\). Vortices in the upper part of the domain are highlighted in (\(a\)), vortices in the lower part of the domain are highlighted in (\(b\)). For convenience, all other streamlines outside the centre of the domain are shown transparent. Colours correspond to the vertical coordinate of the structure \(z\).

Figure 8: Data for \(Ra=10^{6}\) for \(\Gamma=2\), as obtained in the direct numerical simulations. Measurement positions in the midplane at \(z=0.5H\) are shown in (\(a\)): the central straight line marked "M" and the circle marked "C" are shown in black and blue colours, respectively. Spatio-temporal (\(b\)) velocity and (\(c\)) temperature maps along the M-line. (\(d\)) Spatio-temporal temperature map along the C-circle. The instantaneous position angle of the LSC is marked with the green line (cf. figure 4 in Vogt _et al._ (2018), for a cylinder with the same \(\Gamma\) and \(Ra\)).

### Heat transport

In this section, the effect of the flow dynamics on the heat transport is discussed. The volume-averaged Nusselt number \(Nu_{vol}\) can be evaluated from the simulation data as follows: \[Nu_{vol}=\langle\Omega_{z}\rangle_{V,t}, \tag{5}\] where \(\Omega_{z}\) is the component of the full heat flux vector \(\mathbf{\Omega}\equiv(\mathbf{u}T-\kappa\mathbf{\nabla}T)/(\kappa\Delta/H)\) along the vertical axis and \(\langle\cdot\rangle_{V,t}\) denotes the combined time and volume average. In the experiments, the Nusselt numbers \(Nu\) are computed as discussed in section 2.2. The global heat transport scaling across various \(Ra\) is shown in figure 11. The flow dynamics does not seem to have any dramatic effect on the heat transport. This is true for all studied aspect ratios. The cases without oscillations are shown in the figure with open symbols. The fitted curve gives a scaling relation \(Nu=0.22\times Ra^{0.23}\), which differs slightly from that reported by Vogt _et al._ (2021): \(Nu=0.166\times Ra^{0.25}\). However, this difference can be attributed to the difference in the geometry of the cell. An interesting feature of the JRV regimes is that the Nusselt numbers, which are computed using the phase-average method as discussed in section 2.3, demonstrate an oscillatory behaviour during the JRV cycle. This is demonstrated for one period of oscillation in figure 12. Qualitatively, the oscillatory behaviour of the vertical heat flux \(Nu(t)\) during the JRV cycle is the same for all considered \(\Gamma\). It is clear that \(Nu\) oscillates with distinct maxima and minima. This sort of behaviour was also reported in the previous study of Akashi _et al._ (2022) for \(\Gamma=5\). However, the amplitude of the oscillations at a given \(Ra\) differs: it decreases with increasing \(\Gamma\).
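A scaling relation of this form can be obtained, for instance, by a least-squares fit in logarithmic space. The short sketch below (an illustration, not the fitting procedure used by the authors) does this for a few of the experimental \(\Gamma=3.03\) data points of table 2.

```python
import numpy as np

# A few (Ra, Nu) pairs taken from the Gamma = 3.03 experiments in table 2.
Ra = np.array([2.9e4, 1.2e5, 5.1e5, 1.0e6, 1.6e6])
Nu = np.array([2.14, 3.12, 4.19, 4.99, 5.51])

# Fit Nu = a * Ra^b, i.e. log(Nu) = log(a) + b*log(Ra), by linear least squares in log space.
b, log_a = np.polyfit(np.log(Ra), np.log(Nu), 1)
a = np.exp(log_a)
print(f"Nu ~ {a:.2f} * Ra^{b:.2f}")
# For this subset the exponent is close to the quoted fit Nu = 0.22 * Ra^0.23,
# which is based on all data points.
```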
In addition to figure 12, which shows the volume-averaged Nusselt number \(Nu_{vol}\) during one oscillation period, we present in figure 13 the values of \(Nu\) computed over different surfaces: \(Nu_{bot}\) at the bottom plate, \(Nu_{top}\) at the top plate and \(Nu_{mid}\) over the horizontal cross-section in the middle plane at \(z=0.5H\). For all studied values of \(\Gamma\), there is a shift between \(Nu\) evaluated at the plates (\(Nu_{bot}\), \(Nu_{top}\)) and the volume-averaged Nusselt number \(Nu_{vol}\). The maxima and minima of \(Nu\) calculated at the horizontal walls always occur later than those in the \(Nu_{vol}\) evolution. \(Nu_{bot}\) and \(Nu_{top}\) are synchronised with each other. \(Nu_{mid}\) seems to be less smooth and gives a larger difference between the maximum and minimum values compared to \(Nu_{bot}\) and \(Nu_{top}\).

Figure 9: Data for \(Ra=10^{6}\) and \(\Gamma=2\), as obtained in the direct numerical simulations. A similar figure to figure 8 (\(b\)-\(d\)), but the time series is longer. Spatio-temporal (\(a\)) velocity and (\(b\)) temperature maps along the M-line. (\(c\)) Spatio-temporal temperature map along the C-circle. The instantaneous position angle of the LSC is marked with the green line.

Figure 10: Dominant frequencies \(f_{0}\), which are non-dimensionalised using the dissipative time scales (\(a\)) \((H+L^{\prime})^{2}/\kappa\) and (\(b\)) \(H^{2}/\kappa\), as functions of \(Ra\). Here \(L^{\prime}\) equals \(2H\), \(2.5H\), \(3H\) and \(5/2H\) for \(\Gamma=2\), \(2.5\), \(3\) and \(5\), respectively.

Figure 11: Scaling of the Nusselt number \(Nu\) with the Rayleigh number \(Ra\), for all studied \(\Gamma\).

Figure 14 shows phase-averaged isosurfaces of the full heat transport vector \(\mathbf{\Omega}\), as obtained in the direct numerical simulations for cuboid domains at the beginning (\(t=0\)) and at the middle (\(t=0.5\tau_{OS}\)) of the oscillation period. The isosurfaces of the full heat transport vector \(\mathbf{\Omega}\) follow the JRV flow structure at all considered \(\Gamma\) values (cf. figure 6). Figure 15 demonstrates \(\mathbf{\Omega}\)-isosurfaces together with the distribution of the magnitude, \(|\mathbf{\Omega}|\), in the horizontal cross-section at \(z=0.5H\) for \(\Gamma=5\) and \(\Gamma=2.5\). The heat flow is mainly realised in the gaps between the isosurfaces that envelop the JRVs. Thus, the JRVs are not efficient in transporting the heat and are located in the areas of minimum heat flux. Figure 16 shows the vertical component of the local heat flux, \(\Omega_{z}\), at \(z=0.5H\) for \(\Gamma=5\) and \(\Gamma=2.5\). Analogously to the temperature distributions in figure 3, one can see a similarity between the \(\Omega_{z}\) distribution pattern in the case \(\Gamma=2.5\) and the pattern in one quarter of the area in the case \(\Gamma=5\) (see supplementary movies). The resemblance is not complete, however, probably because of the influence of the sidewalls. How the sidewalls affect the movement of the vortices is a subject for future study.

## 4 Discussion

We have presented a combined numerical and experimental investigation of a liquid metal convection flow in different geometries. The Prandtl number in these investigations is \(Pr\approx 0.03\) and the Rayleigh numbers cover the range \(2.9\times 10^{4}\leqslant Ra\leqslant 1.6\times 10^{6}\).
The investigations focus on the influence of the size of the flow domain (via its aspect ratio) on the dominant oscillation modes of the large-scale circulation. Results for four different cuboid domains with varying spatial length-to-height aspect ratios \(\Gamma=5\), \(\Gamma=3\), \(\Gamma=2.5\) and \(\Gamma=2\) were compared with the results of a cylindrical \(\Gamma=2\) cell. The results show that the oscillations in all investigated aspect ratios are due to the presence of jump rope vortices.

Figure 12: Phase-averaged Nusselt number \(Nu(t)\) during one oscillation period, as it is evaluated from the direct numerical simulations for \(Ra=10^{6}\) and different aspect ratios \(\Gamma\) of a parallelepiped container.

A jump rope vortex forms at the centre of the large-scale circulation and moves analogously to a swirling jump rope. However, the direction of motion of the JRV is opposite to the direction of flow of the LSC. The JRV, which was first discovered in a cylindrical \(\Gamma=2\) convection cell (Vogt _et al._, 2018), also forms in a square cuboid domain of aspect ratio \(\Gamma=2\), as demonstrated in this work. The appearance of the JRV is almost identical in the cylindrical and the cuboid domain of the same aspect ratio. If a cylinder is numerically cut out from the rectangular cell, the similarity becomes more pronounced, also with respect to the JRV-induced sidewall temperature distribution. In domains with larger spatial length, the appearance of the JRV vortices changes. For domains with aspect ratios of \(\Gamma=2.5\) and \(\Gamma=3\), the vortices form an orthogonal cross that periodically rotates alternately clockwise and counterclockwise. In a \(\Gamma=5\) cell, four JRVs interlace to form a lattice and oscillate in a synchronised manner. Therefore, a key finding of this work is that the JRV is an extremely robust flow feature that adapts and reorganises depending on the aspect ratio of the domain, with the ability to form an intricate lattice of repeating flow structures in large-aspect-ratio containers. Moreover, our findings further reinforce that the shape of the domain matters: we find a JRV in a square cuboid of \(\Gamma=3\), whereas Cheng _et al._ (2022) did not find any evidence of a JRV in a cylinder of the same aspect ratio. The frequency of the oscillations shows a consistent scaling for the different aspect ratios, with good agreement between numerics and experiment.

Figure 13: Evolution of the phase-averaged Nusselt number \(Nu(t)\) during one oscillation period, as it is evaluated from the direct numerical simulations for \(Ra=10^{6}\) and \(\Gamma=2\) (\(a\)), \(\Gamma=2.5\) (\(b\)), \(\Gamma=3\) (\(c\)) and \(\Gamma=5\) (\(d\)). \(Nu\) is calculated over the top and bottom plates, over the horizontal cross-section in the middle plane at \(z=0.5H\) and over the entire volume.

Slight deviations between the different aspect ratios are likely due to the non-uniform path length of the LSC for the different aspect ratios. The heat transport scaling relations show only minor (if any) deviations between the different flow pattern regimes. The data from the regime close to onset, the convection-roll-dominated regime and the turbulent JRV regime collapse onto a master curve. However, the oscillations of the JRV are clearly visible in the time evolution of the Nusselt number. The frequency of the \(Nu\) oscillations is thereby twice as high as that of the JRVs.
The maxima of the Nusselt numbers occur when the horizontal velocity components reach a minimum during the JRV cycle (see Akashi _et al._, 2022). Questions that are difficult to answer with either experimental or numerical approaches are whether the JRV structures have an upper \(Ra\) limit and whether they are displaced by other structures as soon as \(Ra\) exceeds a certain critical value. In previous experiments, the detected JRVs were stable over two orders of magnitude in \(Ra\) (see Vogt _et al._, 2018). Since the flows in these measurements and simulations are already in a turbulent state, one might expect that the JRV-like oscillatory structures can be observed in the turbulent state and occur for even larger \(Ra\).

Figure 14: Isosurfaces of the magnitude of the full heat transport vector \(\mathbf{\Omega}\equiv(\mathbf{u}T-\kappa\mathbf{\nabla}T)/(\kappa\Delta/H)\), for \(\Gamma=5\) (\(a\), \(b\)) and \(\Gamma=3\) (\(c\), \(d\)), \(\Gamma=2.5\) (\(e\), \(f\)), \(\Gamma=2\) (\(g\), \(h\)), as obtained in the simulations for \(Ra=10^{6}\). The surfaces are coloured by the temperature: blue (red) colour corresponds to the temperature below (above) the arithmetic mean of the top and bottom temperatures.

Figure 15: Isosurfaces of the magnitude of the full heat transport vector \(\mathbf{\Omega}\) are shown together with the \(|\mathbf{\Omega}|\) distribution at \(z=0.5H\) for \(\Gamma=5\) (\(a\)) and \(\Gamma=2.5\) (\(b\)) for \(Ra=10^{6}\). The JRV-like vortex structures are associated with the minimal heat flux.

Figure 16: Phase-averaged vertical component of the local heat flux \(\Omega_{z}\) at \(z=0.5H\) for \(\Gamma=5\) (\(a\), \(b\)) and \(\Gamma=2.5\) (\(c\), \(d\)) for \(Ra=10^{6}\). The black squares indicate the areas that correspond to the areas of the container with \(\Gamma=2.5\) (see supplementary movies).

It is worth noting that JRVs occur not only for low Prandtl numbers such as the one studied here, but have also been detected in a \(\Gamma=2\) cylinder filled with water, which has an approximately two orders of magnitude higher \(Pr\) (Horn _et al._, 2022). Our study poses a few more questions for future work, such as: how do the JRVs behave in even larger containers with even larger spatial lengths, and what role do they play in the formation of convective turbulent superstructures? The present study suggests that, in the case of large \(\Gamma\), the global structure of the oscillatory mode can be thought of as a lattice of interlaced JRV-like building blocks found for the aspect ratio \(\Gamma\approx 2.5\), repeated spatially. However, such investigations come with their own challenges. The numerical cost increases with the square of the domain aspect ratio, whereas the stabilising influence of the sidewalls decreases with increasing aspect ratio, giving the flow more degrees of freedom, which results in JRVs that are less stable. This makes the detection of the JRVs intractable for known experimental techniques, or for numerical techniques such as conditional averaging. Ongoing research efforts at the HZDR aim to tackle this problem head-on by experimentally investigating the dynamics of oscillatory liquid metal thermal convection in a square cuboid with a large aspect ratio of \(\Gamma=25\), which is under construction at the time of writing this paper. Supplementary material and movies are available at...
**Acknowledgements.** The authors thank Felix Schindler for assisting in the calibration of the set-up, and Susanne Horn for fruitful discussions. **Funding.** This work is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant SH 405/16 and the Priority Programme SPP 1881 "Turbulent Superstructures" of the DFG under grants SH 405/7 and VO 2331/3. **Declaration of interests.** The authors report no conflict of interest. **Data availability statement.** The data that support the findings of this study are available upon request. **Author ORCIDs.** Andrei Teimurazov [https://orcid.org/0000-0002-2832-0335](https://orcid.org/0000-0002-2832-0335); Sanjay Singh [https://orcid.org/0000-0002-5305-7524](https://orcid.org/0000-0002-5305-7524); Sylvie Su [https://orcid.org/0000-0002-1794-1355](https://orcid.org/0000-0002-1794-1355); Sven Eckert [https://orcid.org/0000-0003-1639-5417](https://orcid.org/0000-0003-1639-5417); Olga Shishkina [https://orcid.org/0000-0002-6773-6464](https://orcid.org/0000-0002-6773-6464); Tobias Vogt [https://orcid.org/0000-0002-0022-5758](https://orcid.org/0000-0002-0022-5758). **Author contributions.** A. T. and S. S. contributed equally to this work and should be considered joint first authors. S. S., S. Su and A. T. analysed the data. The numerical (experimental) part of the work was done by the Göttingen (Dresden) group. Principal investigators of the project are O. S. and T. V. All authors contributed to the writing of the paper.
2306.07458
Accuracy-Time Tradeoffs in AI-Assisted Decision Making under Time Pressure
In settings where users both need high accuracy and are time-pressured, such as doctors working in emergency rooms, we want to provide AI assistance that both increases decision accuracy and reduces decision-making time. Current literature focusses on how users interact with AI assistance when there is no time pressure, finding that different AI assistances have different benefits: some can reduce time taken while increasing overreliance on AI, while others do the opposite. The precise benefit can depend on both the user and task. In time-pressured scenarios, adapting when we show AI assistance is especially important: relying on the AI assistance can save time, and can therefore be beneficial when the AI is likely to be right. We would ideally adapt what AI assistance we show depending on various properties (of the task and of the user) in order to best trade off accuracy and time. We introduce a study where users have to answer a series of logic puzzles. We find that time pressure affects how users use different AI assistances, making some assistances more beneficial than others when compared to no-time-pressure settings. We also find that a user's overreliance rate is a key predictor of their behaviour: overreliers and not-overreliers use different AI assistance types differently. We find marginal correlations between a user's overreliance rate (which is related to the user's trust in AI recommendations) and their personality traits (Big Five Personality traits). Overall, our work suggests that AI assistances have different accuracy-time tradeoffs when people are under time pressure compared to no time pressure, and we explore how we might adapt AI assistances in this setting.
Siddharth Swaroop, Zana Buçinca, Krzysztof Z. Gajos, Finale Doshi-Velez
2023-06-12T23:24:16Z
http://arxiv.org/abs/2306.07458v3
# Adaptive interventions for both accuracy and time

###### Abstract

In settings where users are both time-pressured and need high accuracy, such as doctors working in Emergency Rooms, we want to provide AI assistance that both increases accuracy and reduces time. However, different types of AI assistance have different benefits: some reduce time taken while increasing overreliance on AI, while others do the opposite. We therefore want to adapt what AI assistance we show depending on various properties (of the question and of the user) in order to best trade off our two objectives. We introduce a study where users have to prescribe medicines to aliens, and use it to explore the potential for adapting AI assistance. We find evidence that it is beneficial to adapt our AI assistance depending on the question, leading to good tradeoffs between time taken and accuracy. Future work would consider machine-learning algorithms (such as reinforcement learning) to automatically adapt quickly.

Machine Learning, ICML

## 1 Introduction

Artificially intelligent (AI) systems are being used to help humans in many settings make decisions or predictions, ranging from helping doctors in disease diagnosis (Musen et al., 2014) to helping judges make pretrial-release decisions (Green and Chen, 2019). Most studies focus on how different AI assistance types (for example, providing only an AI recommendation, or providing an AI recommendation and explanation) impact the overall accuracy of the human decision-maker, often finding that humans can overrely on an AI prediction (Bussone et al., 2015; Lai and Tan, 2019; Jacobs et al., 2021). Recent work suggests that the cognitive effort induced by an AI assistance may also play a part in the overreliance rate and accuracy achieved (Bucinca et al., 2021). Other studies look at how AI assistance impacts how long a human takes to make a decision, with mixed findings (Arshad et al., 2015; Fogliato et al., 2022). However, instead of focussing on a single metric, in many settings we may need to consider how AI assistance impacts multiple metrics. For example, a doctor might be under stringent time constraints, such as needing to address a long queue of patients at an Emergency Room, and needs to obtain as high an accuracy as possible given these constraints (Patel et al., 2008; Franklin et al., 2011; Rundo et al., 2020). In this paper, we focus on how AI assistance impacts these two metrics in detail: accuracy and time. Different types of AI assistance trade off accuracy and time differently, and previous results indicate that no single type leads to both optimal accuracy and minimal time taken. These tradeoffs may also be related to the cognitive effort or cost required (Bucinca et al., 2021; Vasconcelos et al., 2023), and to how much humans overrely on the AI prediction. AI assistance types that require more cognitive effort take longer to process, but can lead to higher accuracy. Conversely, AI assistance types requiring less cognitive effort, such as providing an AI recommendation, can lead to overreliance on the AI assistance (Bussone et al., 2015; Lai and Tan, 2019; Jacobs et al., 2021), which may be undesirable (and potentially lowers accuracy). Additionally, when shown an AI assistance type requiring higher cognitive effort, humans with a higher intrinsic motivation to think (their Need for Cognition (NFC) trait) may perform better (Bucinca et al., 2021).
In general, we therefore want to adapt what assistance is shown depending on the question and the human, and we explore this idea in this paper. On easier questions, where humans are less likely to overrely on AI recommendations or where AI accuracy is likely to be higher, we can reduce the cognitive effort required and reduce time without sacrificing accuracy. On harder questions, we may want humans to engage in more depth, increasing time in order to increase accuracy. Additionally, we may have to adapt our AI assistance to different people. For example, humans with higher NFC may see the benefits of increased cognitive effort more, and may have a higher tolerance for such AI assistances. Humans with higher skill (for example, more proficient doctors) may not require as much assistance from an AI. In this paper, we introduce a task where users are motivated to do well in both metrics. In our alien prescription task, participants have to prescribe medicine to a series of sick aliens, and have a set time to get through as many aliens as possible, while maintaining a high accuracy. Our pilot studies indicate that different AI assistance types trade off accuracy and time in different ways, and that it would be beneficial to adapt the assistance depending on various properties of the question, such as the question's difficulty. We also find that, as time progresses in the study, users start making more mistakes, and so we may also want to adapt the AI assistance depending on how much overall time has passed. We also expect each user's NFC and skill to be important to adapt to, but leave this for future work.

## 2 Related Works

**AI-assisted human decision making.** Initial studies expected AI+human teams to perform better than either alone (Kamar et al., 2012; Amershi et al., 2019); however, recent studies have found that this is not the case, with the accuracy of the team usually worse than the AI-only accuracy (Bussone et al., 2015; Lai and Tan, 2019; Green and Chen, 2019; Bansal et al., 2021). This may be because humans overrely on AI predictions, making mistakes by agreeing with a wrong AI prediction (even when the human may not have made the mistake on their own), instead of achieving complementary performance (Bussone et al., 2015; Lai and Tan, 2019; Jacobs et al., 2021). As a way to combat this, Bucinca et al. (2021) introduced cognitive forcing functions as interaction design interventions to reduce overreliance on AI. They showed that the update condition, in which participants are asked to make a decision on their own first before seeing an AI recommendation, reduced overreliance. But the update condition may also reduce appropriate reliance, as experts may pay less attention to the recommendation after spending effort and time to make the decision unassisted (Fogliato et al., 2022). We explore the update condition in our work, finding it can increase accuracy and reduce overreliance (although not eliminate it entirely).

**Adaptive interventions.** A few recent studies have considered adapting the AI assistance shown to users. Noti and Chen (2022) train a classifier on previous data to adaptively show AI recommendations. They find that they can increase AI+human performance by showing AI recommendations only on questions where the AI is more likely to be right. Ma et al. (2023) also find similar results in their setting. Bhatt et al.
(2023) consider adapting the form of AI assistance shown to different users' preferences, using contextual bandits to trade off accuracy against the cost of assistance. Overall, we believe these results show that adaptive interventions are a promising research direction, and we consider their potential to trade off accuracy and time. **Accuracy and time tradeoff.** To the best of our knowledge, no prior work has focussed explicitly on the tradeoff between accuracy and time in AI-assisted decision-making. Multiple studies, however, report response times of participants when shown different conditions or interventions, and the results present mixed empirical evidence. Some studies find that people spend more time on instances that they perceive as inherently more difficult (Arshad et al., 2015; Levy et al., 2021), but this additional time spent does not translate to increased accuracy. We also find this in the absence of any AI assistance. For clinical annotations, Levy et al. (2021) found that despite additional time spent on instances with incorrect AI recommendations, accuracy was lower compared to instances with correct recommendations. Fogliato et al. (2022) found that time spent on the task did not differ between the standard and update conditions, while we find that response time does increase in the update condition in our setting. ## 3 Experiments In this section, we describe the alien prescription task, the various design choices we made, and the procedures for the pilot studies we ran. **Alien prescription task design.** We designed a task where users are asked to prescribe medicines to sick aliens, which we base on previous work (Lage et al., 2019). Participants were shown a series of sick aliens for a fixed time of 15-20 minutes, corresponding to their 'medical shift', and asked to prescribe a single medicine to each alien. By asking participants to act like doctors, and by emphasizing the importance of treating patients correctly, we aimed to motivate participants to obtain a high accuracy, while getting through as many sick patients as possible during their medical shift. Figure 1 shows an example of a single alien task. Based on observed symptoms and the 'treatment plan' (which is a set of decision set rules unique to each alien), participants must decide a single medicine to give the alien. We chose decision sets as they are relatively easy for humans to parse (Lakkaraju et al., 2016). When we provide an AI assistance, we show it in a red box, as shown in Figure 1. This box can provide both an AI recommendation and explanation (explanations are always an intermediate symptom that leads to the recommended medicine), and is provided before or after the participant's initial decision. We expanded on the setup in Lage et al. (2019) in three ways. First, we always introduced intermediate symptoms to the task, which require participants to perform additional computation steps, and worked well as the explanation of an AI's recommendation. Second, we also allowed two possible correct medicines per alien. We defined the better medicine to be one that addressed more of the observed symptoms. Having a suboptimal medicine helps us to better analyze the role of overreliance on AI recommendations: suboptimal medicines are easily verified to be correct, and so participants can overrely on them more easily than overrelying on a wrong recommendation. Third, we introduced two different difficulties of questions: easy and hard. 
We designed these such that easy questions require less cognitive effort for a human to find the best medicine, while hard questions require more computation. We ensured that both easy and hard questions superficially look very similar to a human, by having a similar length of lines, number of lines, and other visual aspects. Further discussion and examples are in Appendix B. **Interventions.** We consider three AI assistance types. 1. _No-AI_: Do not provide any AI assistance. 2. _AI before_: An AI recommendation and explanation is provided to the participant along with the question, before the participant makes any decision. 3. _AI after (/update)_: The participant makes an initial decision without any AI assistance. They are then provided with an AI recommendation and explanation, and allowed to change their initial answer. In Appendix A, we also look at the effect of providing only the AI explanation (no AI recommendation) both before and after the user's initial decision. Our initial results indicate that this explanation-only assistance does not help participants, who often perform worse than if no AI assistance had been given. We believe this is due to the form of the explanation, and different explanations might lead to different results. Please see Appendix A for more discussion. For most of our results, we had a timer shown on the screen, indicating the time remaining for participants to answer questions (the length of their 'medical shift'). We also examine how participants perform when no timer is shown in Figure 2. **Procedure.** Before starting the main part of the study, participants had to accept a consent form, read instructions, and successfully complete three practice questions (for which they had two attempts, similar to Lage et al. (2019)). In the initial pilot studies, the participants then had 15 minutes to answer as many questions as possible. These pilot studies were split into two halves (the order of the halves was randomized): one with a particular AI assistance type, and one with no-AI. This allowed us to both change the difficulty of the question (as measured by performance without AI assistance), and measure the effects of a specific AI assistance. In our first pilot studies, we found that participants found the task too easy, with many participants spending less than 20 seconds per question while achieving 100% accuracy. We increased the difficulty (such as by increasing the number and length of lines, and increasing the number of observed symptoms (Lage et al., 2019)) until participants took about one minute per easy question. Approximately half of the questions participants saw were easy questions, and the other half hard questions. Figure 1: The alien prescription task, where participants must prescribe a single medicine. The information about the alien includes the alien's unique treatment plan (a set of rules) and the alien's observed symptoms. Participants have to use these observed symptoms and rules to prescribe a single medicine, such that only the observed symptoms and any potential intermediate (green) symptoms are used, and no other unobserved symptoms. When an AI assistance is shown, it is shown in a red box, like in this example. Here, the AI recommendation is the best possible (tranquilizers uses the most observed symptoms). Vitamins is also a correct medicine, but is suboptimal as it uses fewer observed symptoms. All other medicines are incorrect. 
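As a rough illustration of how prescriptions are graded in this task, the following sketch scores a chosen medicine against an alien's treatment plan. The rule and symptom encodings, and all function names, are hypothetical: they only mirror the description above (intermediate symptoms derived by rules, correct medicines whose rules use only known symptoms, and the best medicine being the one addressing the most observed symptoms), not the actual study implementation.

```python
def derivable_symptoms(observed, symptom_rules):
    """Forward-chain intermediate symptoms from the observed ones.

    symptom_rules: list of (premises, symptom) pairs (hypothetical encoding).
    """
    known = set(observed)
    changed = True
    while changed:
        changed = False
        for premises, symptom in symptom_rules:
            if symptom not in known and set(premises) <= known:
                known.add(symptom)
                changed = True
    return known


def score_prescription(medicine, observed, symptom_rules, medicine_rules):
    """Return 1.0 for the best medicine, 0.5 for a correct but suboptimal one, 0.0 otherwise."""
    known = derivable_symptoms(observed, symptom_rules)

    def coverage(med):
        # A rule for `med` is usable if all its premises are known symptoms;
        # its coverage is how many *observed* symptoms it uses.
        usable = [len(set(p) & set(observed))
                  for p, m in medicine_rules if m == med and set(p) <= known]
        return max(usable) if usable else None

    own = coverage(medicine)
    if own is None:
        return 0.0
    best = max(c for c in (coverage(m) for _, m in medicine_rules) if c is not None)
    return 1.0 if own >= best else 0.5
```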
Once we fixed the difficulty of the questions, later pilot studies were 20 minutes long, with AI assistance type randomly assigned to each question. By increasing the length of the study, we expected participants to get more tired during the study (and less willing to cognitively engage with questions). By randomly assigning AI assistance type to questions, we make the pilot studies more realistic, as eventually the AI assistance type will be adaptively assigned to each question. Our results in Figure 2 are from these later pilot studies. After the study, participants were shown a final screen where they were asked what their strategy was (and if it changed when there was an AI input), and for any other feedback. **Participants.** We ran six pilot studies on Prolific, with 20 participants each. Only English speakers were allowed to participate. In each study, we remove 3-7 participants from analysis, as they either failed the practice questions or let the timer run out without answering questions. Participants were paid at a rate of $12 per hour ($6 for the 30 minute studies, and $7 for the 35 minute studies; these times include 15 minutes for reading instructions and completing practice questions). We also incentivized participant performance by providing a bonus $3 reward to the top-performing participant in each study. If the participants failed the practice questions twice, the study ended early, with a smaller payment of $2. In general, we found that participants seemed to engage positively with the study, with many commenting that they had tried their best to treat their sick alien patients, and that they found the study was well-designed. **Design and analysis.** We report four metrics. 1. _Accuracy_: if participants chose the _best_ medicine for the alien, we gave them a score of 1, a _suboptimal_ (but correct) medicine has a score of 0.5, and a _wrong_ medicine has a score of 0. We calculate the average accuracy over questions for each participant, and report mean and standard error across participants. 2. _Response time_: we measure how long each participant takes to answer questions. We report the mean and standard error across participants. 3. _Overreliance_: we define overreliance to be the proportion of times a participant gave the same answer as the AI when the AI was wrong or suboptimal (Bucinca et al., 2021; Vasconcelos et al., 2023). 4. _Underreliance_: we define underreliance to be the proportion of times a participant gave a non-optimal answer when the AI was optimal. Figure 2 also shows how participant accuracy and response time change during the course of the study. To plot this figure, we find each participant's most recently answered question, and plot mean and standard error across the appropriate metric (accuracy or response time). We repeat this at equally-spaced intervals over the course of the study. When we show an AI assistance, there is a 60% chance that the AI recommends the best medicine, a 30% chance that the AI is suboptimal, and a 10% chance that the AI is wrong. The explanation is chosen such that it is faithful to the (correct or incorrect) AI recommendation. Overall, this typically gives an average AI-only accuracy of 0.79\(\pm\)0.03 (mean and standard error across participants). We purposefully ensured that the AI-only accuracy is similar to human-only accuracy. ## 4 Results Our results show that, in our alien prescription task, there is potential for adapting the type of AI assistance depending on the question, in order to achieve good time-accuracy tradeoffs. 
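To make the four metrics concrete, here is a minimal sketch of how they could be computed from per-question records; the record format is hypothetical and only reflects the definitions above.

```python
from statistics import mean

# Hypothetical record for one answered question:
#   score: 1.0 best, 0.5 suboptimal (correct), 0.0 wrong
#   seconds: response time
#   ai_shown: whether AI assistance was shown
#   ai_optimal: whether the AI recommended the best medicine
#   agreed_with_ai: whether the final answer matched the AI recommendation
def participant_metrics(records):
    accuracy = mean(r["score"] for r in records)
    response_time = mean(r["seconds"] for r in records)
    ai_not_optimal = [r for r in records if r["ai_shown"] and not r["ai_optimal"]]
    ai_optimal = [r for r in records if r["ai_shown"] and r["ai_optimal"]]
    # Overreliance: agreeing with a wrong or suboptimal AI recommendation.
    overreliance = mean(r["agreed_with_ai"] for r in ai_not_optimal) if ai_not_optimal else None
    # Underreliance: giving a non-optimal answer when the AI was optimal.
    underreliance = mean(r["score"] < 1.0 for r in ai_optimal) if ai_optimal else None
    return accuracy, response_time, overreliance, underreliance
```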
We leave a detailed look at the effect of cognitive effort and NFC for future work. The results we present are based on pilot studies, with 15-20 participants per study. The trends we see are promising for a future, larger study that we plan to run. Due to the small sample sizes, we do not report p-values. **Performance without any AI assistance.** We first look at participant performance with the No-AI condition, summarized in Table 1. We see that average accuracy is 0.80, and the average response time is 66 seconds. \begin{table} \begin{tabular}{l c c} \hline \hline **Difficulty** & **Avg acc** & **Avg time (s)** \\ \hline All & 0.80\(\pm\)0.05 & 66\(\pm\)7 \\ Easy & 0.91\(\pm\)0.02 & 60\(\pm\)7 \\ Hard & 0.69\(\pm\)0.08 & 76\(\pm\)8 \\ \hline \hline \end{tabular} \end{table} Table 1: Mean and standard error of accuracy and response time on the No-AI condition (\(n=14\) participants). Humans achieve an accuracy of 80% on average, taking 66 seconds per question. Humans have higher accuracy and are quicker on ‘easy’ questions. We can also see the difference between the two question difficulties, easy and hard. On easy questions, participants are marginally quicker (60 seconds) and have higher accuracy (0.91). On hard questions, participants are slower (76 seconds) and have lower accuracy (0.69). We also note that the standard deviation across participants is large, as some participants are quicker and/or better at the task than others. The standard deviation in accuracy is 0.18, and in response time is 26 seconds (note that we report standard error in Table 1, not standard deviation). Therefore, when comparing different AI assistance types, we look at how each AI assistance type impacts each participant separately (by comparing metrics to the No-AI condition), and then average across participants. **The AI before condition reduces time taken, but increases overreliance.** We see in Table 2 that the AI before condition does not impact average accuracy significantly, but does reduce time taken to answer questions. This is the case for both easy and hard questions. Participants' overreliance rate is 48\(\pm\)9% in this case, which is high. Conversely, participants' underreliance rate is 8\(\pm\)3%, which is very low, showing they usually trust the AI recommendation. **The AI after condition increases time taken, but also increases accuracy on hard questions.** The AI after condition, on average, increases time taken (by 9 seconds), and increases accuracy. This is because participants are able to spot mistakes they made, and correct them, without necessarily overrelying on the AI recommendation. In fact, overreliance rate is significantly lower than with the AI before condition, at 14\(\pm\)8%, while underreliance rate is similarly low at 12\(\pm\)4%. This indicates that participants use the AI input to cognitively engage more with the question, similar to results in previous studies (Bucinca et al., 2021; Vasconcelos et al., 2023). In fact, we found that participants never changed their answer to match the AI's recommendation when the AI was suboptimal or wrong, but did sometimes change their answer to agree with the AI recommendation when the AI was right. When we analyze these results in terms of easy and hard questions, we find that participant accuracy does not significantly increase on easy questions (likely as accuracy is already high). On hard questions, however, participant accuracy does increase. This indicates that the AI after condition is particularly beneficial when shown on hard questions. 
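The within-participant comparison above (each condition compared to the same participant's No-AI performance, then averaged across participants) can be sketched as follows, again under a hypothetical data format; the sketch assumes at least two participants per condition.

```python
from math import sqrt
from statistics import mean, stdev

def condition_effects(per_participant):
    """Mean change and standard error per condition, relative to No-AI.

    per_participant: list of dicts such as
        {"No-AI": (acc, secs), "AI before": (acc, secs), "AI after": (acc, secs)}
    """
    effects = {}
    for cond in ("AI before", "AI after"):
        pairs = [p for p in per_participant if cond in p and "No-AI" in p]
        d_acc = [p[cond][0] - p["No-AI"][0] for p in pairs]
        d_time = [p[cond][1] - p["No-AI"][1] for p in pairs]
        n = len(pairs)
        effects[cond] = {
            "change_in_accuracy": (mean(d_acc), stdev(d_acc) / sqrt(n)),
            "change_in_time": (mean(d_time), stdev(d_time) / sqrt(n)),
        }
    return effects
```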
**When a timer is shown, users maintain a fast pace of answering questions, but accuracy reduces later in the study.** We next look at how participants' accuracy and response time change during the course of the 20-minute studies. Figure 2 (top) shows results for when a timer is shown to the participants. We see that users maintain a fast pace of answering questions throughout the study (perhaps even getting marginally faster due to the time pressure). However, accuracy tends to decrease slowly. In the first 5 minutes of the study, participants are likely still learning how best to answer questions, which leads to the initial increase in accuracy. Overall, we see that the time pressure due to a timer leads to a reduction in accuracy over the duration of the study, as participants appear to feel pressured into answering questions as quickly as possible. We hypothesize that this reduction may also occur because participants are getting tired during the course of the study, and are no longer willing to expend as much cognitive effort. \begin{table} \begin{tabular}{l l l l} \hline \hline **Question** & **AI** & **Change in** & **Change in** \\ **difficulty** & **condition** & **avg acc** & **avg time (s)** \\ \hline All & AI before & -0.005\(\pm\)0.03 & -13\(\pm\)3 \\ & AI after & 0.09\(\pm\)0.03 & 9\(\pm\)1 \\ \hline Easy & AI before & -0.04\(\pm\)0.04 & -11\(\pm\)4 \\ & AI after & 0.02\(\pm\)0.03 & 8\(\pm\)1 \\ \hline Hard & AI before & 0.007\(\pm\)0.06 & -12\(\pm\)6 \\ & AI after & 0.17\(\pm\)0.07 & 9\(\pm\)2 \\ \hline \hline \end{tabular} \end{table} Table 2: Effect of AI assistance types, measured as within-participant differences to the No-AI condition (mean and standard error). The AI before condition (\(n=17\) participants) saves time without significantly impacting accuracy, and can be used on easy questions. The AI after condition (\(n=14\) participants) increases response time, but increases accuracy on hard questions. Figure 2: Plots of response time and accuracy during the course of the study. Top row: when a timer is shown to participants (\(n=14\) participants), they maintain a fast pace of answering questions (top left), but accuracy reduces later in the study (top right). Bottom row: When no timer is shown to participants (\(n=13\) participants), they maintain a constant accuracy (bottom right), but average response time is high (bottom left). **When there is no timer, users maintain a constant accuracy during the course of the study.** In Figure 2 (bottom), we see that when no timer is shown on the screen, participants maintain a constant accuracy. Figure 2 (bottom left) shows that question response time changes a lot during the study, and we believe this may be because of noise due to the small sample size. However, overall, response times are larger than when a timer is shown, and accuracy is lower. ## 5 Discussion Our results indicate that cleverly choosing AI assistance type can lead to good tradeoffs between time taken and overall accuracy. For instance, providing AI assistance before a user's initial decision can save time when the AI is likely to be right, particularly on easy questions. Providing AI assistance after a user's initial decision increases response time, but causes users to engage more with the question, increasing accuracy (without increasing overreliance). We can do this when the user's initial decision disagrees with the AI's recommendation. 
When time-pressured (with a timer shown on the screen), users drop in accuracy while maintaining a constant response time per question. We could try to slow down users in order to increase their accuracy. Using the results from these pilot studies, we intend to show the benefits of adapting AI assistance type on a larger study. We also expect that we should adapt to different humans based on their intrinsic motivation to think (their NFC trait), as well as potentially their overall skill level on the task. For example, humans with a higher NFC may react better to AI assistance types that prompt them to think more (Bucinca et al., 2021). They may also not make as many mistakes later on in the study. We leave a detailed analysis of this for future work. In our current pilot studies, any signal regarding participants with different NFC is too small to comment on. We hope that a larger-scale study would show any signal more. Eventually, in a more general setting, we may want to use reinforcement learning to adaptively choose the AI assistance type depending on properties of the question (such as difficulty) and the human (such as NFC). We note that our study is conducted in a low-stakes environment, and future work should look at how we can adaptively choose AI assistance type in a high-stakes environment too. Our results would not generalize if there are sufficiently different pressures and stakes in such settings. However, it should still be beneficial to adapt the AI assistance type. Future work could also look at the form of the explanation shown. In our case, the explanation was an intermediate symptom, and is verifiable by the participants. This verifiability might make it easier for participants to avoid overrelying on the AI assistance, especially in the AI after condition (Fok and Weld, 2023). Other forms of explanation may not be easily verifiable, and may lead to different results. ## Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. IIS-2107391. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2307.08442
Fast Algorithms for Energy Games in Special Cases
In this paper, we study algorithms for special cases of energy games, a class of turn-based games on graphs that show up in the quantitative analysis of reactive systems. In an energy game, the vertices of a weighted directed graph belong either to Alice or to Bob. A token is moved to a next vertex by the player controlling its current location, and its energy is changed by the weight of the edge. Given a fixed starting vertex and initial energy, Alice wins the game if the energy of the token remains nonnegative at every moment. If the energy goes below zero at some point, then Bob wins. The problem of determining the winner in an energy game lies in $\mathsf{NP} \cap \mathsf{coNP}$. It is a long standing open problem whether a polynomial time algorithm for this problem exists. We devise new algorithms for three special cases of the problem. The first two results focus on the single-player version, where either Alice or Bob controls the whole game graph. We develop an $\tilde{O}(n^\omega W^\omega)$ time algorithm for a game graph controlled by Alice, by providing a reduction to the All-Pairs Nonnegative Prefix Paths problem (APNP). Thus we study the APNP problem separately, for which we develop an $\tilde{O}(n^\omega W^\omega)$ time algorithm. For both problems, we improve over the state of the art of $\tilde O(mn)$ for small $W$. For the APNP problem, we also provide a conditional lower bound which states that there is no $O(n^{3-\epsilon})$ time algorithm for any $\epsilon > 0$, unless the APSP Hypothesis fails. For a game graph controlled by Bob, we obtain a near-linear time algorithm. Regarding our third result, we present a variant of the value iteration algorithm, and we prove that it gives an $O(mn)$ time algorithm for game graphs without negative cycles, which improves a previous upper bound.
Sebastian Forster, Antonis Skarlatos, Tijn de Vos
2023-07-17T12:38:06Z
http://arxiv.org/abs/2307.08442v4
# Fast Algorithms for Energy Games in Special Cases+ ###### Abstract In this paper, we study algorithms for special cases of energy games, a class of turn-based games on graphs that show up in the quantitative analysis of reactive systems. In an energy game, the vertices of a weighted directed graph belong either to Alice or to Bob. A token is moved to a next vertex by the player controlling its current location, and its energy is changed by the weight of the edge. Given a fixed starting vertex and initial energy, Alice wins the game if the energy of the token remains nonnegative at every moment. If the energy goes below zero at some point, then Bob wins. The problem of determining the winner in an energy game lies in \(\mathsf{NP}\cap\mathsf{coNP}\). It is a long standing open problem whether a polynomial time algorithm for this problem exists. We devise new algorithms for three special cases of the problem. The first two results focus on the single-player version, where either Alice or Bob controls the whole game graph. We develop an \(\tilde{O}(n^{\omega}W^{\omega})\) time algorithm for a game graph controlled by Alice, by providing a reduction to the All-Pairs Nonnegative Prefix Paths problem (APNP), where \(W\) is the maximum absolute value of any edge weight and \(\omega\) is the best exponent for matrix multiplication. Thus we study the APNP problem separately, for which we develop an \(\tilde{O}(n^{\omega}W^{\omega})\) time algorithm. For both problems, we improve over the state of the art of \(\tilde{O}(mn)\) for small \(W\). For the APNP problem, we also provide a conditional lower bound which states that there is no \(O(n^{3-\varepsilon})\) time algorithm for any \(\varepsilon>0\), unless the APSP Hypothesis fails. For a game graph controlled by Bob, we obtain a near-linear time algorithm. Regarding our third result, we present a variant of the value iteration algorithm, and we prove that it gives an \(O(mn)\) time algorithm for game graphs without negative cycles, which improves a previous upper bound. The all-Bob algorithm is randomized; all other algorithms are deterministic. ## 1 Introduction Energy games belong to a class of turn-based games on graphs that show up in the quantitative analysis of reactive systems. A game graph can possibly represent a scheduling problem, where vertices are the configurations of the system and edges carry positive or negative values representing the evolution of resources. Thus, in this model resources can be consumed or produced. The energy games problem was introduced in the early 2000s [14, 7], but had also been implicitly studied before due to its ties to mean-payoff games [22]. Energy games have applications in, among others, computer aided verification and automata theory [14, 6, 13], and in online and streaming problems [32]. From a computational perspective, the problem of determining the winner in an energy game lies in \(\mathsf{NP}\cap\mathsf{coNP}\). It is an intriguing open problem whether a polynomial time algorithm for this problem exists. An energy game is played by two players, say Alice and Bob, on a _game graph_, which is a weighted directed graph such that each vertex is either controlled by Alice or Bob. The game starts by placing a token with an initial energy on a starting vertex. The game is played in rounds, and every time the token is located at a vertex controlled by Alice, Alice chooses the next location of the token among the outgoing edges; otherwise Bob chooses the next move. 
The token has an _energy level_ (in the beginning this is equal to the initial energy) and every time it traverses an edge, the weight of the edge is added to the energy level (a negative weight amounts to a reduction of the energy level). The objectives of the players are as follows: Alice wants to minimize the initial energy that is necessary to keep the energy level nonnegative at all times, whereas Bob wants to maximize this value (and possibly drive it to \(\infty\)). The computational problem is to determine for each vertex the minimum initial energy such that Alice can guarantee against all choices of Bob that the energy level always stays nonnegative. Energy games are a generalization of parity games [25, 10], polynomial-time equivalent to mean-payoff games [7, 10], and a special case of simple stochastic games [32]. Recent progress on parity games yielded several quasipolynomial time algorithms [12], but the corresponding techniques seem not to carry over to energy and mean-payoff games [21]. Consequently, the complexity of energy games is still "stuck" at pseudopolynomial [10] or subexponential time [5]. Hence, in this paper we focus on interesting special cases (which are non-trivial problems) that admit fast algorithms. Two of these cases are game graphs where all vertices are controlled by one player, and the third case is game graphs with no negative cycles. **All-Pairs Nonnegative Prefix Paths.** We also study another reachability problem with energy constraints [23, 18], the _All-Pairs Nonnegative Prefix Paths (APNP) problem_. In this problem, the goal is to find for every pair of vertices whether there exists a path \(P\) between them such that the weight of each prefix of \(P\) is nonnegative. We use this problem to obtain the result for the special case where Alice controls the whole game graph, since the two problems are closely related. Dorfman, Kaplan, Tarjan, and Zwick [18] solve the more general problem, where for each pair of vertices the goal is to find the path of maximum weight among all such options; this problem naturally generalizes APSP. **Energy Games.** The state-of-the-art algorithms for energy games are either deterministic with pseudopolynomial running time [9, 19] or randomized with subexponential running time [5]. Special cases of the energy games have been studied by Chatterjee, Henzinger, Krinninger, and Nanongkai [16]. They present a variant of the value iteration algorithm of [10] whose running time is parameterized by a sorted list containing all possible minimum energy values. This does not improve the general case, as in the worst case this list can be of pseudopolynomial size. However, it does give a faster running time if the weights adhere to certain restrictions. Moreover, they develop a scaling algorithm whose running time depends on a lower bound \(P\) on the _penalty_ of the game. For the special case where there are no negative cycles in the game graph, the penalty can be set to \(P=W\), and the scaling algorithm of [16] solves the problem in \(O(mn\log W)\) time. For another special case where the whole game graph is controlled by Alice, Brim and Chaloupka [9] provided an \(\tilde{O}(mn)\) time algorithm as a subroutine for the two-player version. **Mean-Payoff Games.** In a mean-payoff game, the objective of Alice is to maximize the average of the weights of edges traversed so far, whereas Bob's objective is to minimize this mean payoff. It is well known that energy games can be reduced to mean-payoff games [7], and any mean-payoff game instance can be solved by solving \(O(\log(nW))\) energy games with maximal weight \(nW\) [10].2 
Thus, any of the aforementioned algorithms for solving energy games also yields an algorithm for solving mean-payoff games at the expense of at most an additional factor of \(O(n\log(nW))\) in the running time. Zwick and Paterson [32] provided the first pseudopolynomial time algorithm that computes all the mean-payoff values, with \(O(mn^{3}W)\) running time. Later, the running time was improved by Brim, Chaloupka, Doyen, Gentilini, and Raskin [10] to \(O(mn^{2}W\log(nW))\), using their reduction to energy games. The state-of-the-art algorithm for solving a mean-payoff game is due to Comin and Rizzi [17], which runs in \(O(mn^{2}W)\) time. Footnote 2: Unless stated otherwise, we always consider the versions of the games where we compute the mean-payoff value/minimum initial energy for _all_ vertices. ### Our Results and Techniques **All-Pairs Nonnegative Prefix Paths.** The version of the All-Pairs Nonnegative Prefix Paths (APNP) problem where we want to find the path of maximum weight [18] naturally generalizes the All-Pairs Shortest Paths (APSP) problem. The APSP Hypothesis states that there is no \(O(n^{3-\varepsilon})\) time algorithm for the APSP problem, for any \(\varepsilon>0\). However, this version of APNP is more than what is necessary for the application to energy games. We show that the weaker version, which only computes reachability (as APNP has been defined), also does not allow for an \(O(n^{3-\varepsilon})\) time algorithm for any \(\varepsilon>0\), under the APSP Hypothesis. **Theorem 1.1**.: _Unless the APSP Hypothesis fails, there is no \(O(n^{3-\varepsilon})\) time algorithm that solves the All-Pairs Nonnegative Prefix Paths problem, for any \(\varepsilon>0\)._ We parameterize by the maximum absolute value \(W\) of any edge weight, and we obtain an algorithm with a faster running time for small values of \(W\). **Theorem 1.2**.: _There exists a deterministic algorithm that, given a graph \(G=(V,E,w)\) with edge weights in the interval \([-W,W]\), solves the All-Pairs Nonnegative Prefix Paths problem in \(\tilde{O}(n^{\omega}W^{\omega})\) time._ **All-Alice.** Our first contribution regarding the special cases of energy games concerns the _all-Alice case_ in which all vertices are controlled by Alice. Note that if we fix a strategy for Bob in any game graph, this can be seen as an all-Alice instance. **Theorem 1.3**.: _There exists a deterministic algorithm that, given a game graph \(G=(V,E,w)\) in which all vertices are controlled by Alice, computes the minimum sufficient energy of all vertices in \(\tilde{O}(n^{\omega}W^{\omega})\) time._ Note that the aforementioned reduction from energy games to mean-payoff games always introduces Bob vertices. Thus, algorithms for the all-Alice mean-payoff decision problem cannot be leveraged by this reduction to compute the minimum energies in the all-Alice case. Our approach for the all-Alice case consists of two steps. In the first step, we identify the set \(Z\) of all vertices such that minimum initial energy \(0\) suffices, by using Theorem 1.2. In the second step, we compute the paths of least energy reaching any vertex in \(Z\). For small values of \(W\), this improves on the state-of-the-art \(\tilde{O}(mn)\) algorithm [9]. **All-Bob.** Our second contribution regarding the special cases of energy games is a faster algorithm for the _all-Bob case_ in which all vertices are controlled by Bob. Note that if we fix a strategy for Alice in any game graph, this can be seen as an all-Bob instance. 
**Theorem 1.4**.: _There exists a randomized (Las Vegas) algorithm that, given a game graph \(G=(V,E,w)\) in which all vertices are controlled by Bob, computes the minimum sufficient energy of all vertices, and with high probability the algorithm takes \(O(m\log^{2}n\log nW\log\log n)\) time._ To the best of our knowledge, the fastest known algorithm for the all-Bob case is implied by the reduction to the mean-payoff decision problem and has a running time of \(\tilde{O}(mn\log^{2}W)\). This comes from \(\tilde{O}(n\log W)\) calls to the state-of-the-art negative-cycle detection algorithm [4, 11]. Our approach for the all-Bob case consists of two steps. In the first step, we run a negative-cycle detection algorithm to remove all vertices reaching a negative cycle. In the second step, we add an artificial sink to the graph with an edge from every vertex to the sink, and we compute the shortest path of every vertex to the sink using a single-source shortest paths (SSSP) algorithm. Note that this construction is very close to Johnson's method for computing suitable vertex potentials [24]. Further note that, since energy games are not symmetric for Alice and Bob, our near-linear time all-Bob algorithm has no implications for the all-Alice case. **No Negative Cycles.** Finally, we give an improved algorithm for the special case where there are no negative cycles. **Theorem 1.5**.: _There exists a deterministic algorithm that, given a game graph \(G=(V,E)\) without negative cycles, computes the minimum sufficient energy of all vertices in \(O(mn)\) time._ To the best of our knowledge, the fastest known algorithm for this special case has a running time of \(O(mn\log W)\), obtained by running the above-mentioned algorithm of Chatterjee, Henzinger, Krinninger, and Nanongkai [16] with penalty \(P=W\). We use a new variant of the value iteration algorithm where the energy function after \(i\) steps corresponds to the minimum energy function in an \(i\)_-round game_. A similar variant has been used by Chatterjee, Doyen, Randour, and Raskin [15] for mean-payoff games. We adapt this algorithm and provide the necessary analysis to use it for energy games. An \(i\)-round game is a finite version of the energy game, where a token is passed for only \(i\) rounds. In this version, the goal is to find the initial energy that Alice needs in order to keep the energy level nonnegative for these \(i\) rounds. Then we show that in graphs without negative cycles, the infinite game is equivalent to the \(n\)-round game. **Structure of the paper.** In the next section, we provide some preliminaries, including the formal definition of an energy game. In Section 3, we study the All-Pairs Nonnegative Prefix Paths problem, and we present an algorithm for the special case that the edge weights are in \(\{-1,0,+1\}\), an algorithm for general edge weights, and a lower bound. Next, in Section 4, we consider the all-Alice case by reducing this problem to the All-Pairs Nonnegative Prefix Paths problem. In Section 5, we consider the all-Bob case, and finally in Section 6, we consider game graphs without negative cycles. ## 2 Preliminaries **Graphs.** Given a directed graph \(G=(V,E,w)\), we denote by \(n=|V|\) the number of vertices, by \(m=|E|\) the number of edges, and by \(W\) the maximum absolute value of any edge weight. Also, we denote \(N^{+}(v)\) for the _out-neighborhood_ of \(v\), i.e., \(N^{+}(v):=\{u\in V:(v,u)\in E\}\). Further, we denote \(\deg^{+}(v)\) for the _out-degree_ of \(v\), i.e., \(\deg^{+}(v):=|N^{+}(v)|\). 
Similarly, \(N^{-}(v)\) and \(\deg^{-}(v)\) denote the _in-neighborhood_ and _in-degree_, respectively. A _path_ \(P\) is a sequence of vertices \(u_{0}u_{1}\cdots\) such that \((u_{i},u_{i+1})\in E\) for every \(i\geq 0\). We say a path is _finite_ if it contains a finite number of vertices (counted with multiplicity). We say a path is _simple_ if each vertex appears at most once. A _lasso_ is a path of the form \(u_{0}u_{1}\cdots u_{j}u_{i}\), where the vertices \(u_{0},\ldots,u_{j}\) are pairwise distinct and \(i<j\). In other words, it is a simple path leading to a cycle. A _nonnegative prefix path_ is a path \(P=u_{0}u_{1}\cdots\) such that \(\sum_{j=0}^{i-1}w(u_{j},u_{j+1})\geq 0\) for all \(1\leq i\leq|P|\). Further, we denote the weight of a path \(P=u_{0}u_{1}\cdots\) by \(w(P):=\sum_{j=0}^{|P|-1}w(u_{j},u_{j+1})\). For a fixed path \(P=u_{0}u_{1}\cdots\), the energy level \(e(u_{i})\) of a vertex \(u_{i}\) in \(P\) is equal to \(\sum_{j=0}^{i-1}w(u_{j},u_{j+1})\). That is, the sum of all the weights along \(P\) until \(u_{i}\). Let \(G=(V,E,w)\) be a directed graph with edge weights \(-1\) and \(+1\), and let \(s,t\in V\) be two vertices of \(G\). Then, a _Dyck path from \(s\) to \(t\)_ is a nonnegative prefix path from \(s\) to \(t\) of total weight zero [8]. For a graph \(H\), we refer to the corresponding functions by using \(H\) as a subscript (e.g., we use the notation \(w_{H}(\cdot)\) for the weight function of \(H\)). **Energy Games.** An energy game is an infinite duration game played by two players, Alice and Bob. The game is played on a _game graph_, which is a weighted directed graph \(G=(V,E,w)\), where each vertex has at least one outgoing edge. The weights are integers and lie in the range \(\{-W,-W+1,\ldots,W-1,W\}\). The set of vertices is partitioned into two sets \(V_{A}\) and \(V_{B}\), controlled by Alice and Bob respectively. Furthermore, we are given a starting vertex \(s\in V\), and initial energy \(e_{0}\geq 0\). We start with position \(v_{0}=s\). After the \(i_{\text{th}}\) round, we are at a position \(v_{i}\in V\) and have energy \(e_{i}\). In the \(i_{\text{th}}\) round, if \(v_{i-1}\in V_{A}\) (\(v_{i-1}\in V_{B}\)) then Alice (Bob) chooses a next vertex \(v_{i}\in N^{+}(v_{i-1})\) and the energy changes to \(e_{i}=e_{i-1}+w(v_{i-1},v_{i})\). The game ends when \(e_{i}<0\), in which case we say that Bob wins. If the game never ends, namely, \(e_{i}\geq 0\) for all \(i\geq 0\), we say that Alice wins. The goal is to find the minimum initial energy \(e_{0}\geq 0\) such that Alice wins when both players play optimally. Note that allowing \(e_{0}=\infty\) means that such an energy always exists. To make this goal more formal, we have to introduce _strategies_. A strategy for Alice (Bob) tells us, given the current position \(v_{i}\in V_{A}\) (\(v_{i}\in V_{B}\)) and the history of the game \(v_{0},\ldots,v_{i}\), where to move next. It turns out that we can restrict ourselves to _positional strategies_ [20, 7], which are deterministic and do not depend on the history of the game. We denote a positional strategy of Alice by \(\sigma\colon V_{A}\to V\) where \(\sigma(v)\in N^{+}(v)\) for \(v\in V_{A}\), and a positional strategy of Bob by \(\tau\colon V_{B}\to V\) where \(\tau(v)\in N^{+}(v)\) for \(v\in V_{B}\). For any pair of strategies \((\sigma,\tau)\) we define \(G(\sigma,\tau)\) to be the subgraph \((V,E^{\prime})\) corresponding to these strategies, where \(E^{\prime}=\{(v,\sigma(v)):v\in V_{A}\}\cup\{(v,\tau(v)):v\in V_{B}\}\). 
Note that in this graph each vertex has exactly one out-neighbor. Let \(P_{i}\) be the unique path \(s=u_{0},u_{1},\ldots,u_{i}\) in \(G(\sigma,\tau)\) of length \(i\) originating at \(s\). Then at vertex \(s\) with initial energy \(e_{0}\) and with these strategies, Alice wins if \(e_{0}+w(P_{i})\geq 0\) for all \(i\geq 0\), and Bob wins if \(e_{0}+w(P_{i})<0\) for at least one \(i\geq 0\). The _minimum sufficient energy at \(s\) with respect to \(\sigma\) and \(\tau\)_ is the minimum energy such that Alice wins, namely \(e_{G(\sigma,\tau)}(s):=\max\{0,-\inf_{i\geq 0}w(P_{i})\}\). Finally, we define the _minimum sufficient energy_ at \(s\) as follows: \[e_{G}^{*}(s):=\min_{\sigma}\max_{\tau}e_{G(\sigma,\tau)}(s),\] where the minimization and the maximization are over all the positional strategies \(\sigma\) of Alice and \(\tau\) of Bob, respectively. We omit the subscript \(G\), and use \(e_{\sigma,\tau}(s)\) instead of \(e_{G(\sigma,\tau)}(s)\), whenever this is clear from the context. By Martin's determinacy theorem [27], we have that \(\min_{\sigma}\max_{\tau}e_{\sigma,\tau}(s)=\max_{\tau}\min_{\sigma}e_{\sigma, \tau}(s)\), thus the outcome is independent of the order in which the players pick their strategy. Now we can define _optimal strategies_ as follows. A strategy \(\sigma^{*}\) is an optimal strategy for Alice, if \(e_{\sigma^{*},\tau}(s)\leq e^{*}(s)\) for any strategy \(\tau\) of Bob. Similarly, \(\tau^{*}\) is an optimal strategy for Bob, if \(e_{\sigma,\tau^{*}}(s)\geq e^{*}(s)\) for any strategy \(\sigma\) of Alice. An _energy function_ is a function \(e\colon V\to\mathbb{Z}_{\geq 0}\cup\{\infty\}\). The function \(e_{G}^{*}(\cdot)\) (or \(e^{*}(\cdot)\)) as defined above, is the _minimum sufficient energy function_. ## 3 All-Pairs Nonnegative Prefix Paths Problem In this section, we study the All-Pairs Nonnegative Prefix Paths (APNP) problem. The goal of this problem is to find for every pair of vertices whether there exists a nonnegative prefix path between them. A similar problem is the _All-Pairs Dyck-Reachability problem_, where the goal is to find for every pair of vertices whether there exists a Dyck path between them (given that the edge weights are in \(\{-1,+1\}\)). Furthermore, another standard problem is the _transitive closure problem_, which asks to find for every pair of vertices whether there exists a path between them. Bradford [8] provided an \(\tilde{O}(n^{\omega})\) time algorithm for the All-Pairs Dyck-Reachability problem. Moreover, the transitive closure problem admits an \(\tilde{O}(n^{\omega})\) algorithm [2]. **Theorem 3.1**.: _There exists a deterministic algorithm that, given a graph \(G=(V,E,w)\) with edge weights in \(\{-1,1\}\), solves the All-Pairs Dyck-Reachability problem in \(\tilde{O}(n^{\omega})\) time._ Our approach for the APNP problem consists of two stages. At first, we solve the APNP problem for the special case where the edge weights are from the set \(\{-1,0,+1\}\), by exploiting the algorithm of [8] for the All-Pairs Dyck-Reachability problem. Afterwards, we extend our algorithm to work with general weights, by showing that a reduction used in [3] preserves the properties we need. In the end of the section, we also present a conditional lower bound for the APNP problem under the APSP Hypothesis, which is one of the main hypotheses in fine-grained complexity. ### All-Pairs Nonnegative Prefix Paths with edge weights in \(\{-1,0,+1\}\) Consider a graph \(G=(V,E)\) with edge weights \(-1\) and \(+1\). 
By definition, we have that any Dyck path is also a nonnegative prefix path. However, the opposite is not necessarily true. Recall that nonnegative prefix paths allow the energy level of their last vertex to be a strictly positive value, while in Dyck paths this value must be zero. This implies that an All-Pairs Dyck-Reachability algorithm does not trivially give us an All-Pairs Nonnegative Prefix Paths algorithm. Nevertheless, we show how to overcome this issue and we use an All-Pairs Dyck-Reachability algorithm as a subroutine in order to solve the All-Pairs Nonnegative Prefix Paths problem. **Algorithm for the \(\{-1,0,+1\}\) case.** Consider a directed graph \(G=(V,E,w)\), with edge weights in \(\{-1,0,+1\}\). At the beginning of the algorithm, we construct a graph \(G_{2}\) as follows. 1. Initially, we create a new graph \(G_{1}=(V_{1},E_{1},w)\) by replacing every edge of zero weight with an edge of weight \(+1\) and an edge of weight \(-1\). Specifically, for each vertex \(u\) with at least one outgoing edge \((u,v)\in E\) with \(w(u,v)=0\), we add a new vertex \(u^{\prime}\), and add an edge \((u,u^{\prime})\) with \(w(u,u^{\prime})=+1\). Next, for each edge \((u,v)\in E\) with \(w(u,v)=0\), we remove the edge \((u,v)\), and add the edge \((u^{\prime},v)\) with weight \(-1\).3 Footnote 3: Note that the naive approach of replacing each edge of zero weight by a two-edge path through its own new vertex, with weights \(+1\) and \(-1\), potentially blows up the number of vertices to \(\Omega(m)\). In turn, since the running time depends on the number of vertices, this translates to a blow-up of the running time. 2. Next, we run on \(G_{1}\) the algorithm of Theorem 3.1, which solves All-Pairs Dyck-Reachability in \(\tilde{O}(n^{\omega})\) time for edge weights in \(\{-1,+1\}\). 3. Finally, we create another new graph \(G_{2}=(V,E_{2})\) with the original vertex set and an edge set \(E_{2}\) defined as follows. The set \(E_{2}\) contains an edge \((u,v)\in V\times V\) if and only if there is a Dyck path from \(u\) to \(v\) in \(G_{1}\), or \((u,v)\in E\) and \(w(u,v)=1\) in \(G\). In the end, we run on \(G_{2}\) a transitive closure algorithm, and we return that there is a nonnegative prefix path in \(G\) if and only if there is a path in \(G_{2}\). Notice that graphs \(G\) and \(G_{1}\) are weighted, while \(G_{2}\) is unweighted. **Analysis of the algorithm.** The following observation shows that the replacement of zero weight edges is valid, in the sense that nonnegative prefix paths of total weight zero4 in \(G\) correspond to Dyck paths in \(G_{1}\) and vice versa. Moreover, we prove the claim that the transitive closure problem in \(G_{2}\) is equivalent to the All-Pairs Nonnegative Prefix Paths problem in \(G\). Footnote 4: Observe that a Dyck path is a nonnegative prefix path of total weight zero consisting only of edges \(-1\) and \(+1\). Thus, we avoid using the term _Dyck path_ for \(G\) because it may contain edges of weight zero. **Observation 3.2**.: _For every pair of vertices \(u,v\in V\), there exists a nonnegative prefix path of total weight zero from \(u\) to \(v\) in \(G\) if and only if there exists a Dyck path from \(u\) to \(v\) in \(G_{1}\)._ **Lemma 3.3**.: _For every pair of vertices \(u,v\in V\), there exists a nonnegative prefix path from \(u\) to \(v\) in \(G\) if and only if there exists a path from \(u\) to \(v\) in \(G_{2}\)._ Proof.: Assume that there exists a nonnegative prefix path \(\pi\) from \(u\) to \(v\) in \(G\). Let \(a\) be the first vertex after \(u\) along \(\pi\) with a minimum energy level. 
Initially, we show that the edge \((u,a)\) appears in \(G_{2}\). Since \(\pi\) is a nonnegative prefix path, we have that \(e(u)\leq e(a)\). If \(e(u)<e(a)\), then there must be an edge \((u,a)\) in \(G\) with weight \(+1\). Also if \(e(u)=e(a)\), then the subpath of \(\pi\) from \(u\) to \(a\) is a nonnegative prefix path of total weight zero. Then by Observation 3.2, the subpath of \(\pi\) from \(u\) to \(a\) is a Dyck path in \(G_{1}\). Therefore, in both cases we have added the edge \((u,a)\) in \(G_{2}\). As the vertex \(a\) has a minimum energy level, we can apply the same argument iteratively starting from \(a\), to conclude that there exists a path from \(u\) to \(v\) in \(G_{2}\). Assume now that there exists a path \(\pi\) from \(u\) to \(v\) in \(G_{2}\). By construction, the edges of \(\pi\) correspond either to edges in \(G\) with weight \(+1\) or to Dyck paths in \(G_{1}\). By Observation 3.2, these Dyck paths in \(G_{1}\) correspond to nonnegative prefix paths of total weight zero in \(G\). Since positive edges increase the energy level and nonnegative prefix paths at least maintain the energy level, we conclude that there exists a nonnegative prefix path from \(u\) to \(v\) in \(G\). **Lemma 3.4**.: _There exists a deterministic algorithm that, given a graph \(G=(V,E,w)\) with edge weights in \(\{-1,0,+1\}\), solves the All-Pairs Nonnegative Prefix Paths problem in \(\tilde{O}(n^{\omega})\) time._ Proof.: The number of vertices of \(G_{1}\) is \(O(n)\) by construction, where \(n\) is the initial number of vertices in \(G\). Hence, the construction of \(G_{2}\) runs in \(\tilde{O}(n^{\omega})\) time. Moreover, the transitive closure problem in \(G_{2}\) can be solved in \(\tilde{O}(n^{\omega})\) time as well [2]. Thus by Lemma 3.3, the claim follows. ### All-Pairs Nonnegative Prefix Paths with general edge weights We now extend Lemma 3.4 to graphs with general edge weights, at the cost of an extra factor of \(W^{\omega}\) in the running time. The idea is to use the reduction by Alon, Galil, and Margalit [3], who reduce the All-Pairs Shortest Paths (APSP) problem with general edge weights to the special case where the edge weights are in \(\{-1,0,+1\}\). We present the reduction for completeness, and we prove that the same reduction also preserves the properties that we need for the All-Pairs Nonnegative Prefix Paths problem. **Reduction from general weights to \(\{-1,0,+1\}\) [3].** Given a graph \(G=(V,E,w)\) with weights in the interval \([-W,W]\), we create another graph \(G^{\prime}\) with weights only in \(\{-1,0,+1\}\), as follows. For every vertex \(v\in V\) in \(G\), we create \(2W+1\) vertices \(\{v^{i}\}_{i=-W}^{W}\) in \(G^{\prime}\). We say that vertex \(v^{0}\) of \(G^{\prime}\) is the origin of vertex \(v\). Then, we add in \(G^{\prime}\) an edge \((v^{i+1},v^{i})\) of weight \(-1\), for every \(-W\leq i\leq-1\), and an edge \((v^{i-1},v^{i})\) of weight \(1\), for every \(1\leq i\leq W\). Moreover, for every edge \((u,v)\) of weight \(k\) in \(G\), we add an edge \((u^{k},v^{0})\) of zero weight in \(G^{\prime}\). **Theorem 1.2**.: _There exists a deterministic algorithm that, given a graph \(G=(V,E,w)\) with edge weights in the interval \([-W,W]\), solves the All-Pairs Nonnegative Prefix Paths problem in \(\tilde{O}(n^{\omega}W^{\omega})\) time._ Proof.: The idea is to apply the reduction mentioned above and use the algorithm of Lemma 3.4 in \(G^{\prime}\). 
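Concretely, the reduction invoked in this proof can be sketched as follows; the encoding of the copies of a vertex \(v\) as pairs \((v,i)\) is ours and is only meant to illustrate the construction.

```python
def reduce_to_small_weights(n, edges, W):
    """Alon-Galil-Margalit style reduction: weights in [-W, W] -> {-1, 0, +1}.

    edges: list of (u, v, k) with integer weights |k| <= W; vertices are 0..n-1.
    Vertex v of G becomes the 2W+1 vertices (v, -W), ..., (v, W) of G', with
    (v, 0) as the origin of v.  Returns the edge list of G'.
    """
    g_prime = []
    for v in range(n):
        for i in range(-W, 0):            # chain of -1 edges below the origin
            g_prime.append(((v, i + 1), (v, i), -1))
        for i in range(1, W + 1):         # chain of +1 edges above the origin
            g_prime.append(((v, i - 1), (v, i), +1))
    for u, v, k in edges:                 # an edge of weight k becomes a 0-edge
        g_prime.append(((u, k), (v, 0), 0))
    return g_prime
```

A nonnegative prefix path from \(u\) to \(v\) in \(G\) then corresponds to one from \((u,0)\) to \((v,0)\) in \(G^{\prime}\), which is exactly the claim proved next.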
Then, we claim that there exists a nonnegative prefix path from \(u\) to \(v\) in \(G\) if and only if there exists a nonnegative prefix path from \(u^{0}\) to \(v^{0}\) in \(G^{\prime}\). Regarding the running time, since the number of vertices of the new graph \(G^{\prime}\) after the reduction becomes \(\Theta(nW)\), the running time of the algorithm becomes \(\tilde{O}((nW)^{\omega})\). It remains to prove the correctness of the algorithm. Let \(\pi\) be a nonnegative prefix path from \(u\) to \(v\) in \(G\). We construct a path \(\pi^{\prime}\) from \(u^{0}\) to \(v^{0}\) in \(G^{\prime}\) as follows. For every edge \((a,b)\in\pi\) of weight \(k\), we add to \(\pi^{\prime}\) the unique subpath from \(a^{0}\) to \(b^{0}\) of weight \(k\) in \(G^{\prime}\), which exists by construction. Since \(\pi\) is a nonnegative prefix path in \(G\), and every subpath we add to \(\pi^{\prime}\) consists either only of edges with weight in \(\{-1,0\}\) or only of edges with weight in \(\{0,+1\}\), we can infer that \(\pi^{\prime}\) is a nonnegative prefix path from \(u^{0}\) to \(v^{0}\) in \(G^{\prime}\). For the other direction, let \(\pi^{\prime}\) be a nonnegative prefix path from \(u^{0}\) to \(v^{0}\) in \(G^{\prime}\). We construct a path \(\pi\) from \(u\) to \(v\) in \(G\) as follows. Let \(a^{0}\) be the first vertex after \(u^{0}\) along \(\pi^{\prime}\) such that, \(a^{0}\) is the origin vertex of a different vertex than \(u\) (i.e., \(a^{0}\) is the origin of a vertex \(a\neq u\)). By construction, there exists an edge \((u,a)\) of weight \(k\) in \(G\), where \(k\) is the weight of the subpath from \(u^{0}\) to \(a^{0}\) in \(\pi^{\prime}\). We add the edge \((u,a)\) in \(\pi\), and continue with the construction of \(\pi\) by applying the same argument iteratively starting from \(a^{0}\) until we reach \(v^{0}\). Since \(\pi^{\prime}\) is a nonnegative prefix path in \(G^{\prime}\), and each prefix of \(\pi\) corresponds to a prefix in \(\pi^{\prime}\), we can infer that \(\pi\) is a nonnegative prefix path from \(u\) to \(v\) in \(G\). Therefore, the pair of vertices \(\{u^{0},v^{0}\}\) in \(G^{\prime}\) contains the information for the pair of vertices \(\{u,v\}\) in \(G\), and so the claim follows. ### Lower bound for All-Pairs Nonnegative Prefix Paths We prove a lower bound on the running time of All-Pairs Nonnegative Prefix Paths problem under the APSP Hypothesis. The APSP Hypothesis is an assertions that the All-Pairs Shortest Paths (APSP) problem cannot be solved in truly subcubic \(O(n^{3-\varepsilon})\) time, for any \(\varepsilon>0\). Vassilevska Williams and Williams [29] proved that APSP and Negative Triangle are equivalent under subcubic reductions. The Negative Triangle problem is defined as follows. Given a graph \(G=(V,E,w)\), the goal is to find three vertices \(a,b,c\) such that \(w(a,b)+w(b,c)+w(c,a)<0\), that is, the vertices \(a,b,c\) form a negative weight cycle. Recently, a reduction from the Negative Triangle problem to the \(h\)-hop-bounded s-t path problem was given by Polak and Kociumaka [25], in order to prove a hardness result for the latter. Motivated by this reduction, we also reduce the Negative Triangle problem to the All-Pairs Nonnegative Prefix Paths problem to obtain a hardness result for the All-Pairs Nonnegative Prefix Paths problem, as shown in Theorem 1.1. We first provide an auxiliary lemma, which we also use later in Lemma 4.1. 
**Lemma 3.5**.: _Given a graph \(G=(V,E,w)\), let \(C\) be a nonnegative weight cycle in \(G\) (i.e., \(w(C)\geq 0\)). Then, there is a vertex \(u\in C\) in the cycle, such that there exists a nonnegative prefix path in \(G\) from \(u\) to itself along \(C\)._ Proof.: Let \(Q\subseteq C\) be a subpath of \(C\) with the most negative total weight, and \(Q^{\prime}\) be the rest of \(C\) (i.e., \(Q\cup Q^{\prime}=C\)). Notice that the weight of all prefixes in \(Q^{\prime}\) must be nonnegative, otherwise this negative weight prefix could be merged with \(Q\), contradicting the fact that \(Q\) is the subpath of \(C\) with the most negative total weight. Moreover, as \(w(C)\geq 0\) we have that \(w(Q^{\prime})\geq-w(Q)\). Since by definition of \(Q\), there is no prefix of \(Q\) with more negative total weight, it holds that \(Q^{\prime}\cup Q\) is a nonnegative prefix path from the first vertex of \(Q^{\prime}\) to itself along \(C\). **Theorem 1.1**.: _Unless the APSP Hypothesis fails, there is no \(O(n^{3-\epsilon})\) time algorithm that solves the All-Pairs Nonnegative Prefix Paths problem, for any \(\epsilon>0\)._ Proof.: Consider a Negative Triangle instance \(G=(V,E)\). We create a directed graph \(G_{1}=(V_{1},E_{1})\) as follows. The vertex set \(V_{1}\) of \(G_{1}\) consists of five copies of all vertices, i.e., \(V_{1}:=\{v^{j}:v\in V,i\in\{1,2,3,4,5\}\}\). For every edge \((u,v)\in E\) of weight \(w(u,v)\), we add an edge \((u^{i},v^{i+1})\) to \(E_{1}\) with weight \(-w(u,v)\), for \(1\leq i<4\). Also for each vertex \(v\in V\), we add an edge \((v^{4},v^{5})\) of weight \(w_{\min}=-1\). We claim that there exists a negative weight triangle in \(G\) if and only if there is a vertex \(v\in V\) such that there exists a nonnegative prefix path from \(v^{1}\) to \(v^{5}\) in \(G_{1}\). In this case, since the reduction is subcubic and the time to check all vertices in \(G_{1}\) is \(O(n)\), an \(O(n^{3-\epsilon})\) time algorithm for the All-Pairs Nonnegative Prefix Paths problem would imply an \(O(n^{3-\epsilon})\) time algorithm for the Negative Triangle problem, for any \(\epsilon>0\), contradicting the APSP Hypothesis. We proceed with the proof of the claim. Suppose that there are three vertices \(a,b,c\) that form a negative weight cycle \(C\) in \(G\), and let \(G_{2}\) be the graph \(G\) after flipping the sign of the weights. Then we have that \(w_{G_{2}}(C)>0\) in \(G_{2}\), and based on Lemma 3.5 there is a vertex \(v\in C\), such that there exists a nonnegative prefix path in \(G_{2}\) from \(v\) to itself along \(C\). Notice that \(v\) can be either \(a,b\) or \(c\), and by construction, the paths \(a^{1}b^{2}c^{3}a^{4}a^{5}\), \(b^{1}c^{2}a^{3}b^{4}b^{5}\), and \(c^{1}a^{2}b^{3}c^{4}c^{5}\) exist in \(G_{1}\). Thus without loss of generality, we can assume that \(v\) is \(a\) and we use the path \(a^{1}b^{2}c^{3}a^{4}a^{5}\) in \(G_{1}\). By construction, it holds that \(w_{G_{1}}(a^{1},b^{2})=w_{G_{2}}(a,b),w_{G_{1}}(b^{2},c^{3})=w_{G_{2}}(b,c)\), \(w_{G_{1}}(c^{3},a^{4})=w_{G_{2}}(c,a)\) and \(w_{G_{1}}(a^{4},a^{5})=w_{\min}\). The path \(abca\) is a nonnegative prefix path in \(G_{2}\), and so the path \(a^{1}b^{2}c^{3}a^{4}\) is a nonnegative prefix path in \(G_{1}\) as well. Moreover since \(w_{G_{2}}(C)>0\), we have that \(w_{G_{2}}(C)\geq-w_{\min}\), which implies that: \[w_{G_{1}}(a^{1},b^{2})+w_{G_{1}}(b^{2},c^{3})+w_{G_{1}}(c^{3},a^{4})\geq-w_{ \min}.\] Thus, we can conclude that the path \(a^{1}b^{2}c^{3}a^{4}a^{5}\) is a nonnegative prefix path in \(G_{1}\). 
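For concreteness, the graph \(G_{1}\) used in this reduction can be built as follows (an illustrative sketch; vertices of \(G_{1}\) are encoded as pairs \((v,i)\) with \(i\in\{1,\ldots,5\}\)).

```python
def negative_triangle_to_apnp(n, weight):
    """Build the APNP instance G_1 from a Negative Triangle instance.

    weight: dict mapping a directed pair (u, v) to w(u, v); vertices are 0..n-1.
    G has a negative triangle iff, for some v, there is a nonnegative prefix
    path from (v, 1) to (v, 5) in the returned graph G_1.
    """
    w_min = -1
    g1 = []
    for (u, v), wuv in weight.items():
        for i in (1, 2, 3):               # three layers with negated weights
            g1.append(((u, i), (v, i + 1), -wuv))
    for v in range(n):                    # final edge of weight w_min = -1
        g1.append(((v, 4), (v, 5), w_min))
    return g1
```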
For the other direction, let \(a^{1}b^{2}c^{3}a^{4}a^{5}\) be a nonnegative prefix path in \(G_{1}\). By construction of \(G_{1}\) and the fact that \(G\) does not contain self-loops, it must be the case that the corresponding vertices \(a,b,c\) must be pairwise different in \(G\). By definition of a nonnegative prefix path, it holds that: \[w_{G_{1}}(a^{1},b^{2})+w_{G_{1}}(b^{2},c^{3})+w_{G_{1}}(c^{3},a^{4})\geq-w_{ \min}>0.\] By construction, we have that \(w(a,b)=-w_{G_{1}}(a^{1},b^{2}),w(b,c)=-w_{G_{1}}(b^{2},c^{3})\) and \(w(c,a)=-w_{G_{1}}(c^{3},a^{4})\). Therefore, it is true that \(w(a,b)+w(b,c)+w(c,a)<0\), and the vertices \(a,b,c\) form a negative weight cycle in \(G\) ## 4 The All-Alice Case In this section, we develop an algorithm that computes the minimum sufficient energies of all vertices for game graphs controlled by Alice. In particular, we obtain the following result. **Theorem 1.3**.: _There exists a deterministic algorithm that, given a game graph \(G=(V,E,w)\) in which all vertices are controlled by Alice, computes the minimum sufficient energy of all vertices in \(\tilde{O}(n^{\omega}W^{\omega})\) time._ The idea is to use the algorithm of Theorem 1.2 for the All-Pairs Nonnegative Prefix Paths problem. Helouet, Markey, and Raha [23] provide a relevant reduction from the problem of whether zero energy suffices to the problem of whether there exists a nonnegative prefix path. Hence, one idea would be to apply this reduction and run the algorithm of Theorem 1.2. Unfortunately this reduction affects the weights, and the maximum weight of the new instance can be as big as \(mW\), which in turn affects the running time of the algorithm. To that end, we present another way to use the All-Pairs Nonnegative Prefix Paths algorithm of Theorem 1.2 without affecting the maximum weight of the graph. The algorithm consists of two phases. In the first phase, we detect all the vertices such that initial zero energy suffices, and in the second phase we compute the minimum sufficient energy for the rest of the vertices. In the first phase of the algorithm, initially we run the All-Pairs Nonnegative Prefix Paths algorithm of Theorem 1.2 on the game graph \(G=(V,E,w)\). Hence, we retrieve the information of whether there exists a nonnegative prefix path from a vertex \(u\) to a vertex \(v\), for any two vertices \(u,v\in V\times V\). Then for each vertex \(v\in V\), we check whether there is a vertex \(u\) (including \(v\)) such that there exists a nonnegative prefix path from \(v\) to \(u\) and from \(u\) to \(u\). If this is the case, then we add this vertex to a set \(Z\). The next lemma shows that the set \(Z\) is actually the set of all vertices such that initial energy zero suffices. **Lemma 4.1**.: _The set \(Z\) is the same as the set \(\{v\in V:e^{*}(v)=0\}\), and is computed in \(\tilde{O}(n^{\omega}W^{\omega})\) time._ Proof.: Suppose that the algorithm adds a vertex \(v\) to \(Z\). Then, there must be a vertex \(u\) (possibly \(u=v\)) such that there exists a nonnegative prefix path from \(v\) to \(u\) and from \(u\) to \(u\). By merging then these two paths, and by definition of minimum sufficient energy, we can conclude that \(e^{*}(v)=0\). Suppose now that the minimum sufficient energy of a vertex \(v\in V\) is zero (i.e., \(e^{*}(v)=0\)). By definition of minimum sufficient energy, there must exist a nonnegative prefix lasso \(P\) which contains a nonnegative cycle \(C\). 
Also by Lemma 3.5, there is a vertex \(u\in C\) in the cycle, such that there exists a nonnegative prefix path from \(u\) to itself. As a result, the algorithm finds these vertices \(v\) and \(u\) and adds \(v\) to \(Z\). The running time of this process is dominated by the running time of the All-Pairs Nonnegative Prefix Paths algorithm, which is \(\tilde{O}(n^{\omega}W^{\omega})\) based on Theorem 1.2. The set \(Z\) can be seen as the set of possible vertices to 'end' in. Any optimal strategy would still have to define how to move from such a vertex \(v\in Z\), but since we know that \(e^{*}(v)=0\), there has to be a path such that from this vertex no initial energy is necessary. So the goal of the second phase, is to find for each vertex \(v\in V\setminus Z\) the best way to hit a vertex in \(Z\). The following lemma shows that this comes down to a shortest path computation. Brim and Chaloupka [9] use a similar idea inside their subroutine for the Mean-Payoff games. **Lemma 4.2**.: _Given a game graph \(G=(V,E,w)\) where all vertices belong to Alice and the set \(Z:=\{v\in V:e^{*}(v)=0\}\) is known, we can compute the remaining minimum sufficient energies through a single SSSP computation in G._ For the proof we refer to the full version of the paper. Together, Lemma 4.1 and Lemma 4.2 prove Theorem 1.3, by using also the fact that we can compute SSSP deterministically in \(\tilde{O}(n^{\omega}W)\) time [28, 31]. ## 5 The All-Bob Case In this section, we restrict ourselves to the case where all vertices belong to Bob. We show that this special case admits a near-linear time algorithm, by essentially reducing the problem to detecting negative cycles and computing shortest paths. We obtain the following result. **Theorem 1.4**.: _There exists a randomized (Las Vegas) algorithm that, given a game graph \(G=(V,E,w)\) in which all vertices are controlled by Bob, computes the minimum sufficient energy of all vertices, and with high probability the algorithm takes \(O(m\log^{2}n\log nW\log\log n)\) time._ Proof.: We split the algorithm and proof in two parts, depending on who wins the game in a particular vertex. The first part of the algorithm consists of identifying the vertices with infinite energy (namely, the vertices where Bob wins), and the second part consists of calculating the finite energies of the remaining vertices (namely, the vertices where Bob loses). **Vertices where Bob wins.** First, we identify the vertices where Bob wins, i.e., the vertices \(v\) with \(e^{*}(v)=\infty\). Hereto, we decompose \(G\) in to strongly connected components \(C_{1},\ldots,C_{r}\), for some \(r\geq 1\). On each \(C_{i}\), we run a negative cycle detection algorithm. If there is a negative cycle, we set \(e(v)=\infty\) for all \(v\in C_{i}\). Next we find the vertices that can reach these cycles. Let \(A:=\{v\in V:e(v)=\infty\}\) be the union of the strongly connected components with a negative cycle. Then from \(A\) we run an inward reachability algorithm (e.g., DFS, BFS) towards each vertex \(v\) and if there is a path from \(v\) to \(A\), we set \(e(v)=\infty\). In the correctness proof, we show that \(e(v)=\infty\) if and only if Bob wins at \(v\). _Correctness._ For any vertex \(v\in V\), Bob wins if and only if there is a path from \(v\) to a negative cycle. Let \(v\) be a vertex where Bob wins, and let \(C^{(v)}\) be the negative cycle reachable from \(v\). If \(v\) belongs to the strongly connected component of \(C^{(v)}\), then our algorithm outputs \(e(v)=\infty\). 
If \(v\) belongs to a different connected component, then the path to the negative cycle is detected in the inward reachability algorithm and we also output \(e(v)=\infty\). Suppose we output \(e(v)=\infty\). If we do this because \(v\) belongs to a strongly connected component in which we detected a negative cycle, then clearly there is a path from \(v\) to the negative cycle, and hence Bob wins at \(v\). If we set \(e(v)=\infty\) because there is a path from \(v\) to \(A\), then there is a path from \(v\) towards a strongly connected component containing a negative cycle, and hence to a negative cycle itself. So again Bob wins at \(v\).

_Running time._ We can decompose \(G\) into strongly connected components in \(O(m)\) time [29]. On each connected component \(C_{i}\), we can detect whether there is a negative cycle in the graph in \(O(|E(C_{i})|\log^{2}n\log nW\log\log n)\) time w.h.p. [11], thus the total time is \(O(m\log^{2}n\log nW\log\log n)\) w.h.p. The inward reachability algorithm can be implemented by a simple DFS or BFS in \(O(m)\) time. Hence, in total we obtain a running time of \(O(m\log^{2}n\log nW\log\log n)\) w.h.p. for this part.

**Vertices where Bob loses.** Second, we compute the correct value for the vertices where Bob loses, i.e., the vertices \(v\) with \(e(v)<\infty\). Note that for this part we can restrict ourselves to the subgraph where we omit all vertices with \(e(v)=\infty\). We also add a new sink vertex \(t\) to the graph, and for every \(v\in V\) we insert an edge \((v,t)\) with \(w(v,t)=0\). Now for each vertex \(v\), we compute the minimum distance \(d(v,t)\) from \(v\) to \(t\), and we set \(e(v)=\max\{-d(v,t),0\}\). In the correctness proof, we show that \(e^{*}(v)=e(v)\) for each \(v\in V\) with \(e(v)<\infty\).

_Correctness._ Consider now a vertex \(v\) such that \(e(v)<\infty\). First we show that \(e^{*}(v)\geq e(v)\). Let \(u\) be the last vertex (excluding \(t\) itself) on the shortest path from \(v\) to \(t\), and \(P_{v,u}\) be the corresponding prefix from \(v\) to \(u\). Then Bob can choose to move along the path \(P_{v,u}\), forcing Alice to use at least \(\max\{-w(P_{v,u}),0\}\) initial energy. As \(d(v,t)=w(P_{v,u})+w(u,t)=w(P_{v,u})+0=w(P_{v,u})\), we conclude that Alice needs at least \(\max\{-d(v,t),0\}=e(v)\) initial energy. It remains to show \(e^{*}(v)\leq e(v)\). Since there are no negative cycles, by definition we have that \(e^{*}(v)=\max\{-\min_{u\in V}w(P_{u}),0\}\), where the minimization is over all the simple paths from \(v\) to \(u\). Also for all \(u\in V\), it holds that \(d(v,u)\leq w(P_{u})\) and \(d(v,t)\leq d(v,u)+w(u,t)=d(v,u)+0=d(v,u)\). Thus we get that \(e^{*}(v)=\max\{-\min_{u\in V}w(P_{u}),0\}\leq\max\{-\min_{u\in V}d(v,u),0\}\leq\max\{-d(v,t),0\}=e(v)\).

_Running time._ To compute the shortest paths from \(v\) to \(t\), we flip the direction of all the edges and we compute the minimum distances from \(t\) to \(v\) in the new graph. This clearly corresponds to the minimum distances from \(v\) to \(t\) in the original graph. Since this computation is the negative weight single source shortest path problem, it can be done in \(O(m\log^{2}n\log nW\log\log n)\) time w.h.p. [11].

## 6 Game Graphs Without Negative Cycles

In this section, we provide an \(O(mn)\) time algorithm for the special case where the game graph has no negative cycles. We do this in three steps: first, we introduce a finite duration energy game, where a token is passed for \(i\) rounds.
The goal is to compute for each vertex, the minimum initial energy that Alice needs in order to keep the energy nonnegative for those \(i\) rounds. Second, we provide an algorithm that computes this value in \(O(mi)\) time. Finally, we show that for graphs with no negative cycles, it suffices to find this minimum initial energy for a game of \(n\) rounds. ### Finite Duration Games We introduce a version of the energy game that lasts \(i\) rounds. We define strategies and energy functions analogous to the infinite duration game, as in Section 2. A strategy for Alice is a function \(\sigma_{i}:V^{*}V_{A}\to V\), such that for all finite paths \(u_{0}u_{1}\cdots u_{j}\) with \(j<i\) and \(u_{j}\in V_{A}\), we have that \(\sigma_{i}(u_{0}u_{1}\cdots u_{j})=v\) for some edge \((u_{j},v)\in E\). Similarly we define a strategy \(\tau_{i}\) for Bob by replacing \(V_{A}\) with \(V_{B}\). A path \(u_{0}u_{1}\cdots u_{j}\) of length \(j\) is consistent with respect to strategies \(\sigma_{i}\) and \(\tau_{i}\), if \(\sigma_{i}(u_{0}u_{1}\cdots u_{k})=u_{k+1}\) for all \(u_{k}\in V_{A}\) and \(\tau_{i}(u_{0}u_{1}\cdots u_{k})=u_{k+1}\) for all \(u_{k}\in V_{B}\), where \(0\leq k<j\leq i\). The minimum sufficient energy at a vertex \(u\) corresponding to strategies \(\sigma_{i}\) and \(\tau_{i}\) is defined as \(e_{\sigma_{i},\tau_{i}}(u):=\max\{-\min w(P),0\}\), where the minimization is over all the consistent paths \(P\) with respect to \(\sigma_{i}\) and \(\tau_{i}\) of length at most \(i\) originating at \(u\). The minimum sufficient energy at a vertex \(u\) is defined as follows: \[e^{*}_{i}(u):=\min_{\sigma_{i}}\max_{\tau_{i}}e_{\sigma_{i},\tau_{i}}(u),\] where we minimize over all strategies \(\sigma_{i}\) for Alice and maximize over all strategies \(\tau_{i}\) for Bob. As for the infinite duration game, we know by Martin's determinacy theorem [27] that \(\min_{\sigma_{i}}\max_{\tau_{i}}e_{\sigma_{i},\tau_{i}}(u)=\max_{\tau_{i}}\min _{\sigma_{i}}e_{\sigma_{i},\tau_{i}}(u)\). Now we define _optimal strategies_ as follows. A strategy \(\sigma^{*}_{i}\) is an optimal strategy for Alice at a vertex \(u\), if for any strategy \(\tau_{i}\) for Bob it holds that \(e_{\sigma^{*}_{i},\tau_{i}}(u)\leq e^{*}_{i}(u)\). Likewise a strategy \(\tau^{*}_{i}\) is an optimal strategy for Bob at a vertex \(u\), if for any strategy \(\sigma_{i}\) for Alice it holds that \(e_{\sigma_{i},\tau^{*}_{i}}(u)\geq e^{*}_{i}(u)\). A value \(e(u)\) is a sufficient energy at a vertex \(u\), if there exists a strategy \(\sigma_{i}\) such that for any strategy \(\tau_{i}\), it holds that \(e_{\sigma_{i},\tau_{i}}(u)\leq e(u)\). In this case, observe that the following is true: \[e^{*}_{i}(u)=\max_{\tau_{i}}e_{\sigma^{*}_{i},\tau_{i}}(u)\leq\max_{\tau_{i}} e_{\sigma_{i},\tau_{i}}(u)\leq e(u).\] Next, we show the following lemma about the minimum energy function, a similar version has also been used for the infinite duration game in [10] and [16]. For the proof, see the full version of the paper. 
**Lemma 6.1**.: _Given a game of \(i\) rounds and a vertex \(u\in V\), the energy \(e_{i}^{*}(u)\) satisfies the following properties:_

\[\text{if }u\in V_{A}\text{ then }\exists v\in N^{+}(u):e_{i}^{*}(u)+w(u,v)\geq e_{i-1}^{*}(v) \tag{1}\]
\[\text{if }u\in V_{B}\text{ then }\forall v\in N^{+}(u):e_{i}^{*}(u)+w(u,v)\geq e_{i-1}^{*}(v) \tag{2}\]

### A Value Iteration Algorithm for Finite Duration Games

In this section, we present Algorithm 1, a value iteration algorithm for a game lasting \(i\) rounds that computes for each vertex \(u\in V\) the value \(e_{i}^{*}(u)\). We note that Algorithm 1 consists of \(i\) steps, where at every step each edge is scanned at most once. Clearly this means the algorithm takes \(O(mi)\) time.

```
Input:  A game graph G = (V, E, w, <V_A, V_B>), a number of iterations i
Output: The minimum sufficient energy e_i(u) of each u in V, in order to play the game for i rounds
 1: for all u in V: e_0(u) <- 0
 2: for j = 1 to i do
 3:   foreach u in V do
 4:     if u in V_A then
 5:       e_j(u) <- max{ min_{(u,v) in E} { e_{j-1}(v) - w(u,v) }, 0 }
 6:     end if
 7:     if u in V_B then
 8:       e_j(u) <- max{ max_{(u,v) in E} { e_{j-1}(v) - w(u,v) }, 0 }
 9:     end if
10:   end foreach
11: end for
12: return e_i
```
**Algorithm 1** Value iteration algorithm for an \(i\)-round game

**Lemma 6.2**.: _Let \(e_{i}(\cdot)\) be the function returned by Algorithm 1, then \(e_{i}(u)=e_{i}^{*}(u)\) for all \(u\in V\)._

Proof.: We prove the claim by induction on \(i\), which is both the number of steps of the algorithm and the duration of the game.

_Base case:_ For \(i=0\) steps, the algorithm sets for each \(u\in V:e_{0}(u)=0=e_{0}^{*}(u)\).

_Inductive Step:_ We assume that after \(i-1\) steps \(e_{i-1}(u)=e_{i-1}^{*}(u)\), and we prove that after \(i\) steps \(e_{i}(u)=e_{i}^{*}(u)\) as well. We first show that \(e_{i}(u)\geq e_{i}^{*}(u)\). Consider the case that \(u\in V_{A}\). Let \(v^{\prime}\) be the neighbor that minimizes the relation in the \(i\)-th step in Line 5. Then it holds that \(e_{i}(u)+w(u,v^{\prime})\geq e_{i-1}(v^{\prime})\). Using the edge \((u,v^{\prime})\) with initial energy \(e_{i}(u)\), Alice can move to \(v^{\prime}\) with remaining energy at least \(e_{i-1}(v^{\prime})\). By the inductive hypothesis it holds that \(e_{i-1}(v^{\prime})=e_{i-1}^{*}(v^{\prime})\), so there exists an optimal strategy \(\sigma_{i-1}^{*}\) such that for any strategy \(\tau_{i-1}\), we have that \(e_{\sigma_{i-1}^{*},\tau_{i-1}}(v^{\prime})\leq e_{i-1}(v^{\prime})\). Define the strategy \(\sigma_{i}\) in the following way: \(\forall x\in V^{*}V_{A}:\sigma_{i}(ux)=\sigma_{i-1}^{*}(x)\) and \(\sigma_{i}(u)=v^{\prime}\). Then we get a strategy \(\sigma_{i}\) such that for any strategy \(\tau_{i}\), it holds that \(e_{\sigma_{i},\tau_{i}}(u)\leq e_{i}(u)\). This implies that \(e_{i}(u)\) is a sufficient energy at vertex \(u\), and so \(e_{i}(u)\geq e_{i}^{*}(u)\).

Consider the case that \(u\in V_{B}\). Due to the \(i\)-th step in Line 8, it holds that \(e_{i}(u)+w(u,v)\geq e_{i-1}(v)\), for all \(v\in N^{+}(u)\). Hence for any choice of a neighboring edge \((u,v)\) with initial energy \(e_{i}(u)\), Bob moves to a neighbor \(v\) with remaining energy at least \(e_{i-1}(v)\). By the inductive hypothesis, for all \(v\in N^{+}(u)\) it holds that \(e_{i-1}(v)=e_{i-1}^{*}(v)\), so there exists an optimal strategy \(\sigma_{i-1}^{*}\) such that for any strategy \(\tau_{i-1}\), we have that \(e_{\sigma_{i-1}^{*},\tau_{i-1}}(v)\leq e_{i-1}(v)\).
Define the strategy \(\sigma_{i}\) in the following way: \(\forall x\in V^{*}V_{A}:\sigma_{i}(ux)=\sigma_{i-1}^{*}(x)\). Then we get a strategy \(\sigma_{i}\) such that for any strategy \(\tau_{i}\), it holds that \(e_{\sigma_{i},\tau_{i}}(u)\leq e_{i}(u)\). This implies that \(e_{i}(u)\) is a sufficient energy at vertex \(u\), and so \(e_{i}(u)\geq e_{i}^{*}(u)\). It remains to show that \(e_{i}(u)\leq e_{i}^{*}(u)\). Consider the case that \(u\in V_{A}\). If \(e_{i}(u)=0\) then the claim holds trivially. If \(e_{i}(u)>0\), then based on Line 5, we have that \(e_{i}(u)+w(u,v)\leq e_{i-1}(v)\) for all \(v\in N^{+}(u)\). By Lemma 6.1, there exists \(v^{\prime}\in N^{+}(u)\) such that \(e_{i}^{*}(u)+w(u,v^{\prime})\geq e_{i-1}^{*}(v^{\prime})\), which means that: \[e_{i}(u)+w(u,v^{\prime})\leq e_{i-1}(v^{\prime})=e_{i-1}^{*}(v^{\prime})\leq e _{i}^{*}(u)+w(u,v^{\prime})\ \Rightarrow\ e_{i}(u)\leq e_{i}^{*}(u),\] where the equality holds by the inductive hypothesis. Consider the case that \(u\in V_{B}\). If \(e_{i}(u)=0\) then the claim holds trivially. Otherwise based on Line 8, there exists \(v^{\prime}\in N^{+}(u)\) such that \(e_{i}(u)+w(u,v^{\prime})=e_{i-1}(v^{\prime})\). By Lemma 6.1, we have that \(e_{i}^{*}(u)+w(u,v)\geq e_{i-1}^{*}(v)\) for all \(v\in N^{+}(u)\), which means that: \[e_{i}(u)+w(u,v^{\prime})=e_{i-1}(v^{\prime})=e_{i-1}^{*}(v^{\prime})\leq e_{i }^{*}(u)+w(u,v^{\prime})\ \Rightarrow\ e_{i}(u)\leq e_{i}^{*}(u),\] where the equality holds by the inductive hypothesis. ### No Negative Cycles The goal of this section is to show that for graphs with no negative cycles, it holds that \(e_{n}^{*}(u)=e^{*}(u)\), for all \(u\in V\). Hereto, we show in Lemma 6.4 that as in the infinite duration game, positional strategies suffice when no negative cycles are present. In the proof, we use the following alternative characterization of \(e_{\sigma_{i},\tau_{i}}(u)\). Let \(\sigma_{i}\) and \(\tau_{i}\) be strategies for Alice and Bob respectively, and let \(u\in V\) be a vertex. Moreover, let \(u_{0}u_{1}\cdots u_{j}\) be the consistent path of length \(j\) with respect to \(\sigma_{i}\) and \(\tau_{i}\), where \(u_{0}=u\). Then given an initial energy \(e_{\mathrm{init}}\), the energy level at vertex \(u_{j}\) is equal to the value \(e_{\mathrm{init}}+\sum_{k=0}^{j-1}w(u_{k},u_{k+1})\). We denote \(e_{\mathrm{init}}^{*}(u)\) for the minimum nonnegative initial energy such that the energy level at each vertex of the corresponding consistent path of length \(i\), is nonnegative. The following lemma shows that \(e_{\sigma_{i},\tau_{i}}(u)=e_{\mathrm{init}}^{*}(u)\) (for the proof, see the full version of the paper). **Lemma 6.3**.: _For a vertex \(u\) and two fixed strategies \(\sigma_{i}\) and \(\tau_{i}\), let \(P\) be the consistent path with respect to \(\sigma_{i}\) and \(\tau_{i}\) of length \(i\) originating at \(u\). Then it holds that \(e_{\sigma_{i},\tau_{i}}(u)=e_{\mathrm{init}}^{*}(u)\)._ Now we are ready to show that positional strategies suffice in graphs without negative cycles. For the proof see the full version of the paper. **Lemma 6.4**.: _Consider a graph with no negative cycles and a game of \(i\) rounds. Then for the minimum sufficient energy \(e_{i}^{*}(u)\) at a vertex \(u\in V\), it suffices for both players to play positional strategies._ We use this fact to show that a game of \(n\) rounds is equivalent to a game of infinite duration for a game graph without negative cycles. **Lemma 6.5**.: _Consider a graph with no negative cycles. 
Then for each vertex \(u\in V\), the minimum sufficient energy needed at \(u\) for a game of \(n\) rounds, is equal to the minimum sufficient energy needed at \(u\) for a game of infinite rounds. In other words, \(e_{n}^{*}(u)=e_{\infty}^{*}(u)=e^{*}(u)\) for all \(u\in V\)._ Proof.: Let \(\sigma\) and \(\tau\) be two arbitrary positional strategies for the infinite duration game. By definition, we have that \(e_{\sigma,\tau}(u)=\max\{-\min w(P),0\}\), where the minimization is over all the consistent paths with respect to \(\sigma\) and \(\tau\) originating at \(u\). Since the graph contains only nonnegative cycles and the strategies are positional, the path that minimizes the relation is a simple path, and so, its length is at most \(n\). Hence it follows that \(e_{\sigma,\tau}(u)=\max\{-\min_{|P|\leq n}w(P),0\}\). In turn, this is equivalent to using positional strategies for a game of \(n\) rounds. Hence it holds that \(e_{\sigma,\tau}(u)=e_{\sigma_{n},\tau_{n}}(u)\), where \(\sigma_{n}\) and \(\tau_{n}\) are the strategies \(\sigma\) and \(\tau\) respectively, restricted to the first \(n\) rounds. This implies that \(e^{*}(u)=\min_{\sigma_{n}}\max_{\tau_{n}}e_{\sigma_{n},\tau_{n}}(u)\), where \(\sigma_{n}\) and \(\tau_{n}\) are positional strategies for a game of \(n\) rounds. By Lemma 6.4, this equals \(e_{n}^{*}(u)\) and the claim follows. Together, Lemma 6.2 and Lemma 6.5 prove Theorem 1.5.
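As a concrete illustration of Algorithm 1 combined with Lemma 6.5, the following Python sketch runs the value iteration for \(i=n\) rounds on a game graph without negative cycles, which by the lemma yields the minimum sufficient energies \(e^{*}(u)\). The data-structure choices and function names are ours, not the paper's, and the sketch assumes every vertex has at least one outgoing edge, as in the game setting.

```python
def min_sufficient_energies(vertices, edges, owner, rounds=None):
    """Value iteration (Algorithm 1) for an i-round energy game.

    vertices: iterable of vertex ids
    edges:    list of directed weighted edges (u, v, w)
    owner:    dict mapping each vertex to 'A' (Alice) or 'B' (Bob)
    rounds:   number of rounds i; by Lemma 6.5, i = n suffices when the
              graph has no negative cycles (the default used here)

    Returns a dict u -> e_i(u), the minimum sufficient initial energy.
    """
    vertices = list(vertices)
    if rounds is None:
        rounds = len(vertices)  # i = n, cf. Lemma 6.5
    out = {u: [] for u in vertices}  # outgoing adjacency: u -> [(v, w)]
    for u, v, w in edges:
        out[u].append((v, w))

    e = {u: 0 for u in vertices}  # e_0(u) = 0 for all u (Line 1)
    for _ in range(rounds):
        prev, e = e, {}
        for u in vertices:
            # candidate energies e_{j-1}(v) - w(u, v) over outgoing edges
            cand = [prev[v] - w for v, w in out[u]]
            if owner[u] == 'A':   # Alice minimizes over her choices (Line 5)
                e[u] = max(min(cand), 0)
            else:                 # Bob maximizes over his choices (Line 8)
                e[u] = max(max(cand), 0)
    return e

# Tiny example: on the cycle 0 -> 1 (weight -2) -> 0 (weight +3), Alice needs
# 2 units of initial energy at vertex 0 and none at vertex 1.
if __name__ == "__main__":
    V = [0, 1]
    E = [(0, 1, -2), (1, 0, 3)]
    print(min_sufficient_energies(V, E, owner={0: 'A', 1: 'A'}))  # {0: 2, 1: 0}
```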
2306.09475
Electric motor -- SimuFísica: an application for teaching electromagnetism
In this work, we present the Electric Motor simulator, an application on the SimuFísica platform intended for use in the classroom. We briefly describe the technologies behind the application, the equations that govern its operation, some studies showing the dynamics of the electric motor and, finally, examples of how it can be approached in the classroom.
Marco P. M. de Souza, Sidnei P. Oliveira, Valdenice L. Luiz
2023-06-15T19:58:53Z
http://arxiv.org/abs/2306.09475v1
# Electric motor - SimuFísica®: an application for teaching electromagnetism

###### Abstract

In this work, we present the Electric Motor simulator, an application on the SimuFísica® platform intended for use in the classroom. We briefly describe the technologies behind the application, the equations that govern its operation, some studies showing the dynamics of the electric motor and, finally, examples of how it can be approached in the classroom.

_Keywords:_ Electric motor, computer simulation, application, simulator, electromagnetism.

## 1 Introduction

The electric motor can be an effective starting point for teaching electromagnetism because many topics of the subject relate to how it works. Since it is a practical application of the principles of electromagnetism, students can see directly how the laws of electromagnetism manifest themselves in the real world, which makes the subject more concrete and relevant. Studying the electric motor requires understanding and integrating several concepts from electromagnetism, such as electric current, magnetic field, magnetic force, loops and coils, as well as from mechanics, such as rotation, torque and angular acceleration. Experiments involving the use of the electric motor in physics teaching have been a recurring theme in the literature, appearing in several articles [1, 2, 3, 4, 5] and dissertations [6, 7]. Computer simulations of an electric motor for teaching purposes, on the other hand, are quite scarce. This motivated us to develop and present the Electric Motor application, available on the SimuFísica® platform1, which we discuss in the following sections.

Footnote 1: [https://simufisica.com/](https://simufisica.com/)

## 2 The SimuFísica® platform

SimuFísica® is a collection of simulator applications aimed at physics teaching at the high school and university levels. Multilingual and multiplatform in nature, SimuFísica® includes simulations covering topics such as electric energy consumption, the ideal gas, and the propagation of the quantum-mechanical wave function, for example. These simulators can be accessed online or installed on computers, tablets and smartphones running a variety of operating systems. With students and teachers as its target audience, the use of SimuFísica® in the classroom, combined with good planning, can bring
2305.10474
Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models
Despite tremendous progress in generating high-quality images using diffusion models, synthesizing a sequence of animated frames that are both photorealistic and temporally coherent is still in its infancy. While off-the-shelf billion-scale datasets for image generation are available, collecting similar video data of the same scale is still challenging. Also, training a video diffusion model is computationally much more expensive than its image counterpart. In this work, we explore finetuning a pretrained image diffusion model with video data as a practical solution for the video synthesis task. We find that naively extending the image noise prior to video noise prior in video diffusion leads to sub-optimal performance. Our carefully designed video noise prior leads to substantially better performance. Extensive experimental validation shows that our model, Preserve Your Own Correlation (PYoCo), attains SOTA zero-shot text-to-video results on the UCF-101 and MSR-VTT benchmarks. It also achieves SOTA video generation quality on the small-scale UCF-101 benchmark with a $10\times$ smaller model using significantly less computation than the prior art.
Songwei Ge, Seungjun Nah, Guilin Liu, Tyler Poon, Andrew Tao, Bryan Catanzaro, David Jacobs, Jia-Bin Huang, Ming-Yu Liu, Yogesh Balaji
2023-05-17T17:59:16Z
http://arxiv.org/abs/2305.10474v3
# Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models

###### Abstract

Despite tremendous progress in generating high-quality images using diffusion models, synthesizing a sequence of animated frames that are both photorealistic and temporally coherent is still in its infancy. While off-the-shelf billion-scale datasets for image generation are available, collecting similar video data of the same scale is still challenging. Also, training a video diffusion model is computationally much more expensive than its image counterpart. In this work, we explore finetuning a pretrained image diffusion model with video data as a practical solution for the video synthesis task. We find that naively extending the image noise prior to video noise prior in video diffusion leads to sub-optimal performance. Our carefully designed video noise prior leads to substantially better performance. Extensive experimental validation shows that our model, Preserve Your Own Correlation (PYoCo), attains SOTA zero-shot text-to-video results on the UCF-101 and MSR-VTT benchmarks. It also achieves SOTA video generation quality on the small-scale UCF-101 benchmark with a \(10\times\) smaller model using significantly less computation than the prior art. The project page is available at [https://research.nvidia.com/labs/dir/pyoco/](https://research.nvidia.com/labs/dir/pyoco/).

Figure 1: Sample generations for the prompts “A very happy fuzzy panda dressed as a chef eating pizza in the New York street food truck.” and “The supernova explosion of a white dwarf in the universe, photo realistic.”
Ho _et al_. [16] proposed a UNet-based architecture for the video synthesis task that is trained using joint image-video denoising losses. Imagen video [13] extends the cascaded text-to-image generation architecture of Imagen [38] for video generation. In both works, the authors directly train a video generation model from scratch. While these approaches achieve great success and produce high-quality videos, they are inherently expensive to train, requiring hundreds of high-end GPUs or TPUs and several weeks of training. After all, video generators not only need to learn to form individual images but should also learn to synthesize coherent temporal dynamics, which makes the video generation task much more challenging. While the formation of individual frames is a shared component in an image and video synthesis, these works disregard the existence of powerful pretrained text-to-image diffusion models and train their video generators from scratch.

We explore a different avenue for building large-scale text-to-video diffusion models by starting with a pretrained text-to-image diffusion model. Our motivation is that most of the components learned for the image synthesis task can effectively be reused for video generation, leading to knowledge transfer and efficient training. A similar idea is adopted by several recent works [41, 63, 3]. Without exception, when finetuning, they naively extend the image diffusion noise prior (i.i.d. noise) used in the text-to-image model to a video diffusion noise prior by adding an extra dimension to the 2D noise map. We argue that this approach is not ideal as it does not utilize the natural correlations in videos that are already learned by the image models. This is illustrated in Figure 2, where we visualize the t-SNE plot of noise maps corresponding to different input frames as obtained from a pretrained text-to-image diffusion model. The noise maps corresponding to different frames coming from the same video (blue dots in Figure 1(a)) are clustered together, exhibiting a high degree of correlation. The use of i.i.d. noise prior does not model this correlation, which would impede the finetuning process.

Our careful analysis of the video diffusion noise prior leads us to a noise prior that is better tailored for finetuning an image synthesis model to the video generation task. As illustrated in Figure 1(b), our proposed noise prior (shown in blue dots) aptly captures the correlations in noise maps corresponding to video frames. We then proceed to build a large-scale diffusion-based text-to-video model. We leverage several design choices from the prior works, including the use of temporal attention [16], joint image-video finetuning [16], a cascaded generation architecture [13], and an ensemble of expert denoisers [1]. Together with these techniques and the proposed video noise prior, our model establishes a new state-of-the-art for video generation outperforming competing methods on several benchmark datasets. Figure 1 shows our model can achieve high-quality zero-shot video synthesis capability with SOTA photorealism and temporal consistency.

In short, our work makes the following key contributions.

1. We propose a video diffusion noise tailored for finetuning text-to-image diffusion models for text-to-video.

Figure 2: **Visualizing the noise map correlations. (a) visualizes the t-SNE plot of the noise maps corresponding to input frames randomly sampled from videos. These noise maps are obtained by running a diffusion ODE [44, 43] on the input frames using a trained text-to-image model, but in the opposite direction of image synthesis (\(\sigma:0\rightarrow\sigma_{\text{max}}\)). The green dots in the background denote the reference noise maps sampled from an i.i.d. Gaussian distribution. The red dots and yellow dots are noise maps corresponding to input frames coming from different videos. We found they are spread out and share no correlation. On the other hand, the noise maps corresponding to the frames coming from the same video (shown in blue dots) are clustered together. (b) Using an i.i.d. noise model (orange dots) for finetuning text-to-image models for video synthesis is not ideal since temporal correlations between frames are not modeled. To remedy this, we propose a progressive noise model in which the correlation between different noise maps is injected along the temporal axis. Our progressive noise model (blue dots) aptly models the correlations present in the video noise maps.**
We argue that this approach is not ideal as it does not utilize the natural correlations in videos that are already learned by the image models. This is illustrated in Figure 2, where we visualize the t-SNE plot of noise maps corresponding to different input frames as obtained from a pretrained text-to-image diffusion model. The noise maps corresponding to different frames coming from the same video (blue dots in Figure 1(a)) are clustered together, exhibiting a high degree of correlation. The use of i.i.d. noise prior does not model this correlation, which would impede the finetuning process. Our careful analysis of the video diffusion noise prior leads us to a noise prior that is better tailored for finetuning an image synthesis model to the video generation task. As illustrated in Figure 1(b), our proposed noise prior (shown in blue dots) aptly captures the correlations in noise maps corresponding to video frames. We then proceed to build a large-scale diffusion-based text-to-video model. We leverage several design choices from the prior works, including the use of temporal attention [16], joint image-video finetuning [16], a cascaded generation architecture [13], and an ensemble of expert denoisers [1]. Together with these techniques and the proposed video noise prior, our model establishes a new state-of-the-art for video generation outperforming competing methods on several benchmark datasets. Figure 1 shows our model can achieve high-quality zero-shot video synthesis capability with SOTA photorealism and temporal consistency. In short, our work makes the following key contributions. 1. We propose a video diffusion noise tailored for finetuning text-to-image diffusion models for text-to-video. Figure 2: **Visualizing the noise map correlations. (a) visualizes the t-SNE plot of the noise maps corresponding to input frames randomly sampled from videos. These noise maps are obtained by running a diffusion ODE [44, 43] on the input frames using a trained text-to-image model, but in the opposite direction of image synthesis (\(\sigma:0\rightarrow\sigma_{\text{max}}\)). The green dots in the background denote the reference noise maps sampled from an i.i.d. Gaussian distribution. The red dots and yellow dots are noise maps corresponding to input frames coming from different videos. We found they are spread out and share no correlation. On the other hand, the noise maps corresponding to the frames coming from the same video (shown in blue dots) are clustered together. (b) Using an i.i.d. noise model (orange dots) for finetuning text-to-image models for video synthesis is not ideal since temporal correlations between frames are not modeled. To remedy this, we propose a progressive noise model in which the correlation between different noise maps is injected along the temporal axis. Our progressive noise model (blue dots) aptly models the correlations present in the video noise maps.** 2. We conduct extensive experimental validation and verify the effectiveness of the proposed noise prior. 3. We build a large-scale text-to-video diffusion model by finetuning a pretrained eDiff-I model with our noise prior and achieve state-of-the-art results on several benchmarks. ## 2 Related Work **Diffusion-based text-to-image models:** Diffusion models have significantly advanced the progress of text-based photorealistic, compositional image generation [34, 38]. 
Given the nature of the iterative denoising process that requires massive numbers of score function evaluations, earlier diffusion models focused on generating low-resolution images, e.g., \(64\times 64\)[14, 43]. To generate high-resolution images, two common approaches have been used. The first approach applies cascaded super-resolution models in the RGB space [29, 15, 38, 34], while the second approach leverages a decoder to exploit latent space [36, 10]. Based on these models, advanced image and video editing have been achieved through finetuning the model [37, 61, 4, 22, 55, 26] or controlling the inference process [27, 12, 30, 9, 31, 6, 28, 2]. Here, we study the problem of using large-scale diffusion models for text-to-video generation. **Video generation models:** Generating realistic and novel videos have long been an attractive and essential research direction [53, 35, 59]. Previously studies have resorted to different types of generative models such as GANs [53, 39, 49, 47], Autoregressive models [46, 57, 23, 8, 17], and implicit neural representations [42, 60]. Recently, driven by the tremendous success of the diffusion model in image synthesis, multiple works have proposed to explore diffusion models for video synthesis [52, 11, 63, 55, 3, 21, 18, 52, 58]. For example, Singer _et al_. extends the unCLIP framework [34] to text-to-video generation, which allows training without video captions [41]. Ho _et al_. [16] extend the Imagen framework [38] by repeatedly up-scaling low-resolution small-fps videos in both spatial and temporal directions with multiple models [13]. Our work also falls into this line of work which uses a diffusion model. We focus on augmenting an image diffusion model for video and study the design choice of the diffusion noise priors for such an image-to-video finetuning task. **Leverage knowledge from images for text-to-video generation:** Like text-to-image models, text-to-video models require massive amounts of data to learn caption-relatedness, frame photorealism, and temporal dynamics. But in contrast to the abundant image data resource, video data are more limited in style, volume, and quality. To resolve such scarcity issue of text-video data, previous works have resorted to different strategies to leverage knowledge from image data for text-to-video generation, including joint training on the text-image data from scratch [16, 13, 51, 54], first training a text-to-image model and then finetuning partially [17, 3, 55, 26] or entirely [41, 7] on the video dataset, and using CLIP image features as the conditional information [41, 63]. In this paper, we propose a new video diffusion noise prior that is tailored for finetuning a pretrained diffusion-based image generation model for the video generation task. We reuse several design choices in the prior work by finetuning jointly on text-image and text-video datasets. As a result, we can build a text-to-video generation system that achieves state-of-the-art zero-shot performances. ## 3 Preliminaries Diffusion models generate data by iteratively denoising samples drawn from a noise distribution. In the case of text-to-video models, text embeddings obtained from a pre-trained text encoder are used as additional inputs in the denoising process. Formally, let \(D(\mathbf{x},\mathbf{e},\sigma)\) denote a denoising network that operates on the noisy input video \(\mathbf{x}\in\mathbb{R}^{b\times n_{s}\times 3\times h\times w}\) where \(\mathbf{e}\) is the text embedding, and \(\sigma\) is the noise level. 
Here \(n_{s}\) is the sequence length of the input video, \(b\) is the batch size, and \(h\times w\) is the spatial resolution. The model \(D\) is trained to denoise the input \(\mathbf{x}\). TrainingWe follow the EDM formulation of Karras _et al_. [20] to optimize the denoiser \(D\) using the following objective \[\mathbb{E}_{p_{\text{data}}(\mathbf{x}_{\text{class}},\mathbf{e} ),p(\epsilon),p(\sigma)}\left[\lambda(\sigma)\|D(\mathbf{x}_{\text{noise}}; \mathbf{e},\sigma)-\mathbf{x}_{\text{clean}}\|_{2}^{2}\right] \tag{1}\] \[\text{where}\ \ \mathbf{x}_{\text{noise}}=\mathbf{x}_{\text{clean}}+\sigma\epsilon\] Here, \(\mathbf{x}_{\text{noise}}\) is the noisy sample obtained by corrupting the clean video \(\mathbf{x}\) with noise \(\sigma\epsilon\), where \(p(\epsilon)=\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\sigma\) is a scalar for the noise level drawn from \(p(\sigma)\). The loss weight, \(\lambda(\sigma)\), is a function of \(\sigma\) given by \(\lambda(\sigma)=(\sigma^{2}+\sigma_{\text{data}}^{2})/(\sigma\cdot\sigma_{ \text{data}})^{2}\). Eq. (1) is a simple denoising objective in which the denoiser \(D\) is trained to estimate the clean video \(\mathbf{x}_{\text{clean}}\) from the noisy input \(\mathbf{x}_{\text{noise}}\). Following EDM, we use a log-normal distribution for \(\sigma\) i.e., \(\ln(p(\sigma))=\mathcal{N}(P_{\text{mean}},P_{\text{std}}^{2})\) with \(P_{\text{mean}}=-1.2\) and \(P_{\text{std}}=1.2\). To train the denoising model, EDM uses preconditioning terms in its objective function to properly scale the inputs and output of the denoiser model \(D\). More specifically, the denoising model \(D\) is written as \[D(\mathbf{x};\mathbf{e},\sigma)\!:=\!\Big{(}\frac{\sigma_{\text{data}}}{ \sigma^{*}}\Big{)}^{2}\mathbf{x}+\frac{\sigma\cdot\sigma_{\text{data}}}{ \sigma^{*}}F_{\theta}\Big{(}\frac{\mathbf{x}}{\sigma^{*}}\,;\mathbf{e},\!\frac {\ln(\sigma)}{4}\Big{)}\] Here, \(F_{\theta}\) is a neural network with parameters \(\theta\) and \(\sigma^{*}=\sqrt{\sigma^{2}+\sigma_{\text{data}}^{2}}\). We use \(\sigma_{\text{data}}=0.5\). **Sampling** Once the denoising model is trained, sampling can be performed by solving the following ODE [20] \[\frac{d\mathbf{x}}{d\sigma}=-\sigma\nabla_{\mathbf{x}}\log p(\mathbf{x}| \mathbf{e},\sigma)=\frac{\mathbf{x}-D(\mathbf{x};\mathbf{e},\sigma)}{\sigma} \tag{2}\] for \(\sigma\) flowing backwards from \(\sigma=\sigma_{\text{max}}\) to \(\sigma=0\). The initial value for \(\mathbf{x}\) is obtained by sampling from the prior distribution \(\mathbf{x}\sim\mathcal{N}(\mathbf{0},\sigma_{\text{max}}^{2}\mathbf{I})\). Over the recent years, several samplers have been proposed for sampling from the trained diffusion models[62, 43, 24, 25, 14]. In this paper, we use DEIS [62] and its stochastic variant [20] for synthesizing samples from our model. ## 4 Method Training text-to-video models is much more challenging than training text-to-image diffusion models due to practical difficulties in collecting billion-scale video datasets and securing enough computational resources. Additionally, generating videos is much more challenging since individual frames need to be both photorealistic and temporally coherent. Prior works leverage large-scale image datasets to mitigate these difficulties by either joint training on the image datasets [54, 16, 13] or finetuning a text-to-image model on the video datasets [17, 41]. Here, we are interested in finetuning text-to-image diffusion models jointly on image and video datasets. 
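Before turning to the noise prior itself, the EDM-style objective and preconditioning from Section 3 can be summarized in a minimal PyTorch-flavored sketch. It follows Eq. (1), the preconditioning formula, and the stated hyperparameters, but the function and variable names are ours, and `F` is a stand-in for the actual U-Net; note that the noise `eps` here is sampled i.i.d., which is exactly the choice the next paragraphs replace with a correlated video noise prior.

```python
import torch

sigma_data = 0.5  # as stated in the paper

def precondition(F, x_noisy, e, sigma):
    """EDM-preconditioned denoiser D(x; e, sigma) built around a raw network F.

    F is assumed to take (scaled input, text embedding, noise conditioning)
    and return a tensor with the same shape as x_noisy; sigma is a scalar tensor.
    """
    sigma_star = torch.sqrt(sigma ** 2 + sigma_data ** 2)
    skip = (sigma_data / sigma_star) ** 2
    out_scale = sigma * sigma_data / sigma_star
    return skip * x_noisy + out_scale * F(x_noisy / sigma_star, e, torch.log(sigma) / 4)

def edm_loss(F, x_clean, e, P_mean=-1.2, P_std=1.2):
    """Single-sample EDM training loss (Eq. 1): lambda(sigma) * ||D(x + sigma*eps) - x||^2."""
    sigma = torch.exp(P_mean + P_std * torch.randn(()))  # log-normal noise level
    eps = torch.randn_like(x_clean)                       # i.i.d. noise (image prior)
    x_noisy = x_clean + sigma * eps
    lam = (sigma ** 2 + sigma_data ** 2) / (sigma * sigma_data) ** 2
    denoised = precondition(F, x_noisy, e, sigma)
    return lam * ((denoised - x_clean) ** 2).mean()
```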
We postulate that naively extending the image noise prior to video diffusion is not ideal. We carefully explore the design space of noise priors and propose one that is well suited for our video finetuning task, which leads to significant performance gains.

**Correlated noise model:** An image diffusion model is trained to denoise independent noise from a perturbed image. The noise vector \(\epsilon\) in the denoising objective (1) is sampled from an i.i.d. Gaussian distribution \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). However, after training the image diffusion model and applying it to reverse real frames from a video into the noise space in a per-frame manner, we find that the noise maps corresponding to different frames are highly correlated. This is illustrated in Figure 2, where the t-SNE plot of noise maps corresponding to different video frames is shown. When the input frames come from the same video (shown in blue dots in Figure 1(a)), noise maps are clustered. The use of i.i.d. sampling (shown in orange dots in Figure 1(b)) does not capture these correlations. As a result, the video diffusion model is coerced to forget such correlation among the noise between different frames, making it difficult to preserve knowledge from the image diffusion model. Motivated by this observation, we propose to modify the noise process to preserve the correlation between different frames. To this end, we investigate two noising strategies: mixed and progressive noising.

**Mixed noise model:** Let \(\epsilon^{1},\epsilon^{2},\dots,\epsilon^{n_{s}}\) denote the noise corresponding to individual video frames, i.e., \(\epsilon^{i}\) corresponds to the \(i^{th}\) element of the noise tensor \(\epsilon\). In the mixed noise model, we generate two noise vectors \(\epsilon_{\text{shared}}\) and \(\epsilon_{\text{ind}}\). \(\epsilon_{\text{shared}}\) is a common noise vector shared among all video frames, while \(\epsilon_{\text{ind}}\) is the individual noise per frame. The linear combination of both these vectors is used as the final noise.

\[\epsilon_{\text{shared}}\sim\mathcal{N}\left(\mathbf{0},\frac{\alpha^{2}}{1+\alpha^{2}}\mathbf{I}\right),\quad\epsilon_{\text{ind}}^{i}\sim\mathcal{N}\left(\mathbf{0},\frac{1}{1+\alpha^{2}}\mathbf{I}\right),\quad\epsilon^{i}=\epsilon_{\text{shared}}+\epsilon_{\text{ind}}^{i} \tag{3}\]

**Progressive noise model:** In the progressive noise model, the noise for each frame is generated in an autoregressive fashion in which the noise at frame \(i\) is generated by perturbing the noise at frame \(i-1\). Let \(\epsilon_{\text{ind}}^{i}\) denote the independent noise generated for frame \(i\). Then, progressive noising can be formulated as

\[\epsilon^{0}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\quad\epsilon_{\text{ind}}^{i}\sim\mathcal{N}\left(\mathbf{0},\frac{1}{1+\alpha^{2}}\mathbf{I}\right),\quad\epsilon^{i}=\frac{\alpha}{\sqrt{1+\alpha^{2}}}\epsilon^{i-1}+\epsilon_{\text{ind}}^{i} \tag{4}\]

In both these models, \(\alpha\) controls how much noise is shared among different video frames. The higher the value of \(\alpha\), the more correlation exists among the noise maps corresponding to different frames. As \(\alpha\rightarrow\infty\), all frames would have the same noise, which results in generating a frozen video. On the other hand, \(\alpha=0\) corresponds to i.i.d. noise.
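A short sketch of the two samplers, written to mirror Eqs. (3) and (4), is given below; the tensor shapes and function names are our own choices. Both samplers keep every per-frame noise map marginally \(\mathcal{N}(\mathbf{0},\mathbf{I})\), so the denoiser sees the same marginal distribution as in image training.

```python
import torch

def mixed_noise(num_frames, frame_shape, alpha):
    """Mixed noise model (Eq. 3): one shared component plus per-frame noise."""
    shared = (alpha**2 / (1 + alpha**2)) ** 0.5 * torch.randn(frame_shape)
    ind = (1 / (1 + alpha**2)) ** 0.5 * torch.randn(num_frames, *frame_shape)
    return shared.unsqueeze(0) + ind  # (num_frames, *frame_shape)

def progressive_noise(num_frames, frame_shape, alpha):
    """Progressive noise model (Eq. 4): autoregressively perturb the previous
    frame's noise while keeping each marginal N(0, I)."""
    eps = torch.empty(num_frames, *frame_shape)
    prev = torch.randn(frame_shape)  # epsilon^0
    for i in range(num_frames):
        ind = (1 / (1 + alpha**2)) ** 0.5 * torch.randn(frame_shape)
        prev = alpha / (1 + alpha**2) ** 0.5 * prev + ind
        eps[i] = prev
    return eps

# alpha = 0 recovers i.i.d. noise; larger alpha increases the correlation
# between consecutive frames (alpha -> infinity gives a frozen video).
noise = progressive_noise(num_frames=16, frame_shape=(3, 64, 64), alpha=2.0)
```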
As shown in Figure 1(b), the use of progressive noise sampling (blue dots) better models the correlations between different noise maps by obtaining similar clustering patterns to the noise maps of real video frames embedded by a pre-trained text-to-image model in Figure 1(a) (blue dots). Model architectureAs visualized in Figure 3, our model consists of a cascade of four networks -- a base network and three upsampling stacks. The base network generates an output video of dimension \(16\times 64\times 64\) with a frameskip of \(5\). It generates the frames \(\{1,6,11,\dots 76\}\). The first upsampling network performs a temporal interpolation to produce a video of size \(76\times 64\times 64\). The second and the third super-resolution network performs spatial upsampling to produce the outputs of sizes \(76\times 256\times 256\) and \(76\times 1024\times 1024\). We utilize eDiff-I [1], a state-of-the-art text-to-image diffusion model, to initialize our base and spatial super-resolution models. Similar to prior works [16, 41], we adapt the image-based U-Net model for the video synthesis task by making the following changes: (1) Transforming 2D convolutions to 3D by adding a dimension \(1\) to temporal axis and (2) Adding temporal attention layers. Please refer to the supplementary material for more details. Similar to Ho [16], we jointly finetune the model on video and image datasets by concatenating videos and images in the temporal axis and applying our temporal modules only on the video part. Similarly to eDiff-I, our model uses both T5 text embeddings [33] and CLIP text embeddings [32]. We drop each of the embeddings independently at random during training, as in eDiff-I. ## 5 Experiments In this section, we evaluate our proposed strategy of training diffusion models for video synthesis on two sets of experiments. We first comprehensively analyze our proposed noise model on the small-scale UCF-101 dataset. We then scale up our experiments to the challenging large-scale text-to-video synthesis task. ### Experimental Setups We conduct ablation experiments in a small-scale unconditional video generation setting and pick the best configuration for our large-scale text-to-video generation run. DatasetsWe train our model on the UCF-101 dataset [45] for the small-scale experiments, where we follow the protocol defined in Ho [16] to generate videos of size \(16\times 64\times 64\). We use a collection of public and proprietary datasets to train our model for large-scale text-to-image pre-training and text-to-video finetuning. All data were filtered using a preset CLIP and aesthetic scores to ensure high quality. Our final image dataset contains around \(1.2\) billion text-image pairs and \(22.5\) million text-video pairs. Training detailsIn the unconditional generation experiment on the UCF-101 dataset, to do an ablation study on the model size, we design 3 models where each model has \(69\)M, \(112\)M, and \(253\)M parameters, respectively. As a comparison, the baseline Video Diffusion Model (VDM) [16] contains \(1.2\)B parameters. In the large-scale text-to-video experiment, our base and temporal interpolation models contain \(1.08\)B parameters. Our super-resolution model adapted from the efficient U-Net [38] architecture with temporal convolution layers [13, 41] contains \(313\)M parameters. Please refer to the supplementary material for more training details. 
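The frame bookkeeping of the cascade described above is easy to misread, so the following small sketch spells out the indexing we infer from the text: the base model produces 16 frames \(\{1,6,11,\dots,76\}\) with a frameskip of 5, and the temporal interpolation model fills the remaining frames to reach 76. The helper names are ours and are only meant to make the arithmetic explicit.

```python
def base_frame_indices(num_base_frames=16, frameskip=5):
    """Frame indices produced by the base model: {1, 6, 11, ..., 76}."""
    return [1 + frameskip * k for k in range(num_base_frames)]

def interpolation_targets(num_base_frames=16, frameskip=5):
    """Frames the temporal interpolation model must fill in between each
    consecutive pair of base frames (4 new frames per gap for frameskip 5)."""
    base = base_frame_indices(num_base_frames, frameskip)
    targets = []
    for a, b in zip(base, base[1:]):
        targets.extend(range(a + 1, b))  # e.g. frames 2, 3, 4, 5 between 1 and 6
    return targets

base = base_frame_indices()
print(len(base), base[0], base[-1])               # 16 1 76
print(len(base) + len(interpolation_targets()))   # 76 frames total, as in the text
```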
EvaluationFor the small-scale experiments on UCF-101 dataset, we follow the protocol defined in the prior approaches [47, 42, 16] and report the Inception Score (IS) [40] calculated by a trained C3D model [48] and Frechet Video Distance (FVD) [50] by a trained I3D model [5]. For the large-scale text-to-video experiments, we perform the zero-shot evaluation of the video generation quality on the UCF-101 and MSR-VTT datasets following Make-A-Video [41]. However, we find that the protocol provided in this paper is vague. We reach out to the authors to ensure a fair comparison and carefully discuss the evaluation process below. baseline model achieves a new state-of-the-art CLIP-FID score of \(10.21\), while using ensemble models further improves both CLIP-FID and FID scores. In Figure 4, we qualitatively visualize the synthesis capability of our approach. Our model achieves high-quality zero-shot video synthesis capability with good photorealism and temporal coherency. We also provide a qualitative comparison with Make-A-Video [41] and Imagen Video [13] in Figure 5. We observe that our model is able to produce videos with better details than both approaches, as shown in the animal videos. We also produce better-stylized videos than Imagen Video. Small-scale unconditional video synthesisWe report IS and FVD scores on UCF-101 dataset in Table 3 and compare our model with multiple unconditional video generation baselines. Note that using class labels as conditional information could lead to sizeable improvement in IS and FVD scores [8], which we do not consider as the comparison. Our method attains state-of-the-art unconditional video generation quality. Compared with previous diffusion-based unconditional generation model [16], our model is \(\sim 10\times\) smaller and has \(\sim 14\times\) less training time (\(75\) GPU-days vs. \(925\) GPU-days). ### Ablation Study We quantitatively compare several training strategies for video diffusion models. Then, we perform ablation on the correlation ratio, a key hyper-parameter in our approach. Training strategiesWe compare training from scratch, a simple finetuning baseline, finetuning with mixed noising, and progressive noising using IS, FVD, and averaged frame FID metrics in Table 4. We first find that finetuning from an image diffusion model is much more effective than training from scratch. For finetuning from the image model, the \begin{table} \begin{tabular}{l|c c c} \hline \hline & IS(\(\uparrow\)) & FVD (\(\downarrow\)) & FID (\(\downarrow\)) \\ \hline Image Diffusion (ID) & - & - & 30.05 \\ \hline Training from scratch & 28.25 & 903.37 & 124.75 \\ Finetuning from ID & 41.25 & 566.67 & 56.43 \\ \hline + Mixed Noise & 52.71 & **337.40** & **31.57** \\ + Progressive Noise & **53.52** & 339.67 & 31.88 \\ \hline \hline \end{tabular} \end{table} Table 4: Quantitative results of different strategies. Figure 3: **Model architecture. Our pipeline consists of a cascade of four networks — a base model and three upsampling models. All four models take inputs as the text embeddings obtained from the T5 encoder and the CLIP text encoder. The base model produces \(16\) video frames of spatial resolution \(64\times 64\) with a frameskip of \(5\). 
The first upsampling model performs a temporal interpolation, resulting in videos of size \(76\times 64\times 64\) while the subsequent two super-resolution models perform spatial super-resolution to produce videos of sizes \(76\times 256\times 256\) and \(76\times 1024\times 1024\).** \begin{table} \begin{tabular}{l c c} \hline \hline Method & IS (\(\uparrow\)) & FVD (\(\downarrow\)) \\ \hline TGAN [39] & 15.83\({}_{\pm 18}\) & - \\ LDVD-GAN [19] & 22.91\({}_{\pm 19}\) & - \\ VideoGPT [57] & 24.69\({}_{\pm 30}\) & - \\ MoCoGAN-HD [47] & 32.36 & 838 \\ DIGAN [60] & 29.71\({}_{\pm 53}\) & 655\({}_{\pm 22}\) \\ CCVS [23] & 24.47\({}_{\pm 13}\) & 386\({}_{\pm 15}\) \\ StyleGAN-V [42] & 23.94\({}_{\pm 73}\) & - \\ VDM [16] & 57.00\({}_{\pm 62}\) & - \\ TATS [8] & 57.63\({}_{\pm 73}\) & 430 \({}_{\pm 18}\) \\ PYCoCo (\(112\)M) & 57.93\({}_{\pm 24}\) & 332 \(\pm 13\) \\ PYCoCo (\(253\)M) & **60.01\({}_{\pm 51}\)** & **310 \(\pm 13\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Unconditional UCF-101 generation results. Our approach achieves the state-of-the-art inception score and FVD, while having considerably smaller parameter count compared to other diffusion-based approaches such as VDM (1B parameters). correlated noise model produces better video generation quality than the independent noise model. In addition, we notice that the correlated noise better preserves the image quality learned by the pretrained image model and produces a lower frame FID. This is particularly desired in large-scale text-to-video training to fulfill the goal of inheriting the knowledge from the image model missing in the video datasets. As a result, as shown in Figure 4, our model can preserve properties learned from image datasets that are not presented in our video dataset, such as the artistic styles, and generate faithful motion on them. Correlation ratioThe hyperparameter \(\alpha\) in the Equations 3 and 4 controls the correlation between the noise of different frames. A larger \(\alpha\) injects more correlation into the noise. The correlation disappears when \(\alpha\to 0\), and the mixed and progressive noise models reproduce the vanilla noise model. To find optimal \(\alpha\), we train our UCF-small model (\(69\)M parameters) using \(\alpha\in\{0,0.1,0.2,0.5,1,1,2,5,10,\infty\}\) and report FVD in Figure 6. For each \(\alpha\) value, we repeat the Figure 4: Sample generations. _The figure is best viewed with Acrobat Reader. Click the images to play video clips._ experiment 3 times and report the mean. Note that \(\alpha=0\) indicates finetuning with the independent frame noise, and \(\alpha=\infty\) indicates using identical noise maps for all the frames, which produces frozen videos during the inference time. Finetuning an image diffusion model almost consistently outperforms the training-from-scratch baseline with different \(\alpha\)s. Using \(\alpha=1\) for mixed noising and \(\alpha=2\) for progressive noising produces similar best results. When \(\alpha\) is too small, we notice a quality drop visually in the generated frames and reduced video diversity. The model has difficulty generating correct motions when \(\alpha\) is too large. Model sizeWe pick the best \(\alpha\) for the mixed and progressive noise models and compare them with the model trained from scratch on models with different numbers of parameters, \(69\)M, \(112\)M, and \(253\)M. Figure 7 shows that our mixed and progressive models outperform the baseline consistently by a large margin in terms of FVD. 
Overall, mixed and progressive noising provide similar performance. In our large-scale experiments, we choose progressive noising with \(\alpha=2\) due to its autoregressive nature.

Figure 5: Qualitative comparison with baseline approaches. The two panels on the left show the comparison of our approach with Make-A-Video [41], while those on the right show the comparison with Imagen Video [13]. PYoCo achieves better photorealism than both approaches. _Best viewed with Acrobat Reader: click the images to play the video clips._

Figure 6: **Ablation on the hyper-parameter \(\alpha\).** Finetuning with a temporally correlated prior improves over training from scratch. Using a too large or too small \(\alpha\) leads to inferior results; \(\alpha=1\) and \(\alpha=2\) work best for mixed and progressive noising, respectively.

Figure 7: **Ablation on model size.** Larger models consistently improve the performance of both finetuning and training from scratch, and finetuning from the image model consistently outperforms training from scratch.

## 6 Conclusion

We proposed a new, efficient way of training text-to-video generation models. Observing that the noise maps that generate the frames of a video are clustered together, we studied mixed and progressive noise priors that are well suited for sequential video frame generation. We applied the progressive noise prior when finetuning a state-of-the-art diffusion-based text-to-image model, obtaining a state-of-the-art large-scale text-to-video model. The high quality of the generated videos and the state-of-the-art Inception and FID scores demonstrate the strength of our approach.

**Acknowledgment.** We thank Qinsheng Zhang, Tsung-Yi Lin, and Zekun Hao for their helpful discussions. This work is partly supported by NSF grants No. IIS-1910132 and IIS-2213335.

## Appendix A Experimental Setups

In this section, we provide additional details of our experiments in terms of implementation, datasets, evaluation, models, and training.

### Implementation details

Similar to prior works [16, 41], we adapt the image-based U-Net model to the video synthesis task by making the following changes: (1) We transform the 2D convolution layers into 3D ones by adding a dimension of \(1\) along the temporal axis. For instance, we convert a \(3\times 3\) convolution layer into a \(1\times 3\times 3\) layer. (2) We replace the attention layers in the base and temporal interpolation models with a cascade of spatial and temporal attention layers. The spatial attention layers are reused from eDiff-I [1], while the temporal attention layers are initialized randomly, with a zero-initialized projection layer at the end. We apply temporal attention to the activation maps obtained by moving the spatial dimensions of the feature tensor to the batch axis. (3) For the temporal interpolation model, we concatenate the input noise along the channel axis with \(16\) conditioning frames obtained by infilling the \(4\) real frames with zero frames. (4) We add a \(3\times 1\times 1\) convolution layer at the end of each efficient block of the super-resolution model [38]. (5) For all the models, we apply spatial attention to the reshaped activation maps obtained by moving the temporal dimension of the feature tensor to the batch axis. We apply the same operation to the feature maps fed into the GroupNorm [56] layers to better mimic the statistics learned by the image model. We use cross-attention layers (between text and videos) only in the spatial attention blocks, as adding them to the temporal attention blocks resulted in significant memory overhead.
(6) We utilize eDiff-I [1] to initialize our base and spatial super-resolution models. We use a model architecture similar to the base model for our temporal interpolation model, as the two share the same function of hallucinating unseen frames. After finetuning the base model for some time, we use its checkpoint to initialize the temporal interpolation model. (7) Similar to Ho et al. [16], we jointly finetune the model on video and image datasets by concatenating videos and images along the temporal axis and applying our temporal modules only to the video part. (8) Similar to eDiff-I, our model uses both T5 text embeddings [33] and CLIP text embeddings [32]. During training, we drop each of the embeddings independently at random, as in eDiff-I. (A short sketch of the convolution inflation in (1) and the attention reshaping in (2) and (5) is provided at the end of this appendix.)

### Dataset and evaluation details

**Caption templates for categorical video datasets.** Given the name of the category [_class_], such as _kayaking_ or _yoga_, we consider the following templates to create video captions (a small helper illustrating this construction is sketched after the prompt list below):

* a man is [_class_].
* a woman is [_class_].
* a kid is [_class_].
* a group of people are [_class_].
* doing [_class_].
* a man is doing [_class_].
* a woman is doing [_class_].
* a kid is doing [_class_].
* a group of people are doing [_class_].
* [_class_].

**Prompts used for UCF-101 evaluation.** In our initial explorations, we found that the original class labels in the UCF-101 dataset often cannot describe the video content correctly. For example, the class _jump rope_ is more likely to describe an object than a complete video. Therefore, we write one sentence for each class as the caption for video generation. We list below the prompts used for evaluating text-to-video generation models on the standard UCF-101 benchmark.

_applying eye makeup, applying lipstick, archery, baby crawling, gymnast performing on a balance beam, band marching, baseball pitcher throwing baseball, a basketball player shooting basketball, dunking basketball in a basketball match, bench press, biking, billiards, blow dry hair, blowing candles, body weight squats, a person bowling on bowling alley, boxing punching bag, boxing speed bag, swimmer doing breast stroke, brushing teeth, weightlifting with barbell, clean and jerk, cliff diving, bowling in cricket gameplay, cutting in kitchen, diver diving into a swimming pool from a springboard, drumming, two fencers have fencing match indoors, field hockey match, gymnast performing on the floor, group of people playing frisbee on the playground, swimmer doing front crawl, golfer swings and strikes the ball, haircutting, a person hammering a nail, an athlete performing the hammer throw, an athlete doing handstand push up, an athlete doing handstand walking, massagist doing head massage to man, an athlete doing high jump, horse race, group of people racing horse, person riding a horse, a woman doing hula hoop, man and woman dancing on the ice, ice dancing, athlete practicing javelin throw, a person juggling with balls, a young person doing jumping jacks, a person skipping with jump rope, a person kayaking in rapid water, knitting, an athlete doing long jump, a person doing lunges with barbell, military parade, mixing in the kitchen, mopping floor, a person practicing nunchuck, gymnast performing on parallel bars, a person tossing pizza dough, a musician playing the cello in a room, a musician playing the daf, a musician playing the dhol, a musician playing the flute, a musician playing the guitar, a musician playing the piano, a musician playing the sitar, a musician playing the tabla, a musician playing the violin, an athlete jumps over the bar, gymnast performing pommel horse exercise, a person doing pull ups on bar, boxing match, push ups, group of people rafting on fast moving river, rock climbing indoor, rope climbing, several people rowing a boat on the river, couple salsa dancing, young man shaving beard with razor, an athlete practicing shot put throw, a teenager skateboarding, skier skiing down, jet ski on the water, sky diving, soccer player juggling football, soccer player doing penalty kick in a soccer match, gymnast performing on still rings, sumo wrestling, surfing, kids swing at the park, a person playing table tennis, a person doing TaiChi, a person playing tennis, an athlete practicing discus throw, trampoline jumping, typing on computer keyboard, a gymnast performing on the uneven bars, people playing volleyball, walking with dog, a person standing doing pushups on the wall, a person writing on the blackboard, a kid playing Yo-Yo_
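As a small illustration of how the caption templates listed earlier in this subsection can be applied to a categorical video dataset, the helper below samples one template per clip and fills in the class name. The sampling strategy (uniform at random) and the function name are our own illustration; the paper does not specify how templates are assigned to clips.

```python
import random

# Templates from the list above; "{}" is filled with the class name.
CAPTION_TEMPLATES = [
    "a man is {}.",
    "a woman is {}.",
    "a kid is {}.",
    "a group of people are {}.",
    "doing {}.",
    "a man is doing {}.",
    "a woman is doing {}.",
    "a kid is doing {}.",
    "a group of people are doing {}.",
    "{}.",
]

def make_caption(class_name: str) -> str:
    """Build a caption for a clip from a categorical video dataset,
    e.g. make_caption("kayaking") may return "a woman is doing kayaking." """
    return random.choice(CAPTION_TEMPLATES).format(class_name)
```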
### Training details

**UCF-101 experiments.** For the image pretraining phase on UCF-101 frames, we use an Adam optimizer with a base learning rate of \(2e-4\). For the video finetuning phase, we use an Adam optimizer with a base learning rate of \(1e-4\). We use a linear warm-up of \(5{,}000\) steps for both phases. For sampling, we use the stochastic DEIS sampler [62, 20] with 3Kutta, order \(6\), and \(25\) steps.

**Large-scale experiments.** The hyper-parameters we use for the large-scale text-to-video experiments are provided in the tables below. The temporal interpolation model uses an architecture similar to that of the base model (see the implementation details above).

**Text-to-video base model (1.08B parameters).**

| Hyper-parameter | Value |
| --- | --- |
| Channel multiplier | [1, 2, 4, 4] |
| Dropout | 0 |
| Number of channels | 256 |
| Number of residual blocks | 3 |
| Spatial self-attention resolutions | [32, 16, 8] |
| Spatial cross-attention resolutions | [32, 16, 8] |
| Temporal attention resolutions | [32, 16, 8] |
| Number of channels in attention heads | 64 |
| Use scale-shift norm | True |

**Spatial super-resolution 256 model (300M parameters).**

| Hyper-parameter | Value |
| --- | --- |
| Channel multiplier | [1, 2, 4, 8] |
| Block multiplier | [1, 2, 4, 4] |
| Dropout | 0 |
| Number of channels | 128 |
| Number of residual blocks | 2 |
| Spatial self-attention resolutions | [32] |
| Spatial cross-attention resolutions | [32] |
| Number of channels in attention heads | 64 |
| Use scale-shift norm | True |

**Spatial super-resolution 1024 model (170M parameters).**

| Hyper-parameter | Value |
| --- | --- |
| Patch size | 256 × 256 |
| Channel multiplier | [1, 2, 4, 4] |
| Block multiplier | [1, 2, 4, 4] |
| Number of channels | 128 |
| Number of residual blocks | 2 |
| Spatial cross-attention resolutions | [32] |
| Use scale-shift norm | True |
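For completeness, the following is a minimal PyTorch-style sketch of two of the implementation details listed above: inflating a pretrained 2D convolution into a \(1\times 3\times 3\) 3D convolution (detail 1) and applying an image-space operation, such as spatial attention or GroupNorm, to a video tensor by moving the temporal dimension to the batch axis (details 2 and 5; the temporal counterpart folds the spatial dimensions into the batch axis analogously). The helper names are ours, and the actual implementation may differ in details such as padding modes and grouped convolutions.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d) -> nn.Conv3d:
    """Turn a pretrained k x k Conv2d into a 1 x k x k Conv3d so the video
    model reproduces the image model exactly on each frame at initialization.
    Assumes tuple padding and no groups/dilation; extend as needed."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(1, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(0, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        # Add a temporal kernel dimension of size 1 to the pretrained weights.
        conv3d.weight.copy_(conv2d.weight.unsqueeze(2))
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d

def spatial_op_over_frames(module: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Apply an image-space module (spatial attention, GroupNorm, ...) to a
    video tensor of shape (B, C, T, H, W) by folding T into the batch axis."""
    b, c, t, h, w = x.shape
    x = x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
    x = module(x)
    x = x.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
    return x
```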